Clusters produced by hierarchical clustering often closely resemble K-means clusters; in fact, there are situations where the outcome is exactly the same as K-means. The procedure itself, however, is quite different. There are two types: Agglomerative and Divisive. Agglomerative follows a bottom-up strategy, merging points into ever-larger clusters, while Divisive works top-down, splitting one big cluster apart. Today, my primary focus was on the Agglomerative approach.
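To try the bottom-up idea out, here is a minimal sketch using scikit-learn's `AgglomerativeClustering` on a small made-up 2-D dataset (the points and the choice of Ward linkage are my own illustrative assumptions, not something fixed by the method):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Two obvious groups of toy points: three near (1, 1) and three near (5, 5).
X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
              [5.0, 5.0], [5.1, 4.9], [4.8, 5.2]])

# Bottom-up: every point starts as its own cluster, and the two closest
# clusters are merged repeatedly until only n_clusters remain.
model = AgglomerativeClustering(n_clusters=2, linkage="ward")
labels = model.fit_predict(X)
print(labels)  # the first three points share one label, the last three the other
```

With `n_clusters=2` the merging simply stops once two clusters are left, which on this toy data recovers the two visible groups.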
I became familiar with dendrograms, where the horizontal axis lists the data points and the vertical axis shows the Euclidean distance at which two clusters are merged. The taller the line, the more dissimilar the clusters it joins. To pick a dissimilarity threshold, we draw a horizontal cut across the dendrogram: the number of vertical lines the cut crosses is the number of clusters we get, and the clusters formed below the cut are the ones we keep. A common heuristic is to place the threshold across the largest vertical distance you can travel without touching any of the horizontal merge lines.
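That "largest vertical gap" heuristic can be sketched directly from the linkage matrix that a dendrogram is drawn from, using scipy (the toy data, Ward linkage, and the gap-based cut are illustrative assumptions on my part):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Same style of toy data: two well-separated groups.
X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
              [5.0, 5.0], [5.1, 4.9], [4.8, 5.2]])

# Each row of Z records one merge: (cluster_i, cluster_j, distance, new size).
Z = linkage(X, method="ward")

# The merge distances are the heights of the horizontal lines in the dendrogram.
heights = Z[:, 2]

# Find the largest gap between consecutive merge heights and cut in its middle,
# i.e. the longest vertical stretch that touches no horizontal line.
gaps = np.diff(heights)
threshold = heights[gaps.argmax()] + gaps.max() / 2

# Cut the tree at that distance; points merged below the cut share a label.
labels = fcluster(Z, t=threshold, criterion="distance")
print(len(set(labels)))  # number of clusters below the threshold
```

Here the final merge (joining the two groups) sits far above all the earlier merges, so the biggest gap lies just below it and the cut yields two clusters, matching what the dendrogram shows visually.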