- The Unsupervised Learning Workshop
- Aaron Jones, Christopher Kruger, Benjamin Johnston
Summary
In this chapter, we discussed how hierarchical clustering works and where it is best employed. In particular, we discussed how clusters can be chosen subjectively by evaluating a dendrogram plot. This is a major advantage over k-means clustering when you have no prior idea of what you are looking for in the data. We also discussed two key choices that drive the success of hierarchical clustering: the agglomerative versus divisive approach and the linkage criterion. Agglomerative clustering takes a bottom-up approach, recursively grouping nearby data points together until everything is merged into one large cluster. Divisive clustering takes a top-down approach, starting with one large cluster and recursively splitting it until each data point falls into its own cluster. Divisive clustering has the potential to be more accurate because it has a complete view of the data from the start; however, it adds a layer of complexity that can decrease stability and increase runtime.
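As a quick illustration of the bottom-up approach and the dendrogram-based cluster selection described above, the following sketch builds an agglomerative merge tree with SciPy on a small synthetic dataset. The data, variable names, and the choice of Ward linkage are illustrative assumptions for this sketch, not taken from the chapter's exercises.

```python
# A minimal sketch of agglomerative clustering plus a dendrogram,
# using SciPy on small synthetic data (illustrative only).
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

# Synthetic 2D data: three loose groups (assumed for demonstration)
rng = np.random.default_rng(42)
data = np.vstack([
    rng.normal((0, 0), 0.5, (20, 2)),
    rng.normal((5, 5), 0.5, (20, 2)),
    rng.normal((0, 5), 0.5, (20, 2)),
])

# Bottom-up merge tree: each point starts as its own cluster and
# nearby clusters are merged recursively (Ward linkage here)
merge_tree = linkage(data, method='ward')

# Plot the dendrogram to choose a cut height subjectively
dendrogram(merge_tree)
plt.title('Dendrogram (Ward linkage)')
plt.xlabel('Data point index')
plt.ylabel('Merge distance')
plt.show()

# Once a cut looks reasonable, extract flat cluster labels
labels = fcluster(merge_tree, t=3, criterion='maxclust')
print(labels)
```

Inspecting the dendrogram before cutting it is what gives hierarchical clustering its flexibility: the number of clusters is a decision you make after seeing the merge structure, rather than an input you must fix up front as in k-means.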
The linkage criterion addresses how distance is calculated between candidate clusters. We explored how centroids can make an appearance again beyond k-means clustering, as well as the single and complete linkage criteria. Single linkage measures the distance between two clusters as the distance between their closest points, while complete linkage uses the distance between their most distant points. With the knowledge you have gained in this chapter, you are now able to evaluate how well both k-means and hierarchical clustering fit the challenge you are working on.
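To see how the linkage criterion changes the result, the short sketch below runs single, complete, and centroid linkage over the same illustrative points and prints the resulting cluster sizes. Again, the data and parameters are assumptions made for demonstration only.

```python
# A brief sketch contrasting linkage criteria on the same synthetic
# points (illustrative data, not from the chapter's exercises).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal((0, 0), 0.4, (15, 2)),
    rng.normal((4, 4), 0.4, (15, 2)),
])

for method in ('single', 'complete', 'centroid'):
    # single: closest pair between clusters; complete: farthest pair;
    # centroid: distance between cluster centroids
    tree = linkage(points, method=method)
    labels = fcluster(tree, t=2, criterion='maxclust')
    print(f'{method:9s} cluster sizes: {np.bincount(labels)[1:]}')
```

On well-separated blobs like these, the criteria tend to agree; the differences show up on elongated or chain-shaped clusters, where single linkage happily follows the chain while complete linkage tends to break it apart.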
While hierarchical clustering can result in better performance than k-means due to its increased complexity, remember that more complexity is not always good. Your duty as a practitioner of unsupervised learning is to explore all the options and identify the solution that is both resource-efficient and performant. In the next chapter, we will cover a clustering approach that serves us best when it comes to highly complex and noisy data: Density-Based Spatial Clustering of Applications with Noise (DBSCAN).