clustering
Hierarchical clustering can be represented as a dendrogram.
- Top-down (divisive): start with all points in a single cluster, then split level by level.
- Bottom-up (agglomerative): start with each point as its own cluster, then merge level by level.
Two distance functions are needed to quantify "similarity":
1 metric: measures point-to-point similarity, e.g. an Lp-norm or the angle between high-dimensional vectors.
2 linkage: measures cluster-to-cluster similarity:
  2.1) complete linkage: max{d(x, y): x in A, y in B}
  2.2) single linkage: min{d(x, y): x in A, y in B}
  2.3) average linkage: Σ d(x, y) / (|A| * |B|), the mean of all pairwise distances between the two clusters
The following I do not yet fully understand:
- The sum of all intra-cluster variance.
- The increase in variance for the cluster being merged (Ward's criterion).
- The probability that candidate clusters spawn from the same distribution function (V-linkage).
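The three linkage rules in 2.1–2.3 can be written out directly. A minimal sketch (function names are mine, using Euclidean distance as the metric):

```python
from itertools import product

def euclidean(x, y):
    # the point-to-point metric (criterion 1 above)
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def complete_linkage(A, B, d=euclidean):
    # 2.1) max distance over all cross-cluster pairs
    return max(d(x, y) for x, y in product(A, B))

def single_linkage(A, B, d=euclidean):
    # 2.2) min distance over all cross-cluster pairs
    return min(d(x, y) for x, y in product(A, B))

def average_linkage(A, B, d=euclidean):
    # 2.3) mean distance over all |A|*|B| cross-cluster pairs
    return sum(d(x, y) for x, y in product(A, B)) / (len(A) * len(B))

A = [(0.0, 0.0), (0.0, 1.0)]
B = [(3.0, 0.0), (4.0, 0.0)]
print(single_linkage(A, B))    # 3.0
print(complete_linkage(A, B))  # sqrt(17) ≈ 4.123
```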
Each level of the tree gives one clustering result and the corresponding number of clusters.
http://en.wikipedia.org/wiki/Hierarchical_clustering
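The bottom-up variant is just a merge loop: start with singletons and repeatedly merge the closest pair of clusters under the chosen linkage; each iteration produces one level of the dendrogram. A minimal sketch with single linkage (all names are mine):

```python
def single_link_levels(points):
    """Agglomerative (bottom-up) clustering with single linkage.
    Returns one clustering per level; level k has len(points) - k clusters."""
    def dist(x, y):
        return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

    def linkage(A, B):  # single linkage: closest cross-cluster pair
        return min(dist(x, y) for x in A for y in B)

    clusters = [[p] for p in points]        # level 0: every point alone
    levels = [[c[:] for c in clusters]]
    while len(clusters) > 1:
        # find the closest pair of clusters under the linkage
        i, j = min(((i, j) for i in range(len(clusters))
                           for j in range(i + 1, len(clusters))),
                   key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)      # merge j into i
        levels.append([c[:] for c in clusters])
    return levels

levels = single_link_levels([(0, 0), (0, 1), (5, 5), (5, 6)])
print([len(level) for level in levels])  # [4, 3, 2, 1]
```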
Partition-based clustering
k-means: its strengths are simplicity and speed, so it can handle large datasets.
Its drawbacks: different runs may give different results, depending on the k randomly chosen initial centroids; it minimizes within-cluster variance but does not guarantee the global minimum; and it requires that the mean be definable and meaningful (centroids are computed as means). [When the mean is not meaningful, k-medoids can be used instead; it picks an actual data point, the medoid, as the cluster representative.]
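A minimal sketch of the standard k-means loop (Lloyd's algorithm), showing both the simple assign/update structure and the dependence on the random initial centroids (seeded here for reproducibility; all names are mine):

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate assignment and centroid (mean) update."""
    rng = random.Random(seed)            # results depend on this random init
    centroids = rng.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            groups[i].append(p)
        # update step: centroid = mean of its group (requires a meaningful mean)
        new = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centroids[i]
               for i, g in enumerate(groups)]
        if new == centroids:             # converged to a local (not global) optimum
            break
        centroids = new
    return centroids, groups

pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centroids, groups = kmeans(pts, 2)
```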
Fuzzy c-means: a point can belong to several clusters probabilistically (with degrees of membership).
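A sketch of the fuzzy c-means update rules: each point gets a membership degree in every cluster, and centers are membership-weighted means. Fuzzifier m = 2 and the deterministic initialization are my assumptions; all names are mine.

```python
def fuzzy_cmeans(points, c, m=2.0, iters=100):
    """Fuzzy c-means: soft memberships u[j][i] in [0, 1], summing to 1 per point j."""
    # deterministic init for the sketch: first c points as centers (assumption)
    centers = [list(p) for p in points[:c]]
    u = []
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        u = []
        for p in points:
            d = [max(1e-12, sum((a - b) ** 2 for a, b in zip(p, ctr)) ** 0.5)
                 for ctr in centers]
            u.append([1.0 / sum((d[i] / d[k]) ** (2 / (m - 1)) for k in range(c))
                      for i in range(c)])
        # center update: weighted mean of all points, with weights u_ij^m
        centers = []
        for i in range(c):
            w = [u[j][i] ** m for j in range(len(points))]
            centers.append([sum(wj * p[dim] for wj, p in zip(w, points)) / sum(w)
                            for dim in range(len(points[0]))])
    return centers, u

points = [(0, 0), (10, 10), (0, 1), (10, 11)]
centers, u = fuzzy_cmeans(points, 2)
```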
QT clustering (quality threshold), algorithm outline:
- The user chooses a maximum diameter for clusters.
- Build a candidate cluster for each point by iteratively including the point that is closest to the group, until the diameter of the cluster surpasses the threshold.
- Save the candidate cluster with the most points as the first true cluster, and remove all its points from further consideration. (Open question in the source: what happens when more than one candidate ties for the most points? In practice the tie is usually broken arbitrarily, e.g. first found.)
- Recurse with the reduced set of points.
The distance between a point and a group of points is computed using complete linkage, i.e. as the maximum distance from the point to any member of the group (see the "Agglomerative hierarchical clustering" section about distance between clusters).
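The steps above can be sketched as follows, using complete linkage for the point-to-group distance as the text specifies (all names are mine; duplicate points are assumed absent, and ties for the largest candidate are broken by first found):

```python
def qt_cluster(points, max_diameter):
    """Quality-threshold clustering per the steps above."""
    def dist(x, y):
        return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

    def complete(p, group):  # point-to-group distance: max over members
        return max(dist(p, q) for q in group)

    clusters = []
    points = list(points)
    while points:
        # build a candidate cluster around every remaining point
        candidates = []
        for seed in points:
            cand = [seed]
            rest = [p for p in points if p != seed]  # assumes no duplicate points
            while rest:
                nearest = min(rest, key=lambda p: complete(p, cand))
                if complete(nearest, cand) > max_diameter:
                    break
                cand.append(nearest)
                rest.remove(nearest)
            candidates.append(cand)
        # keep the largest candidate, drop its points, recurse on the rest
        best = max(candidates, key=len)
        clusters.append(best)
        points = [p for p in points if p not in best]
    return clusters

print(qt_cluster([(0, 0), (0, 1), (1, 0), (9, 9), (9, 10)], max_diameter=2))
# → [[(0, 0), (0, 1), (1, 0)], [(9, 9), (9, 10)]]
```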
spectral clustering:
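The source leaves this section empty. As a brief sketch: spectral clustering builds a similarity graph over the points, forms the graph Laplacian L = D − W, and clusters using its bottom eigenvectors. For two clusters, the sign of the eigenvector for the second-smallest eigenvalue (the Fiedler vector) already gives a partition. A numpy-based sketch; the Gaussian-kernel affinity and the sigma value are my choices:

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """2-way spectral clustering via the sign of the Fiedler vector."""
    X = np.asarray(X, dtype=float)
    # similarity (affinity) matrix with a Gaussian kernel
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # unnormalized graph Laplacian L = D - W
    L = np.diag(W.sum(axis=1)) - W
    # eigenvectors come back sorted by ascending eigenvalue;
    # column 1 is the Fiedler vector
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)

labels = spectral_bipartition([(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)])
```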