Description
Hierarchical cluster analysis on a set of dissimilarities and methods for analyzing it.
Usage

hclustglad(d, method = "complete", members = NULL)
Arguments

d
    a dissimilarity structure as produced by dist.

method
    the agglomeration method to be used. This should be (an unambiguous abbreviation of) one of "ward", "single", "complete", "average", "mcquitty", "median" or "centroid".

members
    NULL or a vector with length the size of d. See the Details section.
Details

This function performs a hierarchical cluster analysis using a set of dissimilarities for the n objects being clustered. Initially, each object is assigned to its own cluster and then the algorithm proceeds iteratively, at each stage joining the two most similar clusters, continuing until there is just a single cluster. At each stage distances between clusters are recomputed by the Lance–Williams dissimilarity update formula according to the particular clustering method being used.
A number of different clustering methods are provided. Ward's minimum variance method aims at finding compact, spherical clusters. The complete linkage method finds similar clusters. The single linkage method (which is closely related to the minimal spanning tree) adopts a ‘friends of friends’ clustering strategy. The other methods can be regarded as aiming for clusters with characteristics somewhere between the single and complete link methods.
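To make the update rule concrete, here is a minimal sketch of a single Lance–Williams step for the complete linkage case; lw_complete is a hypothetical helper written for illustration and is not part of hclustglad:

## Lance-Williams update for complete linkage: alpha_i = alpha_j = 1/2,
## beta = 0, gamma = 1/2, which reduces to max(d_ki, d_kj)
lw_complete <- function(d_ki, d_kj) {
  0.5 * d_ki + 0.5 * d_kj + 0.5 * abs(d_ki - d_kj)
}
lw_complete(3, 5)  ## 5, the same as max(3, 5)

Other methods correspond to other choices of the alpha, beta and gamma coefficients in the same update formula.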
If members != NULL, then d is taken to be a dissimilarity matrix between clusters instead of dissimilarities between singletons, and members gives the number of observations per cluster. This way the hierarchical cluster algorithm can be “started in the middle of the dendrogram”, e.g., in order to reconstruct the part of the tree above a cut (see examples). Dissimilarities between clusters can be efficiently computed (i.e., without hclustglad itself) only for a limited number of distance/linkage combinations, the simplest one being squared Euclidean distance and centroid linkage. In this case the dissimilarities between the clusters are the squared Euclidean distances between cluster means.
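As a small illustration of the last point (the cluster means below are chosen arbitrarily):

## For squared Euclidean distance and centroid linkage, the dissimilarity
## between two clusters is the squared Euclidean distance between their means
m1 <- c(1, 2)        ## hypothetical mean of cluster 1
m2 <- c(4, 6)        ## hypothetical mean of cluster 2
sum((m1 - m2)^2)     ## 25, the value hclustglad expects in d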
In hierarchical cluster displays, a decision is needed at each merge to specify which subtree should go on the left and which on the right. Since, for n observations, there are n-1 merges, there are 2^(n-1) possible orderings for the leaves in a cluster tree, or dendrogram. The algorithm used in hclustglad orders each subtree so that the tighter cluster is on the left (the last, i.e. most recent, merge of the left subtree is at a lower value than the last merge of the right subtree). Single observations are the tightest clusters possible, and merges involving two observations place them in order by their observation sequence number.
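A small sketch of this ordering rule on three points, assuming the GLAD package (which provides hclustglad) is attached:

x <- c(a = 1, b = 2, c = 10)              ## two tight points and one outlier
hc <- hclustglad(dist(x), method = "complete")
hc$merge    ## step 1 joins the tight pair a, b; step 2 adds the singleton c
hc$order    ## leaves permuted so the tighter subtree is drawn on the left
plot(hc)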
Value

An object of class hclust which describes the tree produced by the clustering process. The object is a list with components:
merge
    an n-1 by 2 matrix. Row i of merge describes the merging of clusters at step i of the clustering. If an element j in the row is negative, then observation -j was merged at this stage. If j is positive, then the merge was with the cluster formed at the (earlier) stage j of the algorithm.

height
    a set of n-1 non-decreasing real values. The clustering height: that is, the value of the criterion associated with the clustering method for the particular agglomeration.

order
    a vector giving the permutation of the original observations suitable for plotting, in the sense that a cluster plot using this ordering and matrix merge will not have crossings of the branches.

labels
    labels for each of the objects being clustered.

call
    the call which produced the result.

method
    the cluster method that has been used.

dist.method
    the distance method that has been used to create d.
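For instance, one way to inspect these components, assuming the GLAD package is attached:

library(GLAD)
data(USArrests)
hc <- hclustglad(dist(USArrests), method = "average")
head(hc$merge)     ## negative entries refer to singleton observations
head(hc$height)    ## non-decreasing agglomeration criteria
head(hc$order)     ## start of the leaf permutation used by plot()
hc$method          ## "average"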
Author(s)

The hclustglad function is based on an algorithm contributed to STATLIB by F. Murtagh.
References

Everitt, B. (1974). Cluster Analysis. London: Heinemann Educ. Books.

Hartigan, J. A. (1975). Clustering Algorithms. New York: Wiley.

Sneath, P. H. A. and Sokal, R. R. (1973). Numerical Taxonomy. San Francisco: Freeman.

Anderberg, M. R. (1973). Cluster Analysis for Applications. New York: Academic Press.

Gordon, A. D. (1999). Classification. Second Edition. London: Chapman and Hall / CRC.

Murtagh, F. (1985). Multidimensional Clustering Algorithms. COMPSTAT Lectures 4. Wuerzburg: Physica-Verlag (for algorithmic details of the algorithms used).
Examples

data(USArrests)
hc <- hclustglad(dist(USArrests), "ave")
plot(hc)
plot(hc, hang = -1)

## Do the same with centroid clustering and squared Euclidean distance,
## cut the tree into ten clusters and reconstruct the upper part of the
## tree from the cluster centers.
hc <- hclustglad(dist(USArrests)^2, "cen")
memb <- cutree(hc, k = 10)
cent <- NULL
for (k in 1:10) {
  cent <- rbind(cent, colMeans(USArrests[memb == k, , drop = FALSE]))
}
hc1 <- hclustglad(dist(cent)^2, method = "cen", members = table(memb))
opar <- par(mfrow = c(1, 2))
plot(hc, labels = FALSE, hang = -1, main = "Original Tree")
plot(hc1, labels = FALSE, hang = -1, main = "Re-start from 10 clusters")
par(opar)
######################################################################################
Have fun with GLAD
For smoothing it is possible to use either the AWS algorithm (Polzehl and Spokoiny, 2002) or the HaarSeg algorithm (Ben-Yaacov and Eldar, Bioinformatics, 2008).

If you use the package with AWS, please cite:
Hupe et al. (Bioinformatics, 2004) and Polzehl and Spokoiny (2002).

If you use the package with HaarSeg, please cite:
Hupe et al. (Bioinformatics, 2004) and Ben-Yaacov and Eldar (Bioinformatics, 2008).

For fast computation it is recommended to use the daglad function with smoothfunc=haarseg.
######################################################################################
New options are available in daglad: see help for details.
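As a minimal sketch of that recommendation, assuming the snijders example data shipped with GLAD and leaving the remaining daglad arguments at their defaults:

library(GLAD)
data(snijders)                        ## example aCGH profiles from GLAD
profileCGH <- as.profileCGH(gm13330)  ## one Snijders cell line profile
res <- daglad(profileCGH, smoothfunc = "haarseg")  ## HaarSeg-based smoothing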