HartiganShapes: Hartigan-Wong k-means for 3D shapes

View source: R/HartiganShapes.R


Hartigan-Wong k-means for 3D shapes


The basic foundation of k-means is that the sample mean is the value that minimizes the sum of squared Euclidean distances from each point to the centroid of the cluster to which it belongs. Two fundamental concepts of statistical shape analysis are the Procrustes mean and the Procrustes distance. Therefore, by substituting the Procrustes mean and the Procrustes distance for the sample mean and the Euclidean distance, k-means can be used in the shape analysis context.
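For intuition, the (partial) Procrustes distance between two landmark configurations can be sketched in base R as follows. This is a minimal illustration that ignores the reflection correction; proc_dist is a hypothetical helper name, not part of the package.

```r
# Partial Procrustes distance between two k x 3 landmark configurations.
# Both are centred and scaled to unit centroid size, then Y is rotated
# onto X with the SVD-based orthogonal solution.
proc_dist <- function(X, Y) {
  X <- scale(X, scale = FALSE); X <- X / sqrt(sum(X^2))
  Y <- scale(Y, scale = FALSE); Y <- Y / sqrt(sum(Y^2))
  s <- svd(t(Y) %*% X)        # optimal orthogonal map of Y onto X
  R <- s$u %*% t(s$v)         # (may include a reflection)
  sqrt(sum((X - Y %*% R)^2))  # residual Frobenius norm
}
```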

The k-means method has been proposed by several scientists in different forms. In computer science and pattern recognition the k-means algorithm is often termed the Lloyd algorithm (see Lloyd (1982)). In many texts, however, the term k-means is used for certain similar sequential clustering algorithms. Hartigan and Wong (1979) use the term k-means for an algorithm that searches for a locally optimal k-partition by moving points from one cluster to another.
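The Hartigan-Wong transfer step can be sketched for ordinary one-dimensional Euclidean data as follows. This is an illustrative sketch, not the package's Procrustes-based implementation; hartigan_step is a hypothetical helper name.

```r
# One pass of the Hartigan-Wong transfer step: move a point to another
# cluster whenever the move lowers the total within-cluster sum of
# squares, using the standard n/(n-1) and n/(n+1) transfer costs.
hartigan_step <- function(x, assign, k) {
  for (i in seq_along(x)) {
    from <- assign[i]
    n_from <- sum(assign == from)
    if (n_from == 1) next  # never empty a cluster
    m_from <- mean(x[assign == from])
    # Reduction in the objective from removing x[i] from its cluster
    loss <- n_from / (n_from - 1) * (x[i] - m_from)^2
    for (to in setdiff(seq_len(k), from)) {
      n_to <- sum(assign == to)
      m_to <- mean(x[assign == to])
      # Increase in the objective from adding x[i] to cluster `to`
      gain <- n_to / (n_to + 1) * (x[i] - m_to)^2
      if (gain < loss) {  # the transfer reduces the objective
        assign[i] <- to
        break
      }
    }
  }
  assign
}
```

Iterating this step until no point changes cluster yields the locally optimal k-partition that Hartigan and Wong describe.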

This function allows us to use the Hartigan-Wong version of k-means adapted to deal with 3D shapes. Note that in the generic name of the k-means algorithm, k refers to the number of clusters to search for. In the R code, k is referred to as numClust; see the Arguments section.
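Pieced together from the call in the Examples section and the defaults stated in the argument descriptions, the invocation looks roughly like this. The names simul and verbose for the two unnamed logical flags are inferred, so consult the package's Usage section for the authoritative signature.

```r
resHA <- HartiganShapes(array3D,           # 3D landmark array
                        numClust,          # number of clusters (the "k")
                        algSteps = 10,     # steps per initialization
                        niter = 10,        # random initializations
                        stopCr = 1e-04,    # relative stopping criterion
                        simul = FALSE,     # TRUE for a simulation study
                        initLl = FALSE,    # reuse LloydShapes initial values?
                        initials = c(),    # empty when initLl = FALSE
                        verbose = FALSE)   # print progress
```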





Arguments

array3D: Array with the 3D landmarks of the sample objects. Each row corresponds to an observation, and each column corresponds to a dimension (x,y,z).


numClust: Number of clusters.


algSteps: Number of steps per initialization. Default value is 10.


niter: Number of random initializations (iterations). Default value is 10.


stopCr: Relative stopping criterion. Default value is 0.0001.


simul: Logical value. If TRUE, the function is being used for a simulation study.


initLl: Logical value. If TRUE, the initial values given in the next argument, initials, are used. If FALSE, new random initial values are generated.


initials: If initLl = TRUE, the same random initial values used in each iteration of LloydShapes. If initLl = FALSE, this argument must be passed as an empty vector.


verbose: A logical specifying whether to provide descriptive output about the running process.


Details

There have been several attempts to adapt the k-means algorithm to the context of statistical shape analysis, each one adapting a different version of the k-means algorithm (Amaral et al. (2010), Georgescu (2009)). In Vinue et al. (2016) it is demonstrated that, compared with the Hartigan-Wong k-means, the Lloyd k-means noticeably reduces the computation involved as the sample size increases. Hartigan-Wong should therefore be used in the shape analysis context only for very small samples.


Value

A list with the following elements:

ic1: Optimal clustering.

cases: Anthropometric cases (optimal centers).

vopt: Optimal objective function.

If a simulation study is carried out, the following elements are returned:

ic1: Optimal clustering.

cases: Anthropometric cases (optimal centers).

vopt: Optimal objective function.

compTime: Computational time.

AllRate: Allocation rate.


Note

This function is based on the kmns.m file available from https://github.com/johannesgerer/jburkardt-m/tree/master/asa136


Author(s)

Guillermo Vinue


References

Vinue, G., Simo, A., and Alemany, S., (2016). The k-means algorithm for 3D shapes with an application to apparel design, Advances in Data Analysis and Classification 10(1), 103–132.

Hartigan, J. A., and Wong, M. A., (1979). Algorithm AS 136: A K-Means Clustering Algorithm, Applied Statistics 28(1), 100–108.

Lloyd, S. P., (1982). Least Squares Quantization in PCM, IEEE Transactions on Information Theory 28, 129–137.

Amaral, G. J. A., Dore, L. H., Lessa, R. P., and Stosic, B., (2010). k-Means Algorithm in Statistical Shape Analysis, Communications in Statistics - Simulation and Computation 39(5), 1016–1026.

Georgescu, V., (2009). Clustering of Fuzzy Shapes by Integrating Procrustean Metrics and Full Mean Shape Estimation into K-Means Algorithm. In IFSA-EUSFLAT Conference.

Dryden, I. L., and Mardia, K. V., (1998). Statistical Shape Analysis, Wiley, Chichester.

See Also

LloydShapes, trimmedLloydShapes, landmarksSampleSpaSurv, cube8landm, parallelep8landm, cube34landm, parallelep34landm, procGPA, optraShapes, qtranShapes


Examples

landmarksNoNa <- na.exclude(landmarksSampleSpaSurv)
dim(landmarksNoNa)
#[1] 574 198
numLandmarks <- dim(landmarksNoNa)[2] / 3
numLandmarks
#[1] 66
#As a toy example, only the first 20 individuals are used.
landmarksNoNa_First20 <- landmarksNoNa[1:20, ] 
(numIndiv <- dim(landmarksNoNa_First20)[1])
#[1] 20         
array3D <- array3Dlandm(numLandmarks, numIndiv, landmarksNoNa_First20)
#array3D <- array3D[1:10,,] #to reduce computational times.
#calibrate::textxy(array3D[,1,1], array3D[,2,1], labs = 1:numLandmarks, cex = 0.7) 
numClust <- 3 ; algSteps <- 1 ; niter <- 1 ; stopCr <- 0.0001
#With random initial values (set a seed first for reproducible results):
#resHA <- HartiganShapes(array3D, numClust, algSteps, niter, stopCr, FALSE, FALSE, c(), FALSE)
#With fixed initial values:
initials <- list(c(15,10,1))
resHA <- HartiganShapes(array3D, numClust, algSteps, niter, stopCr, FALSE, TRUE, initials, TRUE)

if (!is.null(resHA)) {
  asig <- resHA$ic1  #table(asig) shows the clustering results.
  prototypes <- anthrCases(resHA)
}
#Note: For a simulation study, see www.uv.es/vivigui/softw/more_examples.R

Anthropometry documentation built on March 7, 2023, 6:58 p.m.