knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  message = FALSE,
  warning = FALSE
)

Setup

library(workflows)
library(parsnip)

Load libraries:

library(tidyclust)
library(tidyverse)
library(tidymodels)
library(ggforce)
set.seed(838383)

Load and clean a dataset:

data("penguins", package = "modeldata")

penguins <- penguins %>%
  select(bill_length_mm, bill_depth_mm) %>%
  drop_na()


# shuffle rows
penguins <- penguins %>%
  sample_n(nrow(penguins))

At the end of this vignette, you will find a brief overview of the k-means algorithm, as well as some further algorithmic variant details, for those who would like a reference.

k-means specification in {tidyclust}

To specify a k-means model in tidyclust, simply choose a value of num_clusters:

kmeans_spec <- k_means(num_clusters = 3)

kmeans_spec

There are currently two engines: stats::kmeans (default) and ClusterR::KMeans_rcpp.

It is also possible to change the algorithmic details of the implementation, by changing the engine and/or using the corresponding arguments from the engine functions:

kmeans_spec_lloyd <- k_means(num_clusters = 3) %>%
  parsnip::set_engine("stats", algorithm = "Lloyd")

kmeans_spec_cr <- k_means(num_clusters = 3) %>%
  parsnip::set_engine("ClusterR", initializer = "random")

Note that the stats::kmeans and the ClusterR::KMeans_rcpp implementations have very different default settings for the algorithmic details, so it is recommended to be deliberate and explicit in choosing these options. (See the end of this document for detail on the algorithmic options and defaults.)

Fitting k-means models

Once specified, a model may be "fit" to a dataset by providing a formula and data frame in the same manner as a tidymodels model fit.
Note that unlike in supervised modeling, the formula should not include a response variable.

kmeans_fit <- kmeans_spec %>%
  fit(~ bill_length_mm + bill_depth_mm,
    data = penguins
  )

kmeans_fit %>%
  summary()

To access only the results produced by the engine - in this case, stats::kmeans - simply extract the fit from the fitted model object:

kmeans_fit$fit

tidyclust also provides a function, extract_fit_summary(), to produce a list of model summary information in a format that is consistent across all cluster model specifications and engines:

kmeans_summary <- kmeans_fit %>%
  extract_fit_summary()

kmeans_summary %>% str()

Cluster assignments and centers

The primary objective of fitting a clustering model is typically to assign observations to clusters. To access these assignments, use the extract_cluster_assignment() function:

kmeans_fit %>%
  extract_cluster_assignment()

Note that this function renames clusters in accordance with the standard tidyclust naming convention and ordering: clusters are named "Cluster_1", "Cluster_2", etc. and are numbered by the order they appear in the rows of the training dataset.

To reconcile these standardized cluster labels with the engine output, refer back to the full model fit summary:

tibble(
  orig_labels = kmeans_summary$orig_labels,
  standard_labels = kmeans_summary$cluster_assignments
)

In this example, we see that the cluster labelled "3" by the stats::kmeans engine function - a label that is assigned randomly by this implementation - is the first to appear in the training data, so it is converted to "Cluster_1" in the standardized labels.

Centroids

A secondary output of interest is often the characterization of the clusters; i.e., what data feature trends does each cluster seem to represent? Most commonly, clusters are characterized by their mean values in the predictor space, a.k.a. the centroids.

These can be accessed from the full summary:

kmeans_summary$centroids

They can also be accessed directly from the fitted model with:

kmeans_fit %>%
  extract_centroids()
centroids <- extract_centroids(kmeans_fit)

small_length <- centroids$.cluster[which.min(centroids$bill_length_mm)]
small_depth <- centroids$.cluster[which.min(centroids$bill_depth_mm)]
other <- setdiff(levels(centroids$.cluster), c(small_length, small_depth))

Based on the above output, we might say that `r small_length` is penguins with smaller bill lengths, `r small_depth` has smaller bill depths, and `r other` is penguins with large bills in both dimensions.

Prediction

Since the $k$-means algorithm ultimately assigns training observations to the cluster with the closest centroid, it is natural to "predict" that test observations also belong to the closest centroid cluster.

The predict() function behaves as expected, producing cluster assignment predictions on new data based on distance to the fitted model centroids.

new_penguin <- tibble(
  bill_length_mm = 42,
  bill_depth_mm = 17
)

kmeans_fit %>%
  predict(new_penguin)

To attach all predictions to a dataset as a column, use augment():

kmeans_fit %>%
  augment(penguins)

Metrics

Since clustering is an unsupervised method, with no target/outcome variable, there is no objective notion of predictive success.

However, many common approaches exist for quantifying the quality of a particular cluster partition or structure.

Sum of squared error

One simple metric is the within-cluster sum of squared error (WSS), which measures the sum of squared distances from each observation to its cluster centroid. This is sometimes scaled by the total sum of squared error (TSS), the sum of squared distances from each observation to the global centroid; in particular, the ratio WSS/TSS is often computed. In principle, small values of WSS or of the WSS/TSS ratio suggest that the observations within clusters are closer (more similar) to each other than they are to observations in other clusters.
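
In symbols (a standard formulation, written here for reference rather than taken from either engine's documentation), with clusters $C_1, \dots, C_k$, cluster centroids $\bar{x}_j$, and global centroid $\bar{x}$:

$$\mathrm{WSS} = \sum_{j=1}^{k} \sum_{x_i \in C_j} \lVert x_i - \bar{x}_j \rVert^2, \qquad \mathrm{TSS} = \sum_{i=1}^{n} \lVert x_i - \bar{x} \rVert^2$$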

The WSS and TSS come "for free" with the model fit summary, or they can be accessed directly from the model fit:

kmeans_summary$sse_within_total_total
kmeans_summary$sse_total

kmeans_fit %>% sse_within_total()
kmeans_fit %>% sse_total()

kmeans_fit %>% sse_ratio()

We can also see the within sum-of-squares by cluster, rather than totalled, with sse_within():

kmeans_fit %>%
  sse_within()

Silhouette

Another common measure of cluster structure is called the silhouette. The silhouette of a single observation compares the average distance from that observation to the other observations in its own cluster with the average distance to observations outside its cluster, normalized by the greater of these two averages.
In principle, a large silhouette (close to 1) suggests that an observation is more similar to those within its cluster than those outside its cluster.
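
Concretely, in the standard (Rousseeuw) formulation, with $a(i)$ the mean distance from observation $i$ to the other observations in its cluster and $b(i)$ the mean distance from observation $i$ to the observations in the nearest other cluster, the silhouette is:

$$s(i) = \frac{b(i) - a(i)}{\max\{a(i),\, b(i)\}}$$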

We can average all silhouettes to get a metric for the full clustering fit. Because the computation of the silhouette depends on the original observation values, a dataset must also be supplied to the function.

kmeans_fit %>%
  silhouette_avg(penguins)

Changing distance measures

These metrics all depend on measuring the distance between points and/or centroids. By default, ordinary Euclidean distance is used. However, it is possible to select a different distance function.

For the sum of squares metrics, the distance function supplied must take two arguments (i.e., the observation locations and the centroid locations). For the silhouette metric, the distance function must find pairwise distances from a single matrix (i.e., all pairwise distances between observations).

# pairwise distances among the rows of a single matrix (for silhouette_avg())
my_dist_1 <- function(x) {
  philentropy::distance(x, method = "manhattan")
}

# distances between the rows of two matrices (for the sse_*() metrics)
my_dist_2 <- function(x, y) {
  philentropy::dist_many_many(x, y, method = "manhattan")
}

kmeans_fit %>% sse_ratio(dist_fun = my_dist_2)

kmeans_fit %>% silhouette_avg(penguins, dist_fun = my_dist_1)

For more on using metrics for cluster model selection, see the Tuning vignette.

Workflows

The workflow structure of tidymodels is also usable with tidyclust objects. In the following example, we try two recipes for clustering penguins by bill dimensions. In the second recipe, we log-scale both predictors before clustering.

penguins_recipe_1 <- recipe(~ bill_length_mm + bill_depth_mm,
  data = penguins
)

penguins_recipe_2 <- recipe(~ bill_length_mm + bill_depth_mm,
  data = penguins
) %>%
  step_log(all_numeric_predictors())

wflow_1 <- workflow() %>%
  add_model(kmeans_spec) %>%
  add_recipe(penguins_recipe_1)

wflow_2 <- workflow() %>%
  add_model(kmeans_spec) %>%
  add_recipe(penguins_recipe_2)

wflow_1 %>%
  fit(penguins) %>%
  extract_centroids()

wflow_2 %>%
  fit(penguins) %>%
  extract_centroids()

A brief introduction to the k-means algorithm

k-means is a method of unsupervised learning that produces a partitioning of observations into k unique clusters. The goal of k-means is to minimize the sum of squared Euclidean distances between observations in a cluster and the centroid, or geometric mean, of that cluster.

In k-means clustering, observed variables (columns) are considered to be locations on axes in multidimensional space. For example, in the plot below, each point represents an observation of one penguin, and the location in 2-dimensional space is determined by the bill length and bill depth of that penguin.

#| fig-alt: "scatter chart. bill_length_mm along the x-axis, bill_depth_mm along the y-axis. 3 vague clusters appear in the point cloud."
penguins %>%
  ggplot(aes(x = bill_length_mm, y = bill_depth_mm)) +
  geom_point() +
  theme_minimal()

A k-means cluster assignment is achieved by iterating to convergence from random initial conditions. The algorithm typically proceeds as follows:

  1. Choose k random observations in the dataset. These locations in space are declared to be the initial centroids.
#| fig-alt: "scatter chart. bill_length_mm along the x-axis, bill_depth_mm along the y-axis. 3 vague clusters appear in the point cloud. 3 random points have been highlighted in red, green, and blue."
pens <- penguins %>%
  select(bill_length_mm, bill_depth_mm) %>%
  drop_na()

init <- sample(seq_len(nrow(pens)), 3)

thing <- kmeans(pens, centers = pens[init, ], iter.max = 1)

centers <- data.frame(thing$centers)

pens %>%
  ggplot() +
  geom_point(aes(x = bill_length_mm, y = bill_depth_mm)) +
  geom_point(
    data = pens[init, ],
    aes(
      x = bill_length_mm, y = bill_depth_mm,
      color = c("Cluster 1", "Cluster 2", "Cluster 3")
    ),
    shape = "o", size = 12, stroke = 1
  ) +
  geom_point(
    data = pens[init, ],
    aes(
      x = bill_length_mm, y = bill_depth_mm,
      color = c("Cluster 1", "Cluster 2", "Cluster 3")
    )
  ) +
  theme_minimal() +
  theme(legend.position = "none")
  1. Assign each observation to the nearest centroid.
#| fig-alt: "scatter chart. bill_length_mm along the x-axis, bill_depth_mm along the y-axis. 3 vague clusters appear in the point cloud. Points are colored according to which of the highlighted points they are closest to."
closest_center <- philentropy::dist_many_many(as.matrix(pens), as.matrix(pens[init, ]), method = "euclidean") %>%
  apply(1, which.min)

pens %>%
  ggplot() +
  geom_point(
    data = pens,
    aes(
      x = bill_length_mm, y = bill_depth_mm,
      color = as.factor(closest_center)
    ),
    size = 4
  ) +
  geom_point(aes(x = bill_length_mm, y = bill_depth_mm)) +
  theme_minimal() +
  theme(legend.position = "none")
  1. Compute the new centroids of each cluster.
#| fig-alt: "scatter chart. bill_length_mm along the x-axis, bill_depth_mm along the y-axis. 3 vague clusters appear in the point cloud. New highlights are placed at the center of the points of each color from the previous chart."
centers <- pens %>%
  mutate(
    clust = closest_center
  ) %>%
  group_by(clust) %>%
  summarize_all(mean)

pens %>%
  ggplot() +
  geom_point(aes(x = bill_length_mm, y = bill_depth_mm)) +
  geom_point(
    data = centers,
    aes(
      x = bill_length_mm, y = bill_depth_mm,
      color = c("Cluster 1", "Cluster 2", "Cluster 3")
    ),
    shape = "x", size = 12, stroke = 1
  ) +
  geom_point(
    data = centers,
    aes(
      x = bill_length_mm, y = bill_depth_mm,
      color = c("Cluster 1", "Cluster 2", "Cluster 3")
    )
  ) +
  theme_minimal() +
  theme(legend.position = "none")
  1. Repeat steps 2 and 3 until the centroids do not change.
#| fig-alt: "scatter chart. bill_length_mm along the x-axis, bill_depth_mm along the y-axis. 3 vague clusters appear in the point cloud. All points have been colored with one of the 3 colors, with an ellipse around each color."
thing <- kmeans(pens, centers = pens[init, ])

centers <- data.frame(thing$centers)

pens %>%
  ggplot() +
  geom_point(
    data = pens,
    aes(
      x = bill_length_mm,
      y = bill_depth_mm,
      color = as.factor(thing$cluster)
    )
  ) +
  geom_mark_ellipse(
    aes(
      x = bill_length_mm,
      y = bill_depth_mm,
      color = as.factor(thing$cluster)
    ),
    expand = unit(1, "mm")
  ) +
  geom_point(
    data = centers,
    aes(
      x = bill_length_mm,
      y = bill_depth_mm,
      color = as.factor(1:3)
    ),
    shape = "x",
    size = 12,
    stroke = 1
  ) +
  geom_point(
    data = pens,
    aes(
      x = bill_length_mm,
      y = bill_depth_mm,
      color = as.factor(thing$cluster)
    )
  ) +
  theme_minimal() +
  theme(legend.position = "none")

Iteration of centroids

There is also some variation between implementations in how the update process takes place.

In the above example, we have shown the common implementation known as the Lloyd or Forgy method. The update steps, sketched in code after this list, are:

  1. Assign all observations to closest centroid.
  2. Recalculate centroids.
  3. Repeat until convergence.
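
The following is a minimal, illustrative sketch of this loop in base R. It is not how either engine is implemented; it reuses pens, init, and philentropy::dist_many_many() from the chunks above, and it does not handle clusters that become empty.

lloyd_kmeans <- function(x, centers, max_iter = 100) {
  x <- as.matrix(x)
  centers <- as.matrix(centers)
  for (iter in seq_len(max_iter)) {
    # step 1: assign every observation to its nearest centroid
    dists <- philentropy::dist_many_many(x, centers, method = "euclidean")
    assignment <- apply(dists, 1, which.min)

    # step 2: recompute each centroid as the mean of its assigned observations
    new_centers <- do.call(rbind, lapply(seq_len(nrow(centers)), function(j) {
      colMeans(x[assignment == j, , drop = FALSE])
    }))

    # step 3: stop once the centroids no longer move
    if (isTRUE(all.equal(new_centers, centers, check.attributes = FALSE))) break
    centers <- new_centers
  }
  list(centers = centers, cluster = assignment)
}

lloyd_kmeans(pens, pens[init, ])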

One variant on this approach is the MacQueen method, which updates centroids continually:

  1. Assign one observation to closest centroid.
  2. Recalculate centroids.
  3. Repeat until all observations have been reassigned once.
  4. Repeat until convergence.

A third common variant is the Hartigan-Wong method, which assigns observations based on overall sum of squared errors rather than simply to the closest cluster:

  1. Temporarily assign one observation to one cluster.
  2. Recalculate centroid.
  3. Find all distances from observations to their cluster center (SSE).
  4. Repeat for each cluster.
  5. Permanently assign the observation to the cluster that resulted in the lowest SSE.
  6. Repeat for all observations.
  7. Repeat until convergence.

As with many iterative algorithms, the choice between methods is a trade-off between complexity and accuracy. The Hartigan-Wong method generally results in more consistent clusterings, and it is the default setting for the stats::kmeans implementation of k-means clustering, although all three algorithms are available as options in this engine. The Lloyd/Forgy method is the simplest and most ubiquitous; it is the only method available in the ClusterR package implementation.


Initialization of the k-means algorithm

The k-means algorithm depends on choosing an initial set of cluster centers.

There are three common methods for selecting initial centers:

  1. Random observations: In the example above, we have chosen three random observations to act as our initial centers. This is the most commonly used approach, implemented in the Forgy, Lloyd, and MacQueen methods.

  2. Random partition: The observations are assigned to a cluster uniformly at random. The centroid of each cluster is computed, and these are used as the initial centers. This approach is implemented in the Hartigan-Wong method.

  3. k-means++: One random observation is chosen as the first center. Each subsequent center is then sampled from the remaining observations, with probability proportional to its squared distance from the nearest already-chosen center, until $k$ centers have been selected. (A rough sketch follows this list.)
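
The following is a rough sketch of k-means++ seeding, for intuition only (the ClusterR engine exposes this style of seeding through its initializer argument). It reuses pens and philentropy::dist_many_many() from earlier chunks, and kmeanspp_init is just an illustrative helper name.

kmeanspp_init <- function(x, k) {
  x <- as.matrix(x)
  # the first center is a single random observation
  centers <- x[sample(nrow(x), 1), , drop = FALSE]
  while (nrow(centers) < k) {
    # squared distance from each observation to its nearest chosen center
    d2 <- apply(
      philentropy::dist_many_many(x, centers, method = "euclidean"),
      1, min
    )^2
    # sample the next center with probability proportional to that squared distance
    centers <- rbind(centers, x[sample(nrow(x), 1, prob = d2), , drop = FALSE])
  }
  centers
}

kmeanspp_init(pens, k = 3)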

Because the initial conditions are based on random selection in all of these approaches, the k-means algorithm is not deterministic. That is, running the clustering twice on the same data may not result in the same cluster assignments.

It is common to run the k-means clustering algorithm multiple times, with different random initial conditions, and keep the best result (typically the fit with the lowest total within-cluster sum of squares). This option is controlled by the nstart argument of the stats::kmeans implementation and the num_init argument of the ClusterR::KMeans_rcpp implementation.
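
For example, a specification requesting five random starts from the stats engine might look like the following (kmeans_spec_multi is just an illustrative name; the extra argument is passed through to stats::kmeans()):

kmeans_spec_multi <- k_means(num_clusters = 3) %>%
  parsnip::set_engine("stats", nstart = 5)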


