knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  eval = TRUE,
  message = FALSE
)

\renewcommand{\vec}[1]{\boldsymbol{#1}}

Introduction

The clustRviz package intends to make fitting and visualizing CARP and CBASS solution paths an easy process. In the QuickStart vignette we provide a quick start guide for basic usage, fitting, and plotting. In this vignette, we build on the basics and provide a more detailed explanation for the variety of options available in clustRviz.

Background

The starting point for CARP is the Convex Clustering [@Hocking:2011; @Chi:2015; @Tan:2015] problem:

$$\underset{U}{\text{arg min}} \; \frac{1}{2}\|U - X\|_F^2 + \lambda\sum_{(i, j) \in \mathcal{E}} w_{ij} \|U_{i\cdot} - U_{j\cdot}\|_q$$

where $X$ is an $n \times p$ data matrix, consisting of $p$ measurements on $n$ subjects, $\lambda \in \mathbb{R}_{>0}$ is a regularization parameter determining the degree of clustering, and $w_{ij} > 0$ is a weight for each pair of observations; here $\|\cdot\|_F$ and $\|\cdot\|_q$ denote the Frobenius and $\ell_q$ norms, respectively. (Typically, we take $q = 2$.)

Briefly, Convex Clustering seeks to estimate a matrix $\hat{U} \in \mathbb{R}^{n\times p}$ which is both faithful to the original data (the Frobenius norm loss term) and has rows "shrunk" together by the fusion penalty term. The penalty induces "clusters" of equal rows in $\hat{U}$ - if the difference $\hat{U}_{i\cdot} - \hat{U}_{j\cdot}$ is zero, then the rows must be equal - while still maintaining closeness to the original data. The equal rows of $\hat{U}$ can then be used to assign the rows of $X$ to clusters.

Unlike other clustering parameters, $\lambda$ can be varied smoothly. At one end, for zero or very small $\lambda$, the fusion penalty has minimal effect and all the rows of $\hat{U}$ are distinct, implying $n$ different (trivial) clusters. At the other end, for large $\lambda$, all rows of $\hat{U}$ are fused together into a single "mono-cluster." Between these extremes, every other degree of clustering can be found as $\lambda$ varies. Considered as functions of $\lambda$, the rows of $\hat{U}_{\lambda}$ form continuous paths, which we will visualize below.

To solve the Convex Clustering problem, Chi and Lange [-@Chi:2015] proposed the Alternating Direction Method of Multipliers (ADMM) [@Boyd:2011], an iterative optimization scheme. This performs well for a single value of $\lambda$, but becomes very expensive if we wish to investigate enough values of $\lambda$ to trace a (near-continuous) solution path. To address this, we adapt the Algorithmic Regularization framework proposed by Hu et al. [-@Hu:2016]: this framework takes a single ADMM step, followed by a small increase in $\lambda$. Remarkably, if the increases in $\lambda$ are sufficiently small, we can approximate the true solution path in a fraction of the time needed for full optimization. Additionally, because we consider a fine grid of $\lambda$ values, we get a much more accurate sense of the structure of the solution paths. This approach is called Clustering via Algorithmic Regularization Paths, or CARP, and is implemented in the function of the same name.
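
To make the idea concrete, here is a conceptual sketch of the algorithmic regularization loop. The helpers admm_step and all_rows_fused are hypothetical placeholders used only for illustration; they are not part of the clustRviz API, and the actual loop inside CARP is implemented in compiled code:

# Conceptual sketch only; admm_step() and all_rows_fused() are
# hypothetical placeholders, not clustRviz functions
U <- X                # warm start at the data
lambda <- 1e-8        # begin with (nearly) no regularization
t <- 1.05             # multiplicative step size
path <- list()
while (!all_rows_fused(U)) {
  U <- admm_step(U, X, lambda)     # one ADMM update, not run to convergence
  path[[length(path) + 1]] <- list(lambda = lambda, U = U)
  lambda <- lambda * t             # small (here 5%) increase in lambda
}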

This tutorial will focus on convex clustering, though the clustRviz package also provides functionality for convex biclustering via the Convex Biclustering via Algorithmic Regularization with Small Steps (CBASS) algorithm, which combines algorithmic regularization with the Generalized ADMM proposed by Weylandt [-@Weylandt:2019]. All of the graphics and data manipulation functionality demonstrated below for the CARP function works equally well for the CBASS function.

Data Preparation and Weight Calculation

While the CARP and CBASS functions provide several reasonable default choices for weights, algorithms, etc., it is important to know their details if one wishes to compute more customized clustering solutions. Here we examine several of the inputs to CARP in more detail.

Here we use a dataset of presidential speeches obtained from The American Presidency Project to illustrate the use of clustRviz. The presidential speech data set contains the log-transformed word counts of the 75 most variable words for each US president, aggregated over several speeches. Additional text processing, such as removing stop words and stemming, was done via the tm package.

Let's begin by loading our package and the dataset:

library(clustRviz)
data("presidential_speech")
presidential_speech[1:5, 1:5]

As can be seen above, this data already comes to us as a data matrix with row and column labels. This is the best format to provide data to CARP, though if labels are missing, default labels will be automatically created.
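
As a quick check, we can confirm the format - a numeric matrix with informative row and column names:

dim(presidential_speech)
rownames(presidential_speech)[1:5]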

Pre-Processing

An important first choice before clustering is whether to center and scale our data. Column-wise centering is typically appropriate and is done by default, though this can be changed with the X.center argument. The matter of scaling is a bit more delicate: it depends on whether the features are in some way directly comparable, in which case the raw scale may be meaningful, or fundamentally incompatible, in which case it may make sense to remove scale effects. Scaling is controlled by the X.scale argument to CARP and is not implemented for CBASS. If scaling is requested, it is performed by the scale function from base R, which uses the unbiased ($n-1$) standard deviation. In the case of the presidential speech dataset, all variables are of the same type, so we will not scale our data matrix.
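
For clarity, here is the call with both pre-processing arguments spelled out at their default values:

carp_fit_defaults <- CARP(presidential_speech, X.center = TRUE, X.scale = FALSE)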

Clustering Weights

The choice of clustering weights $w_{ij}$ is essential to getting reasonable clustering performance. CARP uses a reasonably robust heuristic that balances statistical and computational performance, but great improvements can be obtained by using bespoke weighting schemes. CARP allows weights to be specified as either a function which returns a weight matrix or a matrix of weights. More details are available in the Weights vignette.
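
As a minimal sketch of the matrix interface, we can compute dense Gaussian (RBF) kernel weights by hand and pass them to CARP; the bandwidth phi below is an arbitrary illustrative choice, not a tuned value:

# Hand-rolled dense RBF kernel weights (illustrative bandwidth only)
dists <- as.matrix(dist(presidential_speech))
phi <- 0.01
weight_matrix <- exp(-phi * dists^2)
diag(weight_matrix) <- 0  # weights apply to pairs of distinct observations
carp_fit_custom <- CARP(presidential_speech, weights = weight_matrix)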

Dimension Reduction and Feature Selection

Convex clustering, as implemented in clustRviz, relies on the Euclidean distance (Frobenius norm) between points. As such, it is rather sensitive to "noisy" features. To achieve good performance in high-dimensional (big $p$) problems, it is often necessary to aggressively filter features or perform some sort of dimension reduction before clustering. Future versions of clustRviz will implement the sparse convex clustering method of Wang et al. [-@Wang:2018] and the GeCCO+ scheme of Wang and Allen [-@Wang:2019], both of which allow for automatic feature weighting and selection.

clustRviz plots leading principal components by default in visualizations, though this is configurable. Note that the principal components projection is performed at fit time, not at plot time, so if higher principal components are desired, it may be necessary to increase the npcs argument to CARP.
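
For example, if we anticipate plotting higher principal components later, we can request more of them at fit time:

carp_fit_pc <- CARP(presidential_speech, npcs = 6)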

Fitting

clustRviz aims to make it easy to compute the CARP and CBASS solution paths, and to quickly begin exploring the results. For now, we will use the defaults described above to cluster the presidential speech data via the CARP function:

carp_fit <- CARP(presidential_speech)

Once completed, we can examine a brief summary of the fitted object:

carp_fit

The default output gives several useful pieces of information, including:

- the pre-processing (centering and scaling) applied to the data;
- the clustering weight scheme used;
- the variant of the CARP algorithm used.

Each of these can be changed using parameters already discussed. Additionally, the printed output shows the step-size parameter $t$ (here the default of 1.05), meaning that at each iteration $\lambda$ is increased by 5%. For finer path approximations, a smaller value of $t$ can be used, though be warned that this increases both computation time and memory usage.
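
For instance, a finer (and slower) path approximation with 1% increases per step:

carp_fit_fine <- CARP(presidential_speech, t = 1.01)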

CARP has three additional flags that control optimizer behavior:

- back_track: use a back-tracking variant to identify fusion events exactly, at additional computational cost;
- exact: solve the full optimization problem to convergence at each value of $\lambda$, rather than taking single algorithmic regularization steps;
- norm: whether an $\ell_1$ or $\ell_2$ fusion penalty is used.

More advanced optimizer controls can be set using the clustRviz_options function.

Accessing Clustering Solutions

Once fit, the clustering solution may be examined via three related "accessor" functions:

- get_cluster_labels: the cluster label assigned to each observation;
- get_cluster_centroids: the cluster centroids;
- get_clustered_data: the data, with each observation replaced by its assigned centroid.

The interface for these functions is essentially the same for CARP and CBASS objects, though the exact meaning of "centroids" varies between the problems (vectors for CARP and scalars for CBASS). The latter two functions also support a refit flag, which determines whether the convex clustering centroids or the naive centroids (based only on the convex clustering labels) are returned.

For example, we can extract the clustering labels from our carp_fit corresponding to a $k = 2$ cluster solution:

cluster_labels <- get_cluster_labels(carp_fit, k = 2)
head(cluster_labels)

We see a rather imbalanced clustering (the "pre-WWII" cluster is much larger):

table(cluster_labels)

Similarly, to get the cluster means, we use the get_cluster_centroids function:

get_cluster_centroids(carp_fit, k = 2)

Since we performed convex clustering here, our centroids are $p$-vectors. By default, the naive centroids are used; if we prefer the exact convex clustering solution, we can pass the refit = FALSE flag:

get_cluster_centroids(carp_fit, k = 2, refit = FALSE)

We can instead supply the percent argument to specify $\lambda$ (or more precisely, $\lambda / \lambda_{\text{max}}$) rather than the number of clusters explicitly. For example, if we are interested in the clustering solution about $25\%$ of the way along the regularization path:

get_cluster_labels(carp_fit, percent = 0.25)

We see that our data clearly falls into three clusters at this point on the path.

Similarly to CARP objects, CBASS clustering solutions may also be extracted via the three accessor functions. The CBASS methods allow one of three parameters to be used to specify the solution:

- percent: the fraction of the regularization path at which to cut ($\lambda / \lambda_{\text{max}}$);
- k.row: the desired number of row clusters;
- k.col: the desired number of column clusters.

Other than this, the behavior of get_cluster_labels and get_clustered_data is roughly the same:

# First, fit the CBASS (convex biclustering) model to the same data
cbass_fit <- CBASS(presidential_speech)

# CBASS Cluster Labels for rows (observations = default)
get_cluster_labels(cbass_fit, percent = 0.85, type = "row")

# CBASS Cluster Labels for columns (features)
get_cluster_labels(cbass_fit, percent = 0.85, type = "col")

# CBASS Solution - naive centroids
get_clustered_data(cbass_fit, percent = 0.85)

# CBASS Solution - convex bi-clustering centroids
get_clustered_data(cbass_fit, percent = 0.85, refit = FALSE)

The get_cluster_centroids function returns a $k_1$-by-$k_2$ matrix, giving the (scalar) centroids at a solution with $k_1$ row clusters and $k_2$ column clusters:

get_cluster_centroids(cbass_fit, percent = 0.85)

Visualizations

clustRviz provides a rich set of visualization tools based on the ggplot2 and plotly libraries. Because clustRviz integrates with these libraries, it is easy to develop custom visualizations based on clustRviz defaults.

Path Plots

The first type of plot supported by clustRviz is the path plot, obtained by plotting the values of $\hat{U}_{\lambda}$ for various values of $\lambda$. Because $\hat{U}_{\lambda}$ is a continuous function of $\lambda$, these paths are continuous and make for attractive visualizations. As mentioned above, clustRviz defaults to plotting the first two principal components of the data:

plot(carp_fit, type = "path")

By default, the entire path is shown, but we can specify a number of clusters directly:

plot(carp_fit, type = "path", k = 3)

These plots are standard ggplot objects and can be customized in the usual manner:

plot(carp_fit, type = "path", k = 3) + ggplot2::theme_bw() + 
  ggplot2::ggtitle("My Title", subtitle = "Very custom!")

We can also plot the raw features directly:

plot(carp_fit, type = "path", axis = c("british", "soviet"))

In this space, clear differences in pre-Cold War and post-Cold War topics of presidential speeches are evident. This sort of customized plot can be saved to a file using the standard ggplot2::ggsave functionality.
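
For example (the file name here is arbitrary):

my_path_plot <- plot(carp_fit, type = "path", axis = c("british", "soviet"))
ggplot2::ggsave("carp_path_plot.pdf", plot = my_path_plot, width = 6, height = 4)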

Because paths are continuous, it is also possible to set the dynamic = TRUE flag to produce a GIF via the gganimate package:

plot(carp_fit, type = "path", dynamic = TRUE)

Finally, interactive path plots can be produced using the plotly library by setting interactive = TRUE:

plot(carp_fit, type = "path", dynamic = TRUE, interactive = TRUE)

These are currently rather slow to render; we hope to improve rendering performance in future versions of clustRviz.

Dendrograms

No discussion of clustering visualization would be complete without dendrograms! clustRviz computes a dendrogram based on the order in which observations are fused, with the height of the tree corresponding to the value of $\lambda$ at which the fusion was first detected. As before, both static and dynamic dendrograms are supported:

plot(carp_fit, type = "dendrogram")
plot(carp_fit, type = "dendrogram", dynamic = TRUE)

If we specify the number of clusters, a "cut line" is added and the dendrogram is colored according to cluster identity:

plot(carp_fit, type = "dendrogram", k = 3)

As before, this is a ggplot object and can be manipulated as such. If the underlying dendrogram object is needed, it can be accessed directly with the as.dendrogram or as.hclust functions, allowing for tight integration with other clustering functionality:

as.dendrogram(carp_fit)
as.hclust(carp_fit)
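
For example, base R's cutree can be applied to the result of as.hclust:

# Cut the CARP dendrogram into three clusters using base R tools
cutree(as.hclust(carp_fit), k = 3)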

As before, an interactive version is also supported:

plot(carp_fit, type = "dendrogram", interactive = TRUE)

Heatmaps

Finally, a convex clustering-based cluster heatmap is also supported. Cluster heatmaps are more common for biclustering, so we illustrate using the CBASS fit computed above:

plot(cbass_fit, type = "heatmap")

Unlike the classical cluster heatmap, the convex biclustering heatmap is formed by simultaneously clustering rows and columns, rather than clustering them independently. This allows us to "cut" the row and column dendrograms at a directly comparable value of $\lambda$:

plot(cbass_fit, type = "heatmap", percent = 0.25)

The dynamic plot is particularly compelling:

plot(cbass_fit, type = "heatmap", dynamic = TRUE)

An interactive heatmap is provided, with help from the excellent heatmaply package, though performance is somewhat limited on larger data sets, so we omit it here.

Discussion

clustRviz provides an integrated framework for fitting and visualizing the CARP and CBASS solution paths. It delivers fast computation relative to traditional Convex Clustering solution techniques, and brings traditional and modern clustering visualizations together in a unified framework.

References


