varclust is an R package for dimension reduction via variable clustering. We assume that each group of variables can be summarized with a few latent variables.
It also provides a function for determining the number of principal components in PCA.
This tutorial gently introduces the usage of the varclust package and familiarizes you with its options.
You can install the current development version of varclust from GitHub:
devtools::install_github("psobczyk/varclust")
or the released version from CRAN:
install.packages("varclust")
library(varclust)
library(mclust)
Let us consider some real genomic data. We are going to use data from the FactoMineR package. As they are no longer available online, we have added them to this package. The data consist of two types of variables: the first group contains gene expression data, the second RNA data. Please note that running the following code may take a few minutes:
comp_file_name <- system.file("extdata", "gene.csv", package = "varclust")
comp <- read.table(comp_file_name, sep = ";", header = TRUE, row.names = 1)
benchmarkClustering <- c(rep(1, 68), rep(2, 356))
comp <- as.matrix(comp[, -ncol(comp)])
set.seed(2)
mlcc.fit <- mlcc.bic(comp, numb.clusters = 1:10, numb.runs = 10, max.dim = 8,
                     greedy = TRUE, estimate.dimensions = TRUE,
                     numb.cores = 1, verbose = FALSE)
print(mlcc.fit)
plot(mlcc.fit)
mclust::adjustedRandIndex(mlcc.fit$segmentation, benchmarkClustering)
misclassification(mlcc.fit$segmentation, benchmarkClustering,
                  max(table(benchmarkClustering)), 2)
integration(mlcc.fit$segmentation, benchmarkClustering)
Please note that although we use benchmarkClustering as a reference, it is not an oracle: some variables from the expression data may be highly correlated with, and act together with, the RNA data.
The algorithm aims to reduce the dimensionality of the data by clustering variables. It is assumed that the variables lie in a few low-rank subspaces. Our iterative algorithm recovers their partition and estimates both the number of clusters and the dimensions of the subspaces. This kind of problem is called Subspace Clustering. For a reference comparing multiple approaches, see here.
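To make the model concrete, the sketch below (base R only; the number of observations, cluster sizes, and subspace dimensions are arbitrary choices for illustration) generates variables lying in two low-rank subspaces: each observed variable is a linear combination of a few latent factors plus noise.

```r
set.seed(1)
n <- 100               # number of observations
latent.dim <- 3        # dimension of each latent subspace
vars.per.cluster <- 50 # observed variables per cluster

# latent factors spanning two independent subspaces
Fac1 <- matrix(rnorm(n * latent.dim), n, latent.dim)
Fac2 <- matrix(rnorm(n * latent.dim), n, latent.dim)

# each observed variable = random linear combination of its
# cluster's factors + small Gaussian noise
make.cluster <- function(Fac) {
  loadings <- matrix(rnorm(ncol(Fac) * vars.per.cluster), ncol(Fac))
  Fac %*% loadings + matrix(rnorm(n * vars.per.cluster, sd = 0.1), n)
}

X <- cbind(make.cluster(Fac1), make.cluster(Fac2))
dim(X)  # 100 observations, 100 variables in two rank-3 groups
```

On data generated this way, a call such as `mlcc.bic(X, numb.clusters = 1:5, numb.cores = 1)` should recover the two groups of variables and the dimensions of their subspaces.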
You should also use the mlcc.reps function if you have some a priori knowledge about the true segmentation. You can enforce a starting point:
mlcc.fit3 <- mlcc.reps(comp, numb.clusters = 2, numb.runs = 0, max.dim = 8,
                       initial.segmentations = list(benchmarkClustering),
                       numb.cores = 1)
print(mlcc.fit3)
mclust::adjustedRandIndex(mlcc.fit3$segmentation, benchmarkClustering)
misclassification(mlcc.fit3$segmentation, benchmarkClustering,
                  max(table(benchmarkClustering)), 2)
integration(mlcc.fit3$segmentation, benchmarkClustering)
Execution time of mlcc.bic depends mainly on the number of variables, the range of cluster numbers considered (numb.clusters), the number of runs (numb.runs), and the number of cores used (numb.cores).
For a dataset of 1000 variables and 10 clusters, the computation takes about 8 minutes on an Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz.