mplnVarClassification: Classification Using MPLN Via Variational-EM

View source: R/mplnClassification.R


Classification Using MPLN Via Variational-EM

Description

Performs classification using mixtures of multivariate Poisson-log normal (MPLN) distributions, with variational expectation-maximization (EM) for parameter estimation. Model selection is performed using AIC, AIC3, BIC, and ICL. There is no internal parallelization, so the code runs in serial.
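The criteria above penalize the final log-likelihood by the number of free parameters. A minimal sketch of the standard formulas (illustrative only; the package computes these internally, and ICL additionally subtracts an entropy term from BIC):

```r
# Standard information criteria from a final log-likelihood (logL),
# number of free parameters (k), and number of observations (n).
# Illustrative only; this is not the package's internal code.
modelSelectionCriteria <- function(logL, k, n) {
  c(AIC  = -2 * logL + 2 * k,
    AIC3 = -2 * logL + 3 * k,        # AIC3 penalty is 3k (Bozdogan, 1994)
    BIC  = -2 * logL + k * log(n))   # ICL = BIC plus an entropy penalty
}
modelSelectionCriteria(logL = -5000, k = 27, n = 1000)
```

Lower values indicate the preferred model under each criterion.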

Usage

mplnVarClassification(
  dataset,
  membership,
  gmin,
  gmax,
  initMethod = "kmeans",
  nInitIterations = 2,
  normalize = "Yes"
)

Arguments

dataset

A dataset of class matrix and type integer, such that rows correspond to observations and columns correspond to variables. The dataset has dimensions n x d, where n is the total number of observations and d is the dimensionality. Rows whose row sums are zero are removed prior to cluster analysis.

membership

A numeric vector of length nrow(dataset) containing the cluster membership of each observation. Assign the value zero to observations with unknown membership. E.g., for a dataset with 10 observations and 2 known groups: c(1, 1, 1, 2, 2, 0, 0, 0, 1, 2).
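For example, the 10-observation membership vector described above can be constructed and summarized as follows (illustrative only):

```r
# 0 marks observations with unknown membership; 1 and 2 are known groups
membership <- c(1, 1, 1, 2, 2, 0, 0, 0, 1, 2)
known <- membership != 0
table(membership[known])   # counts per known group
sum(!known)                # number of observations to classify: 3
```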

gmin

A positive integer specifying the minimum number of components to be considered in the clustering run. The value should be equal to or greater than max(membership).

gmax

A positive integer, >= gmin, specifying the maximum number of components to be considered in the clustering run.

initMethod

An algorithm for initialization. Current options are "kmeans", "random", "medoids", "clara", or "fanny". Default is "kmeans".

nInitIterations

A positive integer or zero specifying the number of initialization runs to be performed. Each run consists of 10 iterations via MPLNClust, and values from the run with the highest log-likelihood are used as initialization values. Default is 2.
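The strategy of several short initialization runs, keeping the best, can be sketched with stats::kmeans as a stand-in for the internal MPLNClust runs (illustrative only; MPLNClust keeps the run with the highest log-likelihood, whereas k-means minimizes within-cluster sums of squares):

```r
set.seed(42)
x <- matrix(rnorm(200), ncol = 2)  # toy data, 100 observations

# Several short runs (iter.max = 10 mirrors the 10 iterations per run);
# keep the run with the best objective value
runs <- lapply(1:5, function(i) kmeans(x, centers = 2, iter.max = 10))
objectives <- vapply(runs, function(r) r$tot.withinss, numeric(1))
best <- runs[[which.min(objectives)]]
best$centers  # would serve as initialization values
```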

normalize

A string with options "Yes" or "No" specifying whether normalization should be performed. Normalization factors are currently calculated using the TMM method of the edgeR package. Default is "Yes".
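As a rough illustration of what normalization factors represent, the sketch below uses plain library-size scaling; this is NOT the TMM method, which the package obtains via edgeR:

```r
# Two observations (rows) with proportional counts but different depths
counts <- matrix(c(10L, 20L,  5L,
                   40L, 80L, 20L), nrow = 2, byrow = TRUE)
libSizes <- rowSums(counts)
# Simple library-size factors, scaled so their product is 1
# (NOT TMM; shown only to convey the idea of per-observation scaling)
factors <- libSizes / exp(mean(log(libSizes)))
factors  # 0.5 2.0
```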

Value

Returns an S3 object of class mplnVariational with results.

  • dataset - The input dataset on which clustering is performed.

  • dimensionality - Dimensionality of the input dataset.

  • normalizationFactors - A vector of normalization factors used for input dataset.

  • gmin - Minimum number of components/clusters considered in the clustering run.

  • gmax - Maximum number of components/clusters considered in the clustering run.

  • initalizationMethod - Method used for initialization.

  • allResults - A list with all results.

  • logLikelihood - A vector of final log-likelihood values, one per component/cluster size.

  • numbParameters - A vector with the number of parameters, one per component/cluster size.

  • trueLabels - The vector of true labels, if provided by user.

  • ICLresults - A list with all ICL model selection results.

  • BICresults - A list with all BIC model selection results.

  • AICresults - A list with all AIC model selection results.

  • AIC3results - A list with all AIC3 model selection results.

  • slopeHeuristics - If more than 10 models are considered, the slope heuristic results as obtained via capushe::capushe().

  • DjumpModelSelected - If more than 10 models are considered, the model selected by the Djump criterion of capushe::capushe().

  • DDSEModelSelected - If more than 10 models are considered, the model selected by the DDSE criterion of capushe::capushe().

  • totalTime - Total time used for clustering and model selection.
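The returned object can be navigated as a named list. The sketch below uses a fabricated stand-in (field names taken from the list above and from Example 2 below; this is not actual package output):

```r
# Fabricated stand-in for an mplnVariational result (illustration only)
results <- list(
  gmin = 2, gmax = 3,
  logLikelihood = c(-5200, -5100),
  BICresults = list(BICmodelSelectedLabels = c(1, 1, 2, 2, 2))
)
# Labels of the BIC-selected model, e.g. to seed a later classification run
labels <- results$BICresults$BICmodelSelectedLabels
labels
```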

Author(s)

Anjali Silva, anjali@alumni.uoguelph.ca, Sanjeena Dang, sanjeena.dang@carleton.ca.

References

Aitchison, J. and C. H. Ho (1989). The multivariate Poisson-log normal distribution. Biometrika 76.

Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In Second International Symposium on Information Theory, New York, NY, USA, pp. 267–281. Springer Verlag.

Biernacki, C., G. Celeux, and G. Govaert (2000). Assessing a mixture model for clustering with the integrated classification likelihood. IEEE Transactions on Pattern Analysis and Machine Intelligence 22.

Bozdogan, H. (1994). Mixture-model cluster analysis using model selection criteria and a new informational measure of complexity. In Proceedings of the First US/Japan Conference on the Frontiers of Statistical Modeling: An Informational Approach: Volume 2 Multivariate Statistical Modeling, pp. 69–113. Dordrecht: Springer Netherlands.

Robinson, M.D., and Oshlack, A. (2010). A scaling normalization method for differential expression analysis of RNA-seq data. Genome Biology 11, R25.

Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics 6.

Silva, A. et al. (2019). A multivariate Poisson-log normal mixture model for clustering transcriptome sequencing data. BMC Bioinformatics 20.

Subedi, S., and R. Browne (2020). A parsimonious family of multivariate Poisson-lognormal distributions for clustering multivariate count data. arXiv preprint arXiv:2004.06857.

Examples

# Example 1
trueMu1 <- c(6.5, 6, 6, 6, 6, 6)
trueMu2 <- c(2, 2.5, 2, 2, 2, 2)

trueSigma1 <- diag(6) * 2
trueSigma2 <- diag(6)

# Generating simulated data
sampleData <- MPLNClust::mplnDataGenerator(nObservations = 1000,
                     dimensionality = 6,
                     mixingProportions = c(0.79, 0.21),
                     mu = rbind(trueMu1, trueMu2),
                     sigma = rbind(trueSigma1, trueSigma2),
                     produceImage = "No")

# Classification
membershipInfo <- sampleData$trueMembership
length(membershipInfo) # 1000 observations

# Assume the memberships of 200 of the 1000 observations are unknown
set.seed(1234)
randomNumb <- sample(1:length(membershipInfo), 200, replace = FALSE)
membershipInfo[randomNumb] <- 0
table(membershipInfo)
#   0   1   2
# 200 593 207

# Run for g = 2:3 groups
# mplnClassificationResults <- MPLNClust::mplnVarClassification(
#                                dataset = sampleData$dataset,
#                                membership = membershipInfo,
#                                gmin = 2,
#                                gmax = 3,
#                                initMethod = "kmeans",
#                                nInitIterations = 2,
#                                normalize = "Yes")
# names(mplnClassificationResults)

# Example 2
# Use an external dataset
if (requireNamespace("MBCluster.Seq", quietly = TRUE)) {
library(MBCluster.Seq)
data("Count")
dim(Count) # 1000    8

# Clustering subset of data
subsetCountData <- as.matrix(Count[c(1:500), ])
mplnResultsMBClust <- MPLNClust::mplnVariational(
                            dataset = subsetCountData,
                            membership = "none",
                            gmin = 1,
                            gmax = 3,
                            initMethod = "kmeans",
                            nInitIterations = 2,
                            normalize = "Yes")
names(mplnResultsMBClust)


# Classification
# Using the 500 labels from the clustering above, classify the remaining 500 observations
membershipInfo <- c(mplnResultsMBClust$BICresults$BICmodelSelectedLabels,
                    rep(0, 500))
max(mplnResultsMBClust$BICresults$BICmodelSelectedLabels) # 2
# Assume there may be 2 to 4 underlying groups
# mplnClassificationResults <- MPLNClust::mplnVarClassification(
#                               dataset = as.matrix(Count),
#                               membership = membershipInfo,
#                               gmin = 2,
#                               gmax = 4,
#                               initMethod = "kmeans",
#                               nInitIterations = 2,
#                               normalize = "Yes")
# names(mplnClassificationResults)


}


anjalisilva/MPLNClust documentation built on Jan. 28, 2024, 11:02 a.m.