mglasso    R Documentation
Description

Cluster variables using an L2 fusion penalty while simultaneously estimating a Gaussian graphical model structure with an additional L1 sparsity penalty.
Usage

mglasso(
  x,
  lambda1 = 0,
  fuse_thresh = 0.001,
  maxit = NULL,
  distance = c("euclidean", "relative"),
  lambda2_start = 1e-04,
  lambda2_factor = 1.5,
  precision = 0.01,
  weights_ = NULL,
  type = c("initial"),
  compact = TRUE,
  verbose = FALSE
)
Arguments

x: Numeric matrix (n x p). Multivariate normal sample with n independent observations.

lambda1: Positive numeric scalar. Lasso penalty.

fuse_thresh: Positive numeric scalar. Threshold for cluster fusion.

maxit: Integer scalar. Maximum number of iterations.

distance: Character. Distance between regression vectors with permutation on symmetric coefficients.

lambda2_start: Numeric scalar. Starting value for the fused-group Lasso penalty (clustering penalty).

lambda2_factor: Numeric scalar. Multiplicative step used to update the fused-group Lasso penalty.

precision: Tolerance for the stopping criterion (duality gap).

weights_: Matrix of weights.

type: If "initial", use the classical version of MGLasso without weights.

compact: Logical scalar. If TRUE, only save results when the previous clusters differ from the current ones.

verbose: Logical scalar. Print trace. Default value is FALSE.
Details

Estimates a Gaussian graphical model structure while hierarchically grouping variables by optimizing a pseudo-likelihood function that combines Lasso and fused-group Lasso penalties. The problem is solved via CONESTA, COntinuation with NEsterov smoothing in a Shrinkage-Thresholding Algorithm (Hadj-Selem et al. 2018). Varying the fusion penalty λ_2 in a multiplicative fashion allows a seemingly hierarchical clustering structure to be uncovered. For λ_2 = 0, the approach is theoretically equivalent to Meinshausen-Bühlmann (2006) neighborhood selection, since it reduces to optimizing a pseudo-likelihood function with an ℓ_1 penalty (Rocha et al., 2008). The algorithm stops when all the variables have merged into one cluster. The criterion used to merge clusters is the ℓ_2-norm distance between regression vectors.
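As a rough sketch of this continuation scheme (not the package's internal code: fit_mglasso_once is a hypothetical single-level solver standing in for the CONESTA call, and x, lambda1, lambda2_start, lambda2_factor are as documented in Arguments), the multiplicative update of λ_2 could look like:

# Hypothetical sketch of the continuation over lambda2, for illustration only.
lambda2 <- lambda2_start
path <- list()
repeat {
  fit <- fit_mglasso_once(x, lambda1, lambda2)    # hypothetical single-level solve
  path[[length(path) + 1]] <- fit
  if (length(unique(fit$clusters)) == 1) break    # all variables merged: stop
  lambda2 <- lambda2 * lambda2_factor             # multiplicative update of lambda2
}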
For each iteration of the algorithm, the following function is optimized:

1/2 ∑_{i=1}^p || X^i − X^{∖i} β^i ||_2^2 + λ_1 ∑_{i=1}^p || β^i ||_1 + λ_2 ∑_{i<j} || β^i − τ_{ij}(β^j) ||_2,

where β^i is the vector of coefficients obtained by regressing X^i on the remaining variables X^{∖i}, and τ_{ij} is the permutation exchanging β_j^i with β_i^j, so that the two vectors are compared on matching coordinates.
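To make the merge criterion concrete, here is a minimal illustration (an assumption-laden sketch, not the package's implementation), assuming beta is a p x p matrix whose i-th row holds β^i with a zero diagonal:

# Euclidean distance between beta^i and tau_ij(beta^j), for illustration only.
dist_after_swap <- function(beta, i, j) {
  bi <- beta[i, ]
  bj <- beta[j, ]
  bj[c(i, j)] <- bj[c(j, i)]   # tau_ij: swap coordinates i and j of beta^j
  sqrt(sum((bi - bj)^2))       # l2 distance between the aligned vectors
}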
Value

A list-like object of class mglasso is returned.

out: List of lists. Each element of the list corresponds to a clustering level and stores, in particular, the estimated regression matrix (beta) and the cluster assignments (clusters), as used in the Examples below.

l1: The sparsity penalty used.
See Also

conesta() for the problem solver, plot_mglasso() for plotting the output of mglasso.
Examples

## Not run: 
reticulate::use_condaenv("rmglasso", required = TRUE)
n = 50
K = 3
p = 9
rho = 0.85
blocs <- list()
for (j in 1:K) {
  bloc <- matrix(rho, nrow = p/K, ncol = p/K)
  for (i in 1:(p/K)) {
    bloc[i, i] <- 1
  }
  blocs[[j]] <- bloc
}
mat.covariance <- Matrix::bdiag(blocs)
mat.covariance
set.seed(11)
X <- mvtnorm::rmvnorm(n, mean = rep(0, p), sigma = as.matrix(mat.covariance))
X <- scale(X)
res <- mglasso(X, 0.1, lambda2_start = 0.1)
res$out[[1]]$clusters
res$out[[1]]$beta

## End(Not run)
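Building on the example above, a hedged follow-up (assuming each element of res$out stores clusters and beta, as shown) is to walk every stored clustering level:

# Print the number of clusters at each stored level, for illustration only.
for (lvl in seq_along(res$out)) {
  k <- length(unique(res$out[[lvl]]$clusters))
  cat("level", lvl, "->", k, "clusters\n")
}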