Description

Penalized precision matrix estimation using the graphical lasso (glasso) algorithm. Consider the case where X_{1}, ..., X_{n} are iid N_{p}(μ, Σ) and we are tasked with estimating the precision matrix, denoted Ω ≡ Σ^{-1}. This function solves the following optimization problem:

\hat{\Omega}_{\lambda} = \arg\min_{\Omega \in S_{+}^{p}} \left\{ \mathrm{Tr}\left(S\Omega\right) - \log\det\left(\Omega\right) + \lambda \left\| \Omega \right\|_{1} \right\}

where λ > 0 and we define \left\| A \right\|_{1} = \sum_{i, j} \left| A_{ij} \right|.
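For intuition, the objective above is straightforward to evaluate for any candidate Ω. The following is a minimal R sketch, not part of the package; the helper name glasso_objective is illustrative:

# illustrative helper (not part of CVglasso): evaluates
# Tr(S Omega) - log det(Omega) + lam * ||Omega||_1
glasso_objective = function(Omega, S, lam) {
  logdet = as.numeric(determinant(Omega, logarithm = TRUE)$modulus)
  sum(diag(S %*% Omega)) - logdet + lam * sum(abs(Omega))
}

Note that the penalty here sums over all entries, including the diagonal, matching the elementwise L1 norm defined above; in the actual fit, the diagonal argument controls whether the diagonal is penalized.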
Usage

CVglasso(X = NULL, S = NULL, nlam = 10, lam.min.ratio = 0.01,
  lam = NULL, diagonal = FALSE, path = FALSE, tol = 1e-04,
  maxit = 10000, adjmaxit = NULL, K = 5, crit.cv = c("loglik",
  "AIC", "BIC"), start = c("warm", "cold"), cores = 1,
  trace = c("progress", "print", "none"), ...)
Arguments

X: option to provide a n x p data matrix. Each row corresponds to a single observation and each column contains n observations of a single feature/variable.

S: option to provide a p x p sample covariance matrix (denominator n). If argument is NULL and X is provided instead, S will be computed automatically.

nlam: number of lam tuning parameters for the penalty term, generated automatically from lam.min.ratio. Defaults to 10.

lam.min.ratio: smallest lam value provided as a fraction of the maximum lam value (tuning parameters are generated on the log scale from lam.min.ratio times the maximum up to the maximum). Defaults to 1e-2.

lam: option to provide positive tuning parameters for the penalty term. This will cause nlam and lam.min.ratio to be disregarded. If a vector of parameters is provided, they should be in increasing order. Defaults to NULL.

diagonal: option to penalize the diagonal elements of the estimated precision matrix (Ω). Defaults to FALSE.

path: option to return the regularization path. This option should be used with extreme care if the dimension is large. If set to TRUE, cores must be set to 1 and errors and optimal tuning parameters will be based on the full sample. Defaults to FALSE.

tol: convergence tolerance. Iterations will stop when the average absolute difference in parameter estimates is less than tol. Defaults to 1e-4.

maxit: maximum number of iterations. Defaults to 1e4.

adjmaxit: adjusted maximum number of iterations. During cross validation this option allows the user to adjust the maximum number of iterations after the first lam tuning parameter has converged; it is intended to be paired with warm starts to speed up the search. Defaults to NULL.

K: specify the number of folds for cross validation. Defaults to 5.

crit.cv: cross validation criterion (loglik, AIC, or BIC). Defaults to loglik.

start: specify a warm or cold start for cross validation. Default is warm.

cores: option to run CV in parallel. Defaults to cores = 1.

trace: option to display progress of CV. Choose one of progress to print a progress bar, print to print completed tuning parameters, or none. Defaults to progress.

...: additional arguments to pass to glasso. See the sketch below for how these arguments combine in a call.
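As a sketch, assuming the package is loaded and a data matrix X is available (the option strings follow the crit.cv and trace choices listed above):

library(CVglasso)

# select the tuning parameter over a custom increasing grid
# by 5-fold cross validation with the BIC criterion
fit = CVglasso(X = X, lam = 10^seq(-2, 0, length.out = 20),
               K = 5, crit.cv = "BIC", trace = "none")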
Details

For details on the implementation of the glasso function, see Tibshirani's website: http://statweb.stanford.edu/~tibs/glasso/.
Value

Returns an object of class CVglasso which includes:

Call: function call.

Iterations: number of iterations.

Tuning: optimal tuning parameters (lam and alpha).

Lambdas: grid of lambda values for CV.

maxit: maximum number of iterations for the outer (blockwise) loop.

Omega: estimated penalized precision matrix.

Sigma: estimated covariance matrix from the penalized precision matrix (inverse of Omega).

Path: array containing the solution path. Solutions will be ordered by ascending lambda values.

MIN.error: minimum average cross validation error (cv.crit) for optimal parameters.

AVG.error: average cross validation error (cv.crit) across all folds.

CV.error: cross validation errors (cv.crit).
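Assuming a fitted object fit returned by CVglasso, the components listed above are extracted with the usual $ operator:

fit = CVglasso(X)
fit$Tuning      # optimal tuning parameters selected by CV
fit$Omega       # penalized precision matrix estimate
fit$Sigma       # corresponding covariance matrix estimate
fit$MIN.error   # minimum average cross validation error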
Author(s)

Matt Galloway <gall0441@umn.edu>
References

Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. 2008. 'Sparse Inverse Covariance Estimation with the Graphical Lasso.' Biostatistics 9 (3): 432-441.

Banerjee, Onureena, Laurent El Ghaoui, and Alexandre d'Aspremont. 2008. 'Model Selection through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data.' Journal of Machine Learning Research 9: 485-516.

Tibshirani, Robert. 1996. 'Regression Shrinkage and Selection via the Lasso.' Journal of the Royal Statistical Society, Series B (Methodological) 58 (1): 267-288.

Meinshausen, Nicolai, and Peter Buhlmann. 2006. 'High-Dimensional Graphs and Variable Selection with the Lasso.' The Annals of Statistics 34 (3): 1436-1462.

Witten, Daniela M., Jerome H. Friedman, and Noah Simon. 2011. 'New Insights and Faster Computations for the Graphical Lasso.' Journal of Computational and Graphical Statistics 20 (4): 892-900.

Tibshirani, Robert, Jacob Bien, Jerome Friedman, Trevor Hastie, Noah Simon, Jonathan Taylor, and Ryan J. Tibshirani. 2012. 'Strong Rules for Discarding Predictors in Lasso-Type Problems.' Journal of the Royal Statistical Society, Series B (Statistical Methodology) 74 (2): 245-266.

Ghaoui, Laurent El, Vivian Viallon, and Tarek Rabbani. 2010. 'Safe Feature Elimination for the Lasso and Sparse Supervised Learning Problems.' arXiv preprint arXiv:1009.4219.

Osborne, Michael R., Brett Presnell, and Berwin A. Turlach. 2000. 'On the Lasso and its Dual.' Journal of Computational and Graphical Statistics 9 (2): 319-337.

Rothman, Adam. 2017. 'STAT 8931 Notes on an Algorithm to Compute the Lasso-Penalized Gaussian Likelihood Precision Matrix Estimator.'
Examples

# generate data from a sparse precision matrix
# first compute the covariance matrix
S = matrix(0.7, nrow = 5, ncol = 5)
for (i in 1:5){
for (j in 1:5){
S[i, j] = S[i, j]^abs(i - j)
}
}
# generate 100 x 5 matrix with rows drawn from iid N_p(0, S)
Z = matrix(rnorm(100*5), nrow = 100, ncol = 5)
out = eigen(S, symmetric = TRUE)
S.sqrt = out$vectors %*% diag(out$values^0.5)
S.sqrt = S.sqrt %*% t(out$vectors)
X = Z %*% S.sqrt
# lasso penalty CV
CVglasso(X)
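Building on the example above, a sketch that also returns the regularization path; per the path argument, this requires cores = 1, and the assumed layout of the returned array (one p x p slice per lambda value) should be verified against the Path description:

# return the full solution path (use with care when p is large)
fit.path = CVglasso(X, path = TRUE, cores = 1)

# solutions are ordered by ascending lambda;
# assumed to be a p x p x (number of lambdas) array
dim(fit.path$Path)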