Shrinking characteristics of precision matrix estimators. Penalized precision matrix estimation using the ADMM algorithm. Consider the case where X_{1}, ..., X_{n} are iid N_{p}(μ, Σ) and we are tasked with estimating the precision matrix, denoted Ω \equiv Σ^{-1}. This function solves the following optimization problem:

\hat{Ω}_{λ} = \arg\min_{Ω \in S_{+}^{p}} \left\{ Tr\left(SΩ\right) - \log\det\left(Ω\right) + λ \left\| A Ω B - C \right\|_{1} \right\}

where λ > 0 and we define \left\| A \right\|_{1} = ∑_{i, j} \left| A_{ij} \right|.
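As a quick illustration of the objective above, the following base-R sketch (not part of the package) evaluates the penalized objective for a candidate Ω; all inputs are toy values chosen for clarity.

```r
# Sketch (base R only): evaluate Tr(S Omega) - log det(Omega) + lam * ||A Omega B - C||_1
# for a candidate precision matrix Omega. Inputs are illustrative toy values.
obj <- function(Omega, S, A, B, C, lam) {
  sum(diag(S %*% Omega)) -
    as.numeric(determinant(Omega, logarithm = TRUE)$modulus) +
    lam * sum(abs(A %*% Omega %*% B - C))
}

p <- 3
S <- diag(p)                       # toy "sample covariance"
Omega <- diag(p)                   # candidate estimate
A <- diag(p); B <- diag(p); C <- matrix(0, p, p)
obj(Omega, S, A, B, C, lam = 0.1)  # Tr = 3, log det = 0, penalty = 0.3, so 3.3
```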
shrink(X = NULL, Y = NULL, S = NULL, A = diag(ncol(S)),
  B = diag(ncol(S)), C = matrix(0, ncol = ncol(B), nrow = ncol(A)),
  nlam = 10, lam.max = NULL, lam.min.ratio = 0.001, lam = NULL,
  alpha = 1, path = FALSE, rho = 2, mu = 10, tau.rho = 2,
  iter.rho = 10, crit = c("ADMM", "loglik"), tol.abs = 1e-04,
  tol.rel = 1e-04, maxit = 10000, adjmaxit = NULL, K = 5,
  crit.cv = c("MSE", "loglik", "penloglik", "AIC", "BIC"), start = c("warm",
  "cold"), cores = 1, trace = c("progress", "print", "none"))

X 
option to provide an n x p data matrix. Each row corresponds to a single observation and each column contains n observations of a single feature/variable. 
Y 
option to provide an n x r response matrix. Each row corresponds to a single observation and each column contains n responses for a single response variable. 
S 
option to provide a p x p sample covariance matrix (denominator n). If this argument is NULL and X is provided instead, S will be computed automatically. Defaults to NULL. 
A 
option to provide a user-specified matrix for the penalty term. This matrix must have p columns. Defaults to the identity matrix. 
B 
option to provide a user-specified matrix for the penalty term. This matrix must have p rows. Defaults to the identity matrix. 
C 
option to provide a user-specified matrix for the penalty term. This matrix must have nrow(A) rows and ncol(B) columns. Defaults to the zero matrix. 
nlam 
number of lam tuning parameters for the penalty term, generated from lam.max and lam.min.ratio. Defaults to 10. 
lam.max 
option to specify the maximum lam tuning parameter. Defaults to NULL. 
lam.min.ratio 
smallest lam value provided as a fraction of lam.max. The function will generate nlam tuning parameters decreasing on the log10 scale from lam.max to lam.max * lam.min.ratio. Defaults to 0.001. 
lam 
option to provide positive tuning parameters for the penalty term. This will cause nlam and lam.min.ratio to be disregarded. Defaults to NULL. 
alpha 
elastic net mixing parameter contained in [0, 1]. Defaults to 1. 
path 
option to return the regularization path. This option should be used with extreme care if the dimension is large. If set to TRUE, cores must be set to 1 and errors and optimal tuning parameters will be based on the full sample. Defaults to FALSE. 
rho 
initial step size for the ADMM algorithm. Defaults to 2. 
mu 
factor for the primal and dual residual norms in the ADMM algorithm. This will be used to adjust the step size rho after each iteration. Defaults to 10. 
tau.rho 
factor by which to increase/decrease the step size rho. Defaults to 2. 
iter.rho 
step size rho will be updated every iter.rho iterations. Defaults to 10. 
crit 
criterion for convergence (ADMM or loglik). Defaults to ADMM. 
tol.abs 
absolute convergence tolerance. Defaults to 1e-4. 
tol.rel 
relative convergence tolerance. Defaults to 1e-4. 
maxit 
maximum number of iterations. Defaults to 1e4. 
adjmaxit 
adjusted maximum number of iterations. During cross validation this option allows the user to adjust the maximum number of iterations after the first lam tuning parameter has fully converged (assuming warm starts). Defaults to NULL. 
K 
specify the number of folds for cross validation. Defaults to 5. 
crit.cv 
cross validation criterion (MSE, loglik, penloglik, AIC, or BIC). Defaults to MSE. 
start 
specify a warm or cold start for cross validation. Defaults to warm. 
cores 
option to run CV in parallel. Defaults to cores = 1. 
trace 
option to display progress of CV. Choose one of progress, print, or none. Defaults to progress. 
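To make the interplay of nlam, lam.max, and lam.min.ratio concrete, the lam grid can be sketched in base R as follows (a sketch based on the stated defaults: nlam values decreasing on the log10 scale from lam.max down to lam.max * lam.min.ratio; not the package's internal code):

```r
# Sketch: a log10-spaced tuning-parameter grid from lam.max down to
# lam.max * lam.min.ratio, as implied by the argument defaults above.
lam.max <- 1; lam.min.ratio <- 0.001; nlam <- 10
lam <- 10^seq(log10(lam.max), log10(lam.max * lam.min.ratio),
              length.out = nlam)
lam  # nlam values, largest first
```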
For details on the implementation of 'shrink', see the vignette https://mgallow.github.io/SCPME/.
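The mu, tau.rho, and iter.rho arguments control the residual-balancing step-size heuristic of Boyd et al. (2011): rho is inflated when the primal residual exceeds mu times the dual residual, and deflated in the opposite case. A minimal sketch of that rule (variable names here are illustrative, not the package's internals):

```r
# Sketch of the residual-balancing rho update from Boyd et al. (2011).
# r_primal and s_dual stand in for the primal and dual residual norms.
update_rho <- function(rho, r_primal, s_dual, mu = 10, tau.rho = 2) {
  if (r_primal > mu * s_dual) {
    rho * tau.rho        # primal residual dominates: increase step size
  } else if (s_dual > mu * r_primal) {
    rho / tau.rho        # dual residual dominates: decrease step size
  } else {
    rho                  # residuals balanced: leave rho unchanged
  }
}

update_rho(2, r_primal = 50, s_dual = 1)  # returns 4
```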
Returns an object of class ADMMsigma, which includes:
Call 
function call. 
Iterations 
number of iterations. 
Tuning 
optimal tuning parameter. 
Lambdas 
grid of lambda values for CV. 
maxit 
maximum number of iterations. 
Omega 
estimated penalized precision matrix. 
Sigma 
estimated covariance matrix from the penalized precision matrix (inverse of Omega). 
Path 
array containing the solution path. Solutions will be ordered in ascending alpha values for each lambda. 
Z 
final sparse update of estimated penalized precision matrix. 
Y 
final dual update. 
rho 
final step size. 
Loglik 
penalized log-likelihood for Omega. 
MIN.error 
minimum average cross validation error (cv.crit) for optimal parameters. 
AVG.error 
average cross validation error (cv.crit) across all folds. 
CV.error 
cross validation errors (cv.crit). 
Matt Galloway gall0441@umn.edu
Boyd, Stephen, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, and others. 2011. 'Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers.' Foundations and Trends in Machine Learning 3 (1). Now Publishers, Inc.: 1-122. https://web.stanford.edu/~boyd/papers/pdf/admm_distr_stats.pdf
Hu, Yue, Chi, Eric C., and Allen, Genevera I. 2016. 'ADMM Algorithmic Regularization Paths for Sparse Statistical Machine Learning.' Splitting Methods in Communication, Imaging, Science, and Engineering. Springer: 433-459.
Molstad, Aaron J., and Adam J. Rothman. 2017. 'Shrinking Characteristics of Precision Matrix Estimators.' Biometrika. https://doi.org/10.1093/biomet/asy023
Rothman, Adam. 2017. 'STAT 8931 notes on an algorithm to compute the lasso-penalized Gaussian likelihood precision matrix estimator.'
# generate some data
data = data_gen(n = 100, p = 5, r = 1)
# lasso penalized omega (glasso)
shrink(X = data$X, Y = data$Y)
# lasso penalized beta (print estimated omega)
lam.max = max(abs(t(data$X) %*% data$Y))
(shrink = shrink(X = data$X, Y = data$Y, B = cov(data$X, data$Y), lam.max = lam.max))
# print estimated beta
shrink$Z
