Let S_{+}^q be the set of q by q symmetric and positive definite matrices and let y_i\in R^q be the measurements of the q responses for the ith subject (i=1,…, n). The model assumes that y_i is a realization of the q-variate random vector
Y_i = μ + β'x_i + \varepsilon_i, \ \ \ \ i=1,…, n
where μ\in R^q is an unknown intercept vector; β\in R^{p\times q} is an unknown regression coefficient matrix; x_i \in R^p is the known vector of values of the ith subject's predictors; and \varepsilon_1,…, \varepsilon_n are n independent copies of a q-variate Normal random vector with mean 0 and unknown inverse covariance matrix Ω \in S_{+}^q.
This function computes penalized likelihood estimates of the unknown parameters μ, β, and Ω. Let \bar y=n^{-1} ∑_{i=1}^n y_i and \bar{x} = n^{-1}∑_{i=1}^n x_i. These estimates are
(\hat{β}, \hatΩ) = \arg\min_{(B, Q)\in R^{p\times q}\times S_{+}^q} \left\{ g(B, Q) + λ_1 \left(∑_{j\neq k} |Q_{jk}| + 1(p ≥ n) ∑_{j=1}^q |Q_{jj}|\right) + 2λ_{2}∑_{j=1}^p∑_{k=1}^q |B_{jk}|\right\}
and \hatμ=\bar y - \hatβ'\bar x, where
g(B, Q) = {\rm tr}\{n^{-1}(Y-XB)'(Y-XB) Q\}-\log|Q|,
Y\in R^{n\times q} has ith row (y_{i}-\bar y)', and X\in R^{n\times p} has ith row (x_{i}-\bar{x})'.
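Given these definitions, the objective value at a candidate pair (B, Q) can be evaluated directly. The following sketch (plain base R; the function name `mrce_objective` is illustrative and not part of the package) computes g(B, Q) plus the two penalty terms after centering Y and X as described above. As in the documentation of lam2, a scalar or a p by q matrix works for the second tuning parameter.

```r
# Evaluate the penalized objective defined above at a candidate (B, Q).
# B: p x q coefficient matrix; Q: q x q positive definite matrix.
mrce_objective <- function(X, Y, B, Q, lam1, lam2) {
  Xc <- scale(X, center = TRUE, scale = FALSE)  # rows are (x_i - xbar)'
  Yc <- scale(Y, center = TRUE, scale = FALSE)  # rows are (y_i - ybar)'
  n <- nrow(Y); p <- ncol(X)
  R <- Yc - Xc %*% B                            # residual matrix
  g <- sum(diag(crossprod(R) %*% Q)) / n -
       determinant(Q, logarithm = TRUE)$modulus
  off <- sum(abs(Q)) - sum(abs(diag(Q)))        # sum over j != k of |Q_jk|
  diag.pen <- if (p >= n) sum(abs(diag(Q))) else 0
  # lam2 may be a scalar or a p x q matrix of elementwise penalties
  as.numeric(g + lam1 * (off + diag.pen) + 2 * sum(lam2 * abs(B)))
}

set.seed(1)
X <- matrix(rnorm(20 * 3), 20, 3)
Y <- matrix(rnorm(20 * 2), 20, 2)
val <- mrce_objective(X, Y, B = matrix(0, 3, 2), Q = diag(2),
                      lam1 = 0.1, lam2 = 0.1)
```

With B = 0, Q = I, and both tuning parameters zero, the value reduces to the trace of the centered-response Gram matrix divided by n, which gives a quick sanity check.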
X: An n by p matrix of the values for the prediction variables. The ith row of X is x_i' defined above.
Y: An n by q matrix of the observed responses. The ith row of Y is y_i' defined above.
lam1: A single value for λ_1 defined above. This argument is only used if method="single".
lam2: A single value for λ_2 defined above (or a p by q matrix with (j,k)th entry λ_{2jk}, in which case the penalty 2λ_{2}∑_{j=1}^p∑_{k=1}^q |B_{jk}| becomes 2∑_{j=1}^p∑_{k=1}^q λ_{2jk}|B_{jk}|). This argument is not used if method="cv".
lam1.vec: A vector of candidate values for λ_1 from which the cross validation procedure searches; only used when method="cv".
lam2.vec: A vector of candidate values for λ_2 from which the cross validation procedure searches; only used when method="cv".
method: One of three options: "single", to compute the estimates at the single pair of tuning-parameter values lam1 and lam2; "cv", to select λ_1 and λ_2 by kfold cross validation over the candidate values in lam1.vec and lam2.vec; or "fixed.omega", to minimize the objective function over B with Q fixed at omega.
cov.tol: Convergence tolerance for the glasso algorithm that minimizes the objective function (defined above) with B fixed.
cov.maxit: The maximum number of iterations allowed for the glasso algorithm that minimizes the objective function (defined above) with B fixed.
omega: A user-supplied fixed value of Q. Only used when method="fixed.omega".
maxit.out: The maximum number of iterations allowed for the outer loop of the exact MRCE algorithm.
maxit.in: The maximum number of iterations allowed for the algorithm that minimizes the objective function, defined above, with Ω fixed.
tol.out: Convergence tolerance for the outer loop of the exact MRCE algorithm.
tol.in: Convergence tolerance for the algorithm that minimizes the objective function, defined above, with Ω fixed.
kfold: The number of folds to use when method="cv".
silent: Logical: when silent=FALSE, progress updates are printed.
eps: The algorithm will terminate if the minimum diagonal entry of the current iterate's residual sample covariance is less than eps.
standardize: Logical: should the columns of X be standardized before the model is fit?
permute: Logical: when method="cv", should the subject indices be randomly permuted before the folds are formed?
Please see Rothman, Levina, and Zhu (2010)
for more information on the algorithm and model.
This version of the software uses the glasso algorithm (Friedman et al., 2008) through the R package glasso. If the algorithm is running slowly, track its progress with silent=FALSE. In some cases, choosing cov.tol=0.1 and tol.out=1e-10 allows the algorithm to make faster progress. If one uses a matrix for lam2, consider setting tol.in=1e-12.
When p ≥ n, the diagonal of the optimization variable corresponding to the inverse covariance matrix of the error is penalized. Without diagonal penalization, if there exists a \bar B such that the qth column of Y is equal to the qth column of X\bar B, then a global minimizer of the objective function (defined above) does not exist.
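The need for diagonal penalization can be seen numerically. The sketch below (plain base R, not package code) uses a singular "residual sample covariance" S, as would arise when one response column is fit exactly, and moves Q along the direction of a null vector of S: the trace term stays constant while log|Q| grows without bound, so the unpenalized fit term decreases forever and no minimizer exists.

```r
# Singular residual sample covariance: the second response is fit exactly,
# so S has null vector v = (0, 1)'.
S <- matrix(c(1, 0, 0, 0), 2, 2)
v <- c(0, 1)
unpen <- function(t) {
  Q <- diag(2) + t * tcrossprod(v)  # positive definite for all t > -1
  as.numeric(sum(diag(S %*% Q)) - determinant(Q, logarithm = TRUE)$modulus)
}
vals <- sapply(c(1, 10, 100, 1000), unpen)
# tr(SQ) = 1 for every t while log|Q| = log(1 + t) grows without bound,
# so the unpenalized objective equals 1 - log(1 + t) and decreases toward -Inf.
```

Adding λ_1 ∑_j |Q_{jj}| makes this escape direction costly, restoring a finite minimizer.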
The algorithm that minimizes the objective function, defined above, with Q fixed uses a similar update strategy and termination criterion to those used by Friedman et al. (2010) in the corresponding R package glmnet.
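The B-step with Ω fixed can be sketched as follows. This is not the package's glmnet-style coordinate descent; it is a plain proximal-gradient stand-in (with an illustrative function name) that minimizes the same criterion, tr(n^{-1}(Y-XB)'(Y-XB)Ω) + 2λ_2∑|B_{jk}|, using elementwise soft-thresholding.

```r
# Soft-thresholding operator, the proximal map of the L1 penalty.
soft <- function(z, t) sign(z) * pmax(abs(z) - t, 0)

# Minimize tr(n^{-1}(Y - XB)'(Y - XB) Omega) + 2*lam2*sum(abs(B)) over B,
# with Omega held fixed, by proximal gradient descent.
fit_B_fixed_omega <- function(X, Y, Omega, lam2, iters = 300) {
  n <- nrow(X)
  B <- matrix(0, ncol(X), ncol(Y))
  # Step size from the Lipschitz constant of the smooth part's gradient:
  # (2/n) * largest eigenvalue of X'X * largest eigenvalue of Omega.
  L <- (2 / n) *
    max(eigen(crossprod(X), symmetric = TRUE, only.values = TRUE)$values) *
    max(eigen(Omega, symmetric = TRUE, only.values = TRUE)$values)
  for (it in seq_len(iters)) {
    G <- (2 / n) * crossprod(X, X %*% B - Y) %*% Omega  # gradient of fit term
    B <- soft(B - G / L, 2 * lam2 / L)                  # elementwise shrinkage
  }
  B
}

set.seed(2)
X <- matrix(rnorm(40 * 4), 40, 4)
Y <- X %*% matrix(1, 4, 2) + matrix(rnorm(40 * 2), 40, 2)
Omega <- diag(2)
Bhat <- fit_B_fixed_omega(X, Y, Omega, lam2 = 0.05)
```

Alternating this B-step with the glasso Ω-step gives the outer loop described above; a large lam2 drives entries of B exactly to zero, which is the sparsity the penalty is designed to produce.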
A list containing
Bhat: This is \hatβ \in R^{p\times q} defined above.
muhat: This is the intercept estimate \hatμ \in R^q defined above.
omega: This is \hatΩ \in S_{+}^q defined above.
mx: This is \bar x \in R^p defined above.
my: This is \bar y \in R^q defined above.
best.lam1: The value of λ_1 selected by cross validation; NULL unless method="cv".
best.lam2: The value of λ_2 selected by cross validation; NULL unless method="cv".
cv.err: The matrix of cross validation errors, with an entry for each pair of candidate values in lam1.vec and lam2.vec; NULL unless method="cv".
The algorithm is fastest when λ_1 and λ_2 are large. Use silent=FALSE to check whether the algorithm is converging before the total number of iterations exceeds maxit.out.
Adam J. Rothman
Rothman, A. J., Levina, E., and Zhu, J. (2010) Sparse multivariate regression with covariance estimation. Journal of Computational and Graphical Statistics. 19: 947–962.
Friedman, J., Hastie, T., and Tibshirani, R. (2008) Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3), 432-441.
Friedman, J., Hastie, T., and Tibshirani, R. (2010) Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1), 1-22.
set.seed(48105)
n=50
p=10
q=5

# AR(1)-type error precision structure: (Omega.inv)[i,j] = 0.7^|i-j|
Omega.inv=diag(q)
for(i in 1:q) for(j in 1:q)
  Omega.inv[i,j]=0.7^abs(i-j)
out=eigen(Omega.inv, symmetric=TRUE)
Omega.inv.sqrt=tcrossprod(out$vec*rep(out$val^(0.5), each=q), out$vec)
Omega=tcrossprod(out$vec*rep(out$val^(-1), each=q), out$vec)

# simulate predictors, correlated errors, and a sparse coefficient matrix
X=matrix(rnorm(n*p), nrow=n, ncol=p)
E=matrix(rnorm(n*q), nrow=n, ncol=q)%*%Omega.inv.sqrt
Beta=matrix(rbinom(p*q, size=1, prob=0.1)*runif(p*q, min=1, max=2), nrow=p, ncol=q)
mu=1:q
Y=rep(1,n)%*%t(mu) + X%*%Beta + E

# select lam1 and lam2 by cross validation
lam1.vec=rev(10^seq(from=-2, to=0, by=0.5))
lam2.vec=rev(10^seq(from=-2, to=0, by=0.5))
cvfit=mrce(Y=Y, X=X, lam1.vec=lam1.vec, lam2.vec=lam2.vec, method="cv")
cvfit

# fit at a single pair of tuning-parameter values
fit=mrce(Y=Y, X=X, lam1=10^(-1.5), lam2=10^(-0.5), method="single")
fit

# refit with omega fixed, heavily penalizing only the entries of B
# that the first fit estimated as zero (preserves the sparsity pattern)
lam2.mat=1000*(fit$Bhat==0)
refit=mrce(Y=Y, X=X, lam2=lam2.mat, method="fixed.omega", omega=fit$omega, tol.in=1e-12)
refit