Description

Performs an L1-penalization in linear mixed models.

Arguments
data: Input matrix of dimension n * p; each row is an observation vector. The intercept should be included in the first column as (1,...,1). If it is not, it is added.

Y: Response variable of length n.

z: Random effects design matrix, of size n * q.

grp: Grouping variable of length n.

D: Logical value. If TRUE, the random effects are considered to be independent, i.e. their covariance matrix Psi is diagonal.

mu: Positive regularization parameter to be used for the Lasso.

step: The algorithm performs at most step iterations.

fix: Number of variables that are not submitted to selection. They have to be in the first columns of data. Default is 1, so selection is not performed on the intercept.

rand: A vector of length q: entry k is the position of random effect number k in the data matrix, 0 otherwise. If z contains variables that have both a fixed and a random effect, it is advised not to submit them to selection.

penalty.factor: Argument passed to 'glmnet'. Separate penalty factors can be applied to each coefficient; each is a number that multiplies lambda to allow differential shrinkage. It can be 0 for some variables, which implies no shrinkage, so that variable is always included in the model. Default is 1 for all variables that are not in 1:fix.

alpha: Argument passed to 'glmnet'. The elastic-net mixing parameter, with 0 <= alpha <= 1.

showit: Logical value. If TRUE, the iterations of the algorithm are printed. Default is FALSE.
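Independently of lassop, the glmnet behaviour that penalty.factor relies on can be illustrated directly. This is a minimal sketch with illustrative data, not code from the package: a penalty factor of 0 exempts a variable from shrinkage, which is how the first fix columns escape selection.

```r
library(glmnet)

set.seed(1)
x <- matrix(rnorm(100 * 10), nrow = 100)   # toy design matrix, 10 covariates
y <- rnorm(100)                            # toy response

# penalty.factor = 0 for the first variable: it is never shrunk,
# so it stays in the model at every lambda (mimicking fix = 1)
pf <- c(0, rep(1, 9))
fit <- glmnet(x, y, alpha = 1, penalty.factor = pf)
```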
Details

This function performs fixed effects selection in linear mixed models through an L1-penalization of the log-likelihood of the marginal model. The method optimizes a criterion via a multicycle ECM algorithm at the regularization parameter mu.

Two algorithms are available: one when the random effects are assumed to be independent (D=TRUE) and one when they are not (D=FALSE). Selection on the random effects is only performed when D=TRUE.
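The penalized criterion can be sketched as follows. This is not the package's internal code; it assumes the standard marginal formulation y ~ N(X beta, Z Psi Z' + sigma^2 I) and evaluates, up to an additive constant, minus twice the marginal log-likelihood plus the Lasso penalty mu * sum(|beta_j|):

```r
# Hypothetical sketch of the L1-penalized marginal criterion (illustrative only)
pen_crit <- function(beta, X, y, Z, Psi, sigma2, mu) {
  V <- Z %*% Psi %*% t(Z) + sigma2 * diag(nrow(X))  # marginal covariance of y
  r <- y - X %*% beta                               # residuals of the fixed part
  as.numeric(t(r) %*% solve(V, r)) +                # Mahalanobis term
    as.numeric(determinant(V)$modulus) +            # log-determinant term
    mu * sum(abs(beta))                             # L1 penalty on fixed effects
}
```

The ECM algorithm alternates between updating the variance parameters (Psi, sigma^2) and minimizing this criterion in beta, which is where the 'glmnet' arguments above come into play.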
Value

A 'lassop' object is returned, with the following components:

data: List of the user data: the scaled matrix used in the algorithm, whose first column is (1,...,1); Y; and Z, the design matrix of the random effects.

beta: Estimate of the fixed effects.

fitted.values: Fitted values computed from both the fixed and the random effects.

Psi: Variance of the random effects. Matrix of dimension q * q.

sigma_e: Variance of the residuals.

it: Number of iterations performed by the algorithm.

converge: Logical. TRUE if the algorithm has converged, FALSE otherwise.

u: Vector of the concatenation of the estimated random effects (u_1',...,u_q')'.

call: The call that produced this object.

mu: The penalty used in the algorithm.
Examples

## Not run:
N <- 20                                  # number of groups
p <- 80                                  # number of covariates (intercept added separately)
q <- 2                                   # number of random effect covariates
ni <- rep(6,N)                           # observations per group
n <- sum(ni)                             # total number of observations
grp <- factor(rep(1:N,ni))               # grouping variable of length n
beta <- c(1,2,4,3,rep(0,p-3))            # fixed-effects coefficients (first four non-zero)
x <- cbind(1,matrix(rnorm(n*p),nrow=n))  # design matrix with intercept column
u1 <- rnorm(N,0,sd=sqrt(2))              # random intercepts, one per group
u2 <- rnorm(N,0,sd=sqrt(2))              # random slopes, one per group
bi1 <- rep(u1,ni)
bi2 <- rep(u2,ni)
bi <- rbind(bi1,bi2)
z <- x[,1:2,drop=FALSE]                  # random effects on intercept and first covariate
epsilon <- rnorm(n)                      # residuals
y <- numeric(n)
for (k in 1:n) y[k] <- x[k,]%*%beta + t(z[k,])%*%bi[,k] + epsilon[k]
########
# independent random effects
fit <- lassop(x,y,z,grp,D=TRUE,mu=0.2,fix=1,rand=c(1,2))
# dependent random effects
fit <- lassop(x,y,z,grp,D=FALSE,mu=0.2,fix=1,rand=c(1,2))
## End(Not run)
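Assuming a fitted object fit as in the example above, the documented components of the returned 'lassop' object can be inspected directly:

```r
which(fit$beta != 0)   # indices of the selected (non-zero) fixed effects
fit$converge           # TRUE if the ECM algorithm converged
fit$sigma_e            # estimated residual variance
fit$Psi                # estimated q * q variance of the random effects
```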