Description
Efficient design matrix free procedure for fitting large-scale penalized 2 or 3-dimensional generalized linear array models (GLAM). It is also possible to fit an additional non-tensor structured component, e.g. an intercept, although this can substantially reduce the computational efficiency of the procedure. Currently the lasso penalty and the SCAD penalty are both implemented. Furthermore, the Gaussian model with identity link, the Binomial model with logit link, the Poisson model with log link and the Gamma model with log link are currently implemented. The underlying algorithm combines gradient descent and proximal gradient steps (the gdpg algorithm), see Lund et al., 2017.
Usage

glamlasso(X,
          Y,
          Z = NULL,
          family = "gaussian",
          penalty = "lasso",
          intercept = FALSE,
          weights = NULL,
          betainit = NULL,
          alphainit = NULL,
          nlambda = 100,
          lambdaminratio = 1e-04,
          lambda = NULL,
          penaltyfactor = NULL,
          penaltyfactoralpha = NULL,
          reltolinner = 1e-07,
          reltolouter = 1e-04,
          maxiter = 15000,
          steps = 1,
          maxiterinner = 3000,
          maxiterouter = 25,
          btinnermax = 100,
          btoutermax = 100,
          iwls = "exact",
          nu = 1)
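A minimal call, sketched here using the list of marginal design matrices X and the response array Y constructed in full in the Examples below, looks as follows:

fit <- glamlasso(X, Y, family = "gaussian", penalty = "lasso")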
Arguments

X: A list containing the tensor components (2 or 3) of the tensor design matrix. These are matrices of size n_i \times p_i.
Y: The response values, an array of size n_1 \times \cdots \times n_d. For option family = "binomial" this array must contain the proportion of successes; the number of trials is then specified as weights (see weights).
Z: The non-tensor structured part of the design matrix. A matrix of size n_1 \cdots n_d \times q. Is set to NULL as default.

family: A string specifying the model family (essentially the response distribution). Possible values are "gaussian", "binomial", "poisson" and "gamma".

penalty: A string specifying the penalty. Possible values are "lasso" and "scad".
intercept: Logical variable indicating if the model includes an intercept. When intercept = TRUE the intercept is included as a column of ones in the non-tensor design component Z. Default is FALSE.
weights: Observation weights, an array of size n_1 \times \cdots \times n_d. For option family = "binomial" this array must contain the number of trials.
betainit: The initial parameter values for the tensor parameter. Default is NULL, in which case all parameters are initialized at zero.

alphainit: A q \times 1 vector containing the initial parameter values for the non-tensor parameter. Default is NULL, in which case all parameters are initialized at zero.

nlambda: The number of lambda values.
lambdaminratio: The smallest value for lambda, given as a fraction of the maximal (data derived) lambda value.
lambda: The sequence of penalty parameters for the regularization path.
penaltyfactor: An array of size p_1 \times \cdots \times p_d. Is multiplied with each element in lambda to allow differential penalization of the tensor coefficients.

penaltyfactoralpha: A q \times 1 vector multiplied with each element in lambda to allow differential penalization of the non-tensor coefficients.
reltolinner: The convergence tolerance for the inner loop.

reltolouter: The convergence tolerance for the outer loop.

maxiter: The maximum number of inner iterations allowed for each lambda value, summed over all outer iterations for that value.
steps: The number of steps used in the multi-step adaptive lasso algorithm for non-convex penalties. Automatically set to 1 when penalty = "lasso".

maxiterinner: The maximum number of inner iterations allowed for each outer iteration.

maxiterouter: The maximum number of outer iterations allowed for each lambda value.

btinnermax: Maximum number of backtracking steps allowed in each inner iteration. Default is 100.

btoutermax: Maximum number of backtracking steps allowed in each outer iteration. Default is 100.
iwls: A string indicating whether to use the exact IWLS weight matrix or a Kronecker structured approximation to it.
nu: A number between 0 and 1 that controls the step size δ in the proximal algorithm (inner loop) by scaling the upper bound \hat{L}_h on the Lipschitz constant L_h (see Lund et al., 2017). For nu = 1 the step size is δ = 1/\hat{L}_h and backtracking never occurs; for smaller values of nu the initial step size is larger and backtracking is employed whenever needed to ensure descent.
Details

Consider a (two component) generalized linear model (GLM)
g(μ) = Xβ + Zα =: η.
Here g is a link function, μ is an n\times 1 vector containing the mean of the response variable Y, Z is an n\times q matrix and X an n\times p matrix with tensor structure
X = X_d\otimes\cdots\otimes X_1,
where X_1,…,X_d are the marginal n_i\times p_i design matrices (tensor factors) such that p = p_1\cdots p_d and n=n_1\cdots n_d. Then β is the p\times 1 parameter associated with the tensor component X and α the q\times 1 parameter associated with the non-tensor component Z, e.g. the intercept.
Using the generalized linear array model (GLAM) framework the model equation is
g(μ) = \textrm{vec}(ρ(X_d,ρ(X_{d-1},…,ρ(X_1,B)))) + Zα ,
where ρ is the so-called rotated H-transform and B is the array version of β. See Currie et al., 2006 for more details.
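The equivalence of the two formulations can be checked numerically with the RH function used in the Examples below. The following sketch assumes RH accepts a matrix (2-dimensional array) as its second argument; the dimensions are arbitrary and chosen only for the check.

library(glamlasso)

## small 2-dimensional check: vec(rho(X2, rho(X1, B))) equals (X2 kronecker X1) vec(B)
n1 <- 7; n2 <- 5; p1 <- 3; p2 <- 2
X1 <- matrix(rnorm(n1 * p1), n1, p1)
X2 <- matrix(rnorm(n2 * p2), n2, p2)
B  <- matrix(rnorm(p1 * p2), p1, p2)

eta_glam <- RH(X2, RH(X1, B))      ## GLAM (array) form of the linear predictor
eta_glm  <- (X2 %x% X1) %*% c(B)   ## GLM (matrix) form via the full Kronecker product

max(abs(c(eta_glam) - eta_glm))    ## agrees up to numerical error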
The log-likelihood is a function of θ := (β, α) through the linear predictor η, i.e. θ \mapsto l(η(θ)). In the usual exponential family framework this can be expressed as

l(η(θ)) = ∑_{i = 1}^n a_i \frac{y_i \vartheta(η_i(θ)) - b(\vartheta(η_i(θ)))}{ψ} + c(y_i, ψ),
where \vartheta, the canonical parameter map, is linked to the linear predictor via the identity η(θ) = g(b'(\vartheta)) with b the cumulant function. Here a_i ≥ 0, i = 1,…,n are observation weights and ψ is the dispersion parameter.
For d = 3 or d = 2, using only the marginal matrices X_1,X_2,…,
the function glamlasso
solves the penalized estimation problem
\min_{θ} -l(η(θ)) + λ J (θ),
for J either the LASSO or SCAD penalty function, in the GLAM setup, for a sequence of penalty parameters λ > 0. The underlying algorithm is based on an outer gradient descent loop and an inner proximal gradient based loop. We note that if J is not convex, as with the SCAD penalty, the multi-step adaptive lasso procedure is used to loop over the inner proximal algorithm; see Lund et al., 2017 for more details.
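For instance, a SCAD fit reusing the penalty sequence from a lasso fit might be sketched as follows (X and Y as constructed in the Examples below; the specific values of nlambda and steps are only illustrative):

## lasso fit along an automatically computed penalty sequence
fit_lasso <- glamlasso(X, Y, penalty = "lasso", nlambda = 50)

## SCAD fit along the same penalty sequence, with extra multi-step
## adaptive lasso loops over the inner proximal algorithm
fit_scad <- glamlasso(X, Y, penalty = "scad",
                      lambda = fit_lasso$lambda, steps = 3)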
Note that the package is optimized towards solving the estimation problem for α = 0. For α \neq 0 the user incurs a potentially substantial computational cost. In particular, it is not advisable to include a very large non-tensor component in the model (large q), and even adding an intercept to the model (q = 1) will reduce the computational efficiency.
Value

An object with S3 Class 'glamlasso'.
spec: A string indicating the GLAM dimension (d = 2 or 3), the model family and the penalty.
beta: A p_1 \cdots p_d \times nlambda matrix containing the estimates of the tensor model coefficients (β) for each lambda value.

alpha: A q \times nlambda matrix containing the estimates of the non-tensor model coefficients (α) for each lambda value.
lambda: A vector containing the sequence of penalty values used in the estimation procedure.

df: The number of nonzero coefficients for each value of lambda.

dimcoef: A vector giving the dimension of the model coefficient array β.
dimobs: A vector giving the dimension of the observation (response) array Y.
Iter: A list with 4 items containing iteration information from the gdpg algorithm (inner and outer iteration counts for each lambda value).
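As a sketch of how these components can be used together (assuming a fitted object fit as produced in the Examples below, where the tensor coefficient matrix is accessed as fit$coef), a single column of coefficient estimates can be reshaped back into the coefficient array:

modelno <- which.max(fit$df)                     ## model with most nonzero coefficients
Bhat <- array(fit$coef[, modelno], fit$dimcoef)  ## reshape that column into the array B
fit$lambda[modelno]                              ## corresponding penalty value
fit$df[modelno]                                  ## number of nonzero coefficients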
Author(s)

Adam Lund
Maintainer: Adam Lund, adam.lund@math.ku.dk
References

Lund, A., M. Vincent, and N. R. Hansen (2017). Penalized estimation in large-scale generalized linear array models. Journal of Computational and Graphical Statistics, 26(3), 709-724. https://doi.org/10.1080/10618600.2017.1279548

Currie, I. D., M. Durban, and P. H. C. Eilers (2006). Generalized linear array models with applications to multidimensional smoothing. Journal of the Royal Statistical Society, Series B, 68, 259-280. http://dx.doi.org/10.1111/j.1467-9868.2006.00543.x
Examples

##size of example
n1 <- 65; n2 <- 26; n3 <- 13; p1 <- 12; p2 <- 6; p3 <- 4
##marginal design matrices (tensor components)
X1 <- matrix(rnorm(n1 * p1), n1, p1)
X2 <- matrix(rnorm(n2 * p2), n2, p2)
X3 <- matrix(rnorm(n3 * p3), n3, p3)
X <- list(X1, X2, X3)
##############gaussian example
Beta <- array(rnorm(p1 * p2 * p3) * rbinom(p1 * p2 * p3, 1, 0.1), c(p1 , p2, p3))
Mu <- RH(X3, RH(X2, RH(X1, Beta)))
Y <- array(rnorm(n1 * n2 * n3, Mu), c(n1, n2, n3))
system.time(fit <- glamlasso(X, Y))
modelno <- length(fit$lambda)
plot(c(Beta), type = "h", ylim = range(Beta, fit$coef[, modelno]))
points(c(Beta))
lines(fit$coef[ , modelno], col = "red", type = "h")
###with non tensor design component Z
q <- 5
alpha <- matrix(rnorm(q)) * rbinom(q, 1, 0.5)
Z <- matrix(rnorm(n1 * n2 * n3 * q), n1 * n2 *n3, q)
Y <- array(rnorm(n1 * n2 * n3, Mu + array(Z %*% alpha, c(n1, n2, n3))), c(n1, n2, n3))
system.time(fit <- glamlasso(X, Y, Z))
modelno <- length(fit$lambda)
oldmfrow <- par()$mfrow
par(mfrow = c(1, 2))
plot(c(Beta), type = "l", ylim = range(Beta, fit$coef[, modelno]))
points(c(Beta))
lines(fit$coef[ , modelno], col = "red")
plot(c(alpha), type = "h", ylim = range(alpha, fit$alpha[, modelno]))
points(c(alpha))
lines(fit$alpha[ , modelno], col = "red", type = "h")
par(mfrow = oldmfrow)
################ poisson example
Beta <- array(rnorm(p1 * p2 * p3, 0, 0.1) * rbinom(p1 * p2 * p3, 1, 0.1), c(p1 , p2, p3))
Mu <- RH(X3, RH(X2, RH(X1, Beta)))
Y <- array(rpois(n1 * n2 * n3, exp(Mu)), dim = c(n1, n2, n3))
system.time(fit <- glamlasso(X, Y, family = "poisson", nu = 0.1))
modelno <- length(fit$lambda)
plot(c(Beta), type = "h", ylim = range(Beta, fit$coef[, modelno]))
points(c(Beta))
lines(fit$coef[ , modelno], col = "red", type = "h")
###with non tensor design component Z
q <- 5
alpha <- matrix(rnorm(q)) * rbinom(q, 1, 0.5)
Z <- matrix(rnorm(n1 * n2 * n3 * q), n1 * n2 *n3, q)
Y <- array(rpois(n1 * n2 * n3, exp(Mu + array(Z %*% alpha, c(n1, n2, n3)))), dim = c(n1, n2, n3))
system.time(fit <- glamlasso(X, Y, Z, family = "poisson", nu = 0.1))
modelno <- length(fit$lambda)
oldmfrow <- par()$mfrow
par(mfrow = c(1, 2))
plot(c(Beta), type = "l", ylim = range(Beta, fit$coef[, modelno]))
points(c(Beta))
lines(fit$coef[ , modelno], col = "red")
plot(c(alpha), type = "h", ylim = range(alpha, fit$alpha[, modelno]))
points(c(alpha))
lines(fit$alpha[ , modelno], col = "red", type = "h")
par(mfrow = oldmfrow)