gglasso: Fits the regularization paths for group-lasso penalized learning problems

View source: R/gglasso.R

Description

Fits regularization paths for group-lasso penalized learning problems at a sequence of regularization parameters lambda.

Usage

gglasso(
  x,
  y,
  group = NULL,
  loss = c("ls", "logit", "sqsvm", "hsvm", "wls"),
  nlambda = 100,
  lambda.factor = ifelse(nobs < nvars, 0.05, 0.001),
  lambda = NULL,
  pf = sqrt(bs),
  weight = NULL,
  dfmax = as.integer(max(group)) + 1,
  pmax = min(dfmax * 1.2, as.integer(max(group))),
  eps = 1e-08,
  maxit = 3e+08,
  delta,
  intercept = TRUE
)

Arguments

x

matrix of predictors, of dimension n \times p; each row is an observation vector.

y

response variable. This argument should be quantitative for regression (least squares), and a two-level factor for classification (logistic model, huberized SVM, squared SVM).

group

a vector of consecutive integers describing the grouping of the coefficients (see example below).

loss

a character string specifying the loss function to use; valid options are:

  • "ls" least squares loss (regression),

  • "logit" logistic loss (classification),

  • "hsvm" Huberized squared hinge loss (classification),

  • "sqsvm" squared hinge loss (classification),

  • "wls" weighted least squares loss (regression; requires the weight argument).

Default is "ls".

nlambda

the number of lambda values - default is 100.

lambda.factor

the factor used to set the minimal lambda in the lambda sequence: min(lambda) = lambda.factor * max(lambda), where max(lambda) is the smallest value of lambda for which all coefficients are zero. The default depends on the relationship between n (the number of rows in the matrix of predictors) and p (the number of predictors). If n >= p, the default is 0.001, close to zero. If n < p, the default is 0.05. A very small value of lambda.factor will lead to a saturated fit. It has no effect if a user-defined lambda sequence is supplied.

lambda

a user-supplied lambda sequence. Typically this option is left unspecified and the program computes its own lambda sequence based on nlambda and lambda.factor; supplying lambda overrides this. It is better to supply a decreasing sequence of lambda values than a single (small) value; if the supplied sequence is not decreasing, the program will sort it in decreasing order automatically.
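As a sketch, a decreasing sequence can be built on a log scale and passed directly. The call below assumes the bardet data and group index used in the Examples section; the endpoints 0.5 and 0.005 are arbitrary illustration values, not recommendations.

# illustrative only: supply a decreasing lambda sequence on a log scale
library(gglasso)
data(bardet)
group1 <- rep(1:20, each = 5)
lambda_seq <- exp(seq(log(0.5), log(0.005), length.out = 30))
m_user <- gglasso(x = bardet$x, y = bardet$y, group = group1,
                  loss = "ls", lambda = lambda_seq)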

pf

penalty factor: a vector of length bn, where bn is the total number of groups. Separate penalty weights can be applied to each group of \betas to allow differential shrinkage. Can be 0 for some groups, which implies no shrinkage and results in that group always being included in the model. The default value for each entry is the square root of the corresponding group size.

weight

an n x n observation weight matrix, where n is the number of observations. Only used if loss='wls' is specified. Note that cross-validation is NOT IMPLEMENTED for loss='wls'.

dfmax

limits the maximum number of groups in the model. Useful when the number of groups is very large and only a partial path is desired. Default is the total number of groups plus one.

pmax

limits the maximum number of groups that can ever be nonzero along the path; once a group enters the model it is counted only once, no matter how many times it exits and re-enters the model along the path. Default is min(dfmax * 1.2, total number of groups).

eps

convergence termination tolerance. Default value is 1e-8.

maxit

maximum number of outer-loop iterations allowed at any fixed lambda value. Default is 3e8. If models do not converge, consider increasing maxit.

delta

the parameter \delta in "hsvm" (Huberized squared hinge loss). Default is 1.

intercept

Whether to include an intercept in the model. Default is TRUE.

Details

Note that the objective function for "ls" least squares is

RSS/(2*n) + lambda * penalty;

for "hsvm" Huberized squared hinge loss, "sqsvm" Squared hinge loss and "logit" logistic regression, the objective function is

-loglik/n + lambda * penalty.

Users can also tweak the penalty by choosing a different penalty factor pf.
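For intuition, the group-lasso penalty above is the pf-weighted sum of the groupwise Euclidean (L2) norms of the coefficients. The helper below is a minimal sketch of that term, not a function exported by the package; it assumes beta is a coefficient vector and group is the integer grouping index.

# illustrative helper (not part of gglasso): the group-lasso penalty term,
# sum over groups j of pf[j] * ||beta_j||_2, with pf defaulting to the
# square root of each group size as in gglasso()
group_penalty <- function(beta, group, pf = sqrt(table(group))) {
  norms <- tapply(beta, group, function(b) sqrt(sum(b * b)))
  sum(pf * norms)
}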

For reasons of computing speed, if models are not converging or are running slowly, consider increasing eps, decreasing nlambda, or increasing lambda.factor before increasing maxit.
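For example, a coarser fit can be requested directly; the call below is illustrative only, using the bardet data from the Examples section and arbitrary values for the relaxed settings.

# illustrative only: trade path resolution and tolerance for speed
# before increasing maxit
library(gglasso)
data(bardet)
group1 <- rep(1:20, each = 5)
m_fast <- gglasso(x = bardet$x, y = bardet$y, group = group1, loss = "ls",
                  eps = 1e-06, nlambda = 50, lambda.factor = 0.05)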

Value

An object with S3 class gglasso.

call

the call that produced this object

b0

intercept sequence of length length(lambda)

beta

a p*length(lambda) matrix of coefficients.

df

the number of nonzero groups for each value of lambda.

dim

dimensions of the coefficient matrix.

lambda

the actual sequence of lambda values used

npasses

total number of iterations (innermost loop) summed over all lambda values.

jerr

error flag, for warnings and errors, 0 if no error.

group

a vector of consecutive integers describing the grouping of the coefficients.

Author(s)

Yi Yang and Hui Zou
Maintainer: Yi Yang <yi.yang6@mcgill.ca>

References

Yang, Y. and Zou, H. (2015), “A Fast Unified Algorithm for Computing Group-Lasso Penalized Learning Problems,” Statistics and Computing. 25(6), 1129-1141.
BugReport: https://github.com/emeryyi/gglasso

See Also

plot.gglasso

Examples


# load gglasso library
library(gglasso)

# load bardet data set
data(bardet)

# define group index
group1 <- rep(1:20,each=5)

# fit group lasso penalized least squares
m1 <- gglasso(x=bardet$x,y=bardet$y,group=group1,loss="ls")

# load colon data set
data(colon)

# define group index
group2 <- rep(1:20,each=5)

# fit group lasso penalized logistic regression
m2 <- gglasso(x=colon$x,y=colon$y,group=group2,loss="logit")
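
# the two fits below are illustrative sketches with hypothetical settings,
# building on the data and group indices defined above

# group lasso with a custom penalty factor: the first group is left
# unpenalized (pf = 0), so it is always included in the model
pf1 <- sqrt(rep(5, 20))   # default would be the square root of each group size
pf1[1] <- 0
m3 <- gglasso(x = bardet$x, y = bardet$y, group = group1, loss = "ls", pf = pf1)

# Huberized squared hinge loss with an explicit delta (default is 1)
m4 <- gglasso(x = colon$x, y = colon$y, group = group2, loss = "hsvm", delta = 1)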

