Description

Fits regularization paths for group-lasso penalized learning problems at a sequence of regularization parameters lambda.
Usage

gglasso(x, y, group = NULL, loss = c("ls", "logit", "sqsvm", "hsvm"),
        nlambda = 100, lambda.factor = ifelse(nobs < nvars, 0.05, 0.001),
        lambda = NULL, pf = sqrt(bs), dfmax = as.integer(max(group)) + 1,
        pmax = min(dfmax * 1.2, as.integer(max(group))), eps = 1e-08,
        maxit = 3e+08, delta, intercept = TRUE)

Arguments

x
matrix of predictors, of dimension n*p; each row is an observation vector. 
y 
response variable. This argument should be quantitative for regression (least squares), and a two-level factor for classification (logistic model, Huberized SVM, squared SVM).
group 
a vector of consecutive integers describing the grouping of the coefficients (see example below). 
loss
a character string specifying the loss function to use. Valid options are: "ls" least squares loss (regression), "logit" logistic loss (classification), "hsvm" Huberized squared hinge loss (classification), and "sqsvm" squared hinge loss (classification). Default is "ls".
nlambda
the number of lambda values. Default is 100.
lambda.factor
the factor for getting the minimal lambda in the lambda sequence, where min(lambda) = lambda.factor * max(lambda) and max(lambda) is the smallest value of lambda for which all coefficients are zero. Default is 0.05 if nobs < nvars and 0.001 otherwise.
lambda
a user supplied lambda sequence. Typically this is left unspecified, and the program computes its own lambda sequence based on nlambda and lambda.factor; supplying a lambda sequence overrides this.
pf 
penalty factor, a vector of length bn (bn is the total number of groups). Separate penalty weights can be applied to each group of betas to allow differential shrinkage. Can be 0 for some groups, which implies no shrinkage and results in that group always being included in the model. The default value for each entry is the square root of the corresponding group size.
dfmax 
limit the maximum number of groups in the model. Useful for very large bn (the total number of groups), if a partial path is desired. Default is bn + 1.
pmax 
limit the maximum number of groups ever to be nonzero. For example, once a group enters the model, no matter how many times it exits or re-enters the model through the path, it will be counted only once. Default is min(dfmax * 1.2, bn).
eps 
convergence termination tolerance. Default value is 1e-08.
maxit 
maximum number of outer-loop iterations allowed at a fixed lambda value. Default is 3e8. If models do not converge, consider increasing maxit.
delta 
the parameter delta in the "hsvm" (Huberized squared hinge) loss. Default is 1.
intercept 
whether to include an intercept in the model. Default is TRUE.
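To illustrate how the group and pf arguments fit together, here is a minimal sketch. It assumes the bardet example data shipped with gglasso; the choice of 20 groups of 5 predictors each, and the decision to exempt the first group from shrinkage, are made up for illustration:

```r
library(gglasso)
data(bardet)  # example data shipped with gglasso

# consecutive group indices: 20 groups of 5 predictors (hypothetical grouping)
group <- rep(1:20, each = 5)

# penalty factor: a 0 entry exempts group 1 from shrinkage, so it stays in
# the model along the whole path; the remaining groups keep the default
# weight sqrt(group size)
pf <- c(0, rep(sqrt(5), 19))

fit <- gglasso(x = bardet$x, y = bardet$y, group = group,
               loss = "ls", pf = pf, nlambda = 20)
```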
Details

Note that the objective function for "ls" least squares is

RSS/(2*n) + lambda * penalty;

for "hsvm" Huberized squared hinge loss, "sqsvm" squared hinge loss and "logit" logistic regression, the objective function is

-loglik/n + lambda * penalty.

Users can also tweak the penalty by choosing a different penalty factor. For reasons of computing speed, if models are not converging or are running slowly, consider increasing eps, decreasing nlambda, or increasing lambda.factor before increasing maxit.
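As a sketch of what this objective means for loss = "ls", the value at one point on the path can be recomputed by hand from the returned coefficients. This assumes pf is left at its default (square root of the group sizes) and uses the bardet example data:

```r
library(gglasso)
data(bardet)
group <- rep(1:20, each = 5)
fit <- gglasso(x = bardet$x, y = bardet$y, group = group, loss = "ls")

k <- 10                      # pick one lambda on the path
n <- nrow(bardet$x)
beta <- fit$beta[, k]

# residual term of the objective: RSS/(2*n)
resid <- bardet$y - fit$b0[k] - bardet$x %*% beta

# group-lasso penalty: sum over groups of pf_k * ||beta_k||_2,
# with the default pf_k = sqrt(group size)
penalty <- sum(sapply(split(beta, group),
                      function(b) sqrt(length(b)) * sqrt(sum(b^2))))

objective <- sum(resid^2) / (2 * n) + fit$lambda[k] * penalty
```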
Value

An object with S3 class gglasso.
call 
the call that produced this object 
b0 
intercept sequence of length length(lambda)
beta 
a p*length(lambda) matrix of coefficients
df 
the number of nonzero groups for each value of lambda
dim 
dimension of the coefficient matrix
lambda 
the actual sequence of lambda values used
npasses 
total number of iterations (of the innermost loop) summed over all lambda values
jerr 
error flag, for warnings and errors, 0 if no error. 
group 
a vector of consecutive integers describing the grouping of the coefficients. 
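Putting the returned components together, a fitted object can be inspected roughly as follows (a sketch; it assumes the colon example data shipped with gglasso and uses the package's coef method):

```r
library(gglasso)
data(colon)
group <- rep(1:20, each = 5)
m2 <- gglasso(x = colon$x, y = colon$y, group = group, loss = "logit")

m2$lambda      # the lambda sequence actually used
m2$df          # number of nonzero groups at each lambda
dim(m2$beta)   # p x length(lambda) coefficient matrix

# coefficients at a chosen point on the path
b <- coef(m2, s = m2$lambda[10])
```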
Author(s)

Yi Yang and Hui Zou
Maintainer: Yi Yang <[email protected]>
References

Yang, Y. and Zou, H. (2015), "A Fast Unified Algorithm for Computing Group-Lasso Penalized Learning Problems," Statistics and Computing, 25(6), 1129-1141.

BugReport: https://github.com/emeryyi/gglasso
Examples

# load gglasso library
library(gglasso)

# load bardet data set
data(bardet)

# define group index
group1 <- rep(1:20, each = 5)

# fit group lasso penalized least squares
m1 <- gglasso(x = bardet$x, y = bardet$y, group = group1, loss = "ls")

# load colon data set
data(colon)

# define group index
group2 <- rep(1:20, each = 5)

# fit group lasso penalized logistic regression
m2 <- gglasso(x = colon$x, y = colon$y, group = group2, loss = "logit")
