Fast generic solver for sparse group lasso optimization problems. The loss (objective) function must be defined in a C++ module. The optimization problem is solved using a coordinate gradient descent algorithm. Convergence of the algorithm is established (see reference), and the algorithm is applicable to a broad class of loss functions. Parallel computing for cross validation and subsampling is supported through the 'foreach' and 'doParallel' packages. The development version is on GitHub; please report package issues there.
Computes a sequence of minimizers (one for each lambda given in the lambda
argument) of

\mathrm{loss}(\beta) + \lambda \left( (1-\alpha) \sum_{J=1}^m \gamma_J \|\beta^{(J)}\|_2 + \alpha \sum_{i=1}^{n} \xi_i |\beta_i| \right)

where \mathrm{loss} is the loss/objective function specified by module_name.
The parameters are organized in the parameter matrix \beta with dimension q \times p.
The vector \beta^{(J)} denotes the J-th parameter group.
The group weights are \gamma \in [0,\infty)^m, and the parameter weights are \xi = (\xi^{(1)}, \dots, \xi^{(m)}) \in [0,\infty)^n
with \xi^{(1)} \in [0,\infty)^{n_1}, \dots, \xi^{(m)} \in [0,\infty)^{n_m}.
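As a concrete illustration of the penalty term above, the following is a minimal Python sketch (not part of the package, which implements this in C++ and R) that evaluates the sparse group lasso penalty for a coefficient vector, a group labeling, and the weights gamma and xi:

```python
import numpy as np

def sgl_penalty(beta, groups, alpha, lam, group_weights=None, param_weights=None):
    """Evaluate lam * ((1-alpha) * sum_J gamma_J ||beta^(J)||_2
                      + alpha * sum_i xi_i |beta_i|).

    beta          : flat coefficient vector
    groups        : group label for each coefficient
    group_weights : gamma_J, one per group (defaults to 1)
    param_weights : xi_i, one per coefficient (defaults to 1)
    """
    beta = np.asarray(beta, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    if group_weights is None:
        group_weights = np.ones(len(labels))
    if param_weights is None:
        param_weights = np.ones(beta.size)
    # Group (L2) part: weighted Euclidean norm of each parameter group.
    group_term = sum(w * np.linalg.norm(beta[groups == g])
                     for w, g in zip(group_weights, labels))
    # Individual (L1) part: weighted absolute values of all parameters.
    l1_term = np.sum(param_weights * np.abs(beta))
    return lam * ((1.0 - alpha) * group_term + alpha * l1_term)
```

With alpha = 1 the penalty reduces to a weighted lasso; with alpha = 0 it reduces to a weighted group lasso.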
The package includes generic functions for:
Fitting models using sparse group lasso, that is, computing the minimizers of the objective above.
Cross validation using parallel computing.
Generic subsampling using parallel computing.
Applying the fitted models on new data and predicting responses.
Computing lambda sequences.
Navigating the models and computing error rates.
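On computing lambda sequences: such solvers are typically run over a decreasing, log-linearly spaced sequence of lambda values, from a data-dependent lambda.max (the smallest lambda for which all penalized parameters are zero) down to a fraction of it. The following Python sketch shows this common convention; the exact sequence computed by the package's lambda function may differ:

```python
import numpy as np

def lambda_sequence(lambda_max, lambda_min_ratio=1e-4, d=100):
    """Decreasing, log-linearly spaced lambda sequence of length d,
    from lambda_max down to lambda_min_ratio * lambda_max.

    lambda_max is assumed to be supplied (in practice it is computed
    from the data and the loss module)."""
    log_seq = np.linspace(np.log(lambda_max),
                          np.log(lambda_max * lambda_min_ratio), d)
    return np.exp(log_seq)
```

Fitting along a decreasing sequence allows warm starts: the minimizer for one lambda initializes the solver at the next, smaller lambda.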
Martin Vincent