sglOptim: Generic Sparse Group Lasso Solver



Description

Fast generic solver for sparse group lasso optimization problems. The loss (objective) function must be defined in a C++ module. The optimization problem is solved using a coordinate gradient descent algorithm; convergence of the algorithm is established (see reference), and the algorithm is applicable to a broad class of loss functions. Parallel computing for cross validation and subsampling is supported through the 'foreach' and 'doParallel' packages. The development version is on GitHub; please report package issues there.


Details

Computes a sequence of minimizers (one for each lambda value given in the lambda argument) of

\mathrm{loss}(\beta) + \lambda \left( (1-\alpha) \sum_{J=1}^m \gamma_J \|\beta^{(J)}\|_2 + \alpha \sum_{i=1}^{n} \xi_i |\beta_i| \right)

where \mathrm{loss} is the loss/objective function specified by module_name. The parameters are organized in the parameter matrix \beta of dimension q \times p. The vector \beta^{(J)} denotes the J-th parameter group. The group weights are \gamma \in [0,\infty)^m, and the parameter weights are \xi = (\xi^{(1)}, \ldots, \xi^{(m)}) \in [0,\infty)^n with \xi^{(1)} \in [0,\infty)^{n_1}, \ldots, \xi^{(m)} \in [0,\infty)^{n_m}.

The package includes generic functions for model fitting, cross validation, and subsampling.


Author(s)

Martin Vincent

sglOptim documentation built on Oct. 21, 2018, 9:04 a.m.