L0Learn-package: A package for L0-regularized learning


A package for L0-regularized learning

Description

L0Learn fits regularization paths for L0-regularized regression and classification problems. Specifically, it can solve any one of the following three problems over a grid of λ and γ values:

\min_{\beta_0, \beta} \sum_{i=1}^{n} \ell(y_i, \beta_0 + \langle x_i, \beta \rangle) + \lambda \|\beta\|_0 \quad \quad (L0)

\min_{\beta_0, \beta} \sum_{i=1}^{n} \ell(y_i, \beta_0 + \langle x_i, \beta \rangle) + \lambda \|\beta\|_0 + \gamma \|\beta\|_1 \quad (L0L1)

\min_{\beta_0, \beta} \sum_{i=1}^{n} \ell(y_i, \beta_0 + \langle x_i, \beta \rangle) + \lambda \|\beta\|_0 + \gamma \|\beta\|_2^2 \quad (L0L2)

where \ell is the loss function. We currently support regression using squared error loss and classification using either logistic loss or squared hinge loss. Pathwise optimization can be done using either cyclic coordinate descent (CD) or local combinatorial search. The core of the toolkit is implemented in C++ and employs many computational tricks and heuristics, leading to competitive running times. CD runs very fast and typically leads to relatively good solutions. Local combinatorial search can find higher-quality solutions (at the expense of increased running times). The toolkit has the following six main methods:

  • L0Learn.fit: Fits an L0-regularized model.

  • L0Learn.cvfit: Performs k-fold cross-validation.

  • print: Prints a summary of the path.

  • coef: Extracts solution(s) from the path.

  • predict: Predicts response using a solution in the path.

  • plot: Plots the regularization path or cross-validation error.
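
A minimal usage sketch on synthetic data shows how these methods fit together. The specific parameter values (nGamma = 5, maxSuppSize = 50, nFolds = 5) and the lambda/gamma indices chosen below are illustrative assumptions, not recommended defaults; see each function's help page for the full argument list.

library(L0Learn)

set.seed(1)
# Synthetic regression data: n observations, p features, k-sparse true coefficients
n <- 500; p <- 1000; k <- 10
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
beta_true <- c(rep(1, k), rep(0, p - k))
y <- as.vector(X %*% beta_true + rnorm(n))

# Fit an L0L2-regularized regression path over a grid of lambda and gamma values
fit <- L0Learn.fit(X, y, penalty = "L0L2", nGamma = 5, maxSuppSize = 50)
print(fit)                        # summary of the path (support size per lambda/gamma)
plot(fit, gamma = fit$gamma[1])   # regularization path for the first gamma value

# Extract one solution from the path and predict with it
# (the lambda/gamma indices here are arbitrary illustrative choices)
lam <- fit$lambda[[1]][10]
gam <- fit$gamma[1]
coefs <- coef(fit, lambda = lam, gamma = gam)
yhat  <- predict(fit, newx = X, lambda = lam, gamma = gam)

# 5-fold cross-validation over the same kind of grid
cvfit <- L0Learn.cvfit(X, y, penalty = "L0L2", nGamma = 5, nFolds = 5, maxSuppSize = 50)
plot(cvfit, gamma = cvfit$fit$gamma[1])   # cross-validation error along the path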

References

Hazimeh and Mazumder. Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms. Operations Research (2020). https://pubsonline.informs.org/doi/10.1287/opre.2019.1919.

