slrr    R Documentation
Implements a low-rank regression approach from: "On The Equivalent of Low-Rank Regressions and Linear Discriminant Analysis Based Regressions" by Cai, Ding, and Huang (2013). This framework unifies:
- Low-Rank Ridge Regression (LRRR): when penalty="ridge", adds a Frobenius norm penalty \(\|\mathbf{A}\,\mathbf{B}\|_F^2\).
- Sparse Low-Rank Regression (SLRR): when penalty="l21", uses an \(\ell_{2,1}\) norm to induce row-sparsity; an iterative reweighting approach is used to solve the non-smooth objective.
slrr(
X,
Y,
s,
lambda = 0.001,
penalty = c("ridge", "l21"),
max_iter = 50,
tol = 1e-06,
preproc = center(),
verbose = FALSE
)
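As a quick orientation, a minimal call might look like the following sketch (it assumes the package exporting slrr is attached; iris has 3 classes, so s = 2 is a typical choice of rank):

X   <- as.matrix(iris[, 1:4])   # 150 x 4 numeric feature matrix
Y   <- iris$Species             # 3-class factor of labels
fit <- slrr(X, Y, s = 2, penalty = "ridge", lambda = 0.001)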
X: A numeric matrix (\(n \times d\)). Rows are samples, columns are features.

Y: A factor or numeric vector of length \(n\) giving class labels. If numeric, it is converted to a factor.

s: The rank (subspace dimension) of the low-rank coefficient matrix \(\mathbf{W}\). Typically must be \(\le\) the number of classes minus 1.

lambda: A numeric penalty parameter (default 0.001).

penalty: Either "ridge" or "l21".

max_iter: Maximum number of iterations for the "l21" iterative reweighting loop.

tol: Convergence tolerance for the iterative reweighting loop if penalty="l21".

preproc: A preprocessing function/object from multivarious; default center().

verbose: Logical; if TRUE, prints progress information.
In both cases, the model is equivalent to performing LDA-like dimensionality reduction (finding a subspace \(\mathbf{A}\) of rank s) and then doing a regularized regression (\(\mathbf{B}\)) in that subspace. The final regression matrix is \(\mathbf{W} = \mathbf{A}\,\mathbf{B}\), which has rank at most s.
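The rank bound itself is easy to check numerically with arbitrary matrices of the stated shapes (illustrative dimensions only, not slrr output):

A <- matrix(rnorm(10 * 2), 10, 2)   # d = 10 features, s = 2
B <- matrix(rnorm(2 * 3), 2, 3)     # s = 2, c = 3 classes
qr(A %*% B)$rank                    # 2, i.e. rank(W) <= s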
1) Build Soft-Label Matrix: We convert Y to a factor, then create an indicator matrix \(\mathbf{G}\) with nrow = n, ncol = c, normalizing each column to sum to 1 (akin to the "normalized training indicator" in the paper).
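In base R, this construction could look like the following sketch (variable names are illustrative, not slrr internals):

Y <- iris$Species                  # example labels, c = 3 classes
G <- model.matrix(~ Y - 1)         # n x c 0/1 indicator matrix
G <- sweep(G, 2, colSums(G), "/")  # normalize each column to sum to 1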
2) LDA-Like Subspace: We compute the total scatter \(\mathbf{S}_t\) and the between-class scatter \(\mathbf{S}_b\), then solve \(\mathbf{M} = \mathbf{S}_t^{-1} \mathbf{S}_b\) for its top s eigenvectors \(\mathbf{A}\). This yields the rank-s subspace.
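A base-R sketch of this step (standard scatter definitions are assumed; the package may regularize \(\mathbf{S}_t\) or scale differently):

X  <- as.matrix(iris[, 1:4]); Y <- iris$Species; s <- 2
mu <- colMeans(X)
Xc <- sweep(X, 2, mu)                      # centered data
St <- crossprod(Xc)                        # total scatter S_t (d x d)
Sb <- matrix(0, ncol(X), ncol(X))          # between-class scatter S_b
for (k in levels(Y)) {
  nk <- sum(Y == k)
  dk <- colMeans(X[Y == k, , drop = FALSE]) - mu
  Sb <- Sb + nk * tcrossprod(dk)
}
ev  <- eigen(solve(St, Sb))                # eigen-decompose S_t^{-1} S_b
ord <- order(Re(ev$values), decreasing = TRUE)
A   <- Re(ev$vectors[, ord[1:s]])          # top-s eigenvectors -> subspace A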
3) Regression in Subspace: Let \(\mathbf{X}^\top \mathbf{X} + \lambda \mathbf{D}\) be the (regularized) covariance term to invert, where:
- If penalty="ridge", \(\mathbf{D} = \mathbf{I}\).
- If penalty="l21", we iterate a reweighted diagonal \(\mathbf{D}\) to encourage row-sparsity (cf. the paper's Eqs. (23)-(30)).
Then we solve for \(\mathbf{B}\) in the subspace using this regularized term and set \(\mathbf{W} = \mathbf{A}\,\mathbf{B}\); a consistent closed form is sketched below.
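For the ridge case, one closed form consistent with the setup above is \(\mathbf{B} = \bigl(\mathbf{A}^\top (\mathbf{X}^\top \mathbf{X} + \lambda \mathbf{D}) \mathbf{A}\bigr)^{-1} \mathbf{A}^\top \mathbf{X}^\top \mathbf{G}\); this is an assumption about the exact normalization, not necessarily the package's internal code. A sketch reusing X, G, A, and s from the snippets above:

lambda <- 0.001
D <- diag(ncol(X))                           # ridge case: D = I
B <- solve(t(A) %*% (crossprod(X) + lambda * D) %*% A,
           t(A) %*% crossprod(X, G))         # s x c coefficients in the subspace
W <- A %*% B                                 # final d x c matrix, rank at most s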