fit | R Documentation
Fit a linear multiple output model with p features (covariates) divided into m groups, using sparse group lasso.
fit(x, y, intercept = TRUE, weights = NULL, grouping = NULL,
groupWeights = NULL, parameterWeights = NULL, alpha = 1, lambda,
d = 100, algorithm.config = lsgl.standard.config)
x: the design matrix, a matrix of size N x p.

y: the response matrix, a matrix of size N x K.

intercept: should the model include intercept parameters.

weights: sample weights, a vector of length N (one weight per sample).

grouping: grouping of the features, a factor or vector of length p; each element specifies the group of the corresponding feature.

groupWeights: the group weights, a vector of length m (one weight per group).

parameterWeights: the parameter weights, a matrix with one weight per parameter (the same dimension as each estimated parameter matrix).

alpha: the alpha value, mixing the group lasso and lasso penalties: alpha = 0 gives the pure group lasso penalty, alpha = 1 the pure lasso penalty.

lambda: lambda.min relative to lambda.max, or the lambda sequence for the regularization path.

d: the length of the lambda sequence (ignored if lambda is given as a sequence).

algorithm.config: the algorithm configuration to be used.
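For example (a sketch only, assuming the lsgl package is installed and matrices x and y as described above), a grouping of the features can be supplied as a factor, with alpha mixing the two penalties; the group sizes and alpha value here are hypothetical:

```r
## Hypothetical grouping: p = 50 features in m = 10 blocks of 5.
g <- factor(rep(1:10, each = 5))

## Mixed penalty (alpha = 0.5); lambda = 0.01 is lambda.min
## relative to lambda.max, giving a path of d = 100 models.
fit <- lsgl::fit(x, y, grouping = g, alpha = 0.5, lambda = 0.01)
```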
This function computes a sequence of minimizers (one for each lambda given in the lambda
argument) of
\frac{1}{N}\|Y-X\beta\|_F^2 + \lambda \left( (1-\alpha) \sum_{J=1}^m \gamma_J \|\beta^{(J)}\|_2 + \alpha \sum_{i=1}^{n} \xi_i |\beta_i| \right)
where \|\cdot\|_F is the Frobenius norm.
The vector \beta^{(J)} denotes the parameters associated with the J'th group of features. The group weights are denoted by \gamma \in [0,\infty)^m and the parameter weights by \xi \in [0,\infty)^n.
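As a reading aid, the objective above can be written out directly in plain R. This is an illustrative sketch, not code from the package; it assumes beta is stored as a p x K matrix, grp is a factor of length p, gamma is a vector ordered as levels(grp), and xi is a matrix of the same dimension as beta:

```r
## Sketch: evaluate (1/N)||Y - X beta||_F^2 + lambda * penalty
## for a given parameter matrix beta (illustrative only).
sgl_objective <- function(X, Y, beta, grp, gamma, xi, lambda, alpha) {
  N <- nrow(X)
  loss <- sum((Y - X %*% beta)^2) / N     # (1/N) ||Y - X beta||_F^2

  ## group lasso term: sum over groups J of gamma_J * ||beta^(J)||_2
  group_pen <- sum(gamma * sapply(levels(grp), function(J)
    sqrt(sum(beta[grp == J, , drop = FALSE]^2))))

  ## lasso term: weighted l1 norm over all parameters
  lasso_pen <- sum(xi * abs(beta))

  loss + lambda * ((1 - alpha) * group_pen + alpha * lasso_pen)
}
```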
beta: the fitted parameters, a list of estimated parameter matrices (one for each lambda value).

loss: the values of the loss function.

objective: the values of the objective function (i.e. loss + penalty).

lambda: the lambda values used.
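A sketch of how the returned fields fit together (assuming a fitted object named fit): each index along the path ties one lambda value to one parameter matrix and the corresponding loss and objective values.

```r
## Pick a model along the path by its training objective.
## Note: this choice is naive; prefer cross-validation in practice.
i <- which.min(fit$objective)
beta_i <- fit$beta[[i]]   # parameter estimate at fit$lambda[i]
c(lambda = fit$lambda[i], loss = fit$loss[i], objective = fit$objective[i])
```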
Martin Vincent
set.seed(100) # for reproducibility (may be removed)
# Simulate from Y = XB + E,
# the dimension of Y is N x K, X is N x p, B is p x K
N <- 50 # number of samples
p <- 50 # number of features
K <- 25 # number of groups
B <- matrix(
  sample(c(rep(1, p * K * 0.1), rep(0, p * K - as.integer(p * K * 0.1)))),
  nrow = p, ncol = K)
X <- matrix(rnorm(N * p, 1, 1), nrow = N, ncol = p)
Y <- X %*% B + matrix(rnorm(N * K, 0, 1), N, K)
fit <- lsgl::fit(X, Y, alpha = 1, lambda = 0.1, intercept = FALSE)
## ||B - \beta||_F
sapply(fit$beta, function(beta) sum((B - beta)^2))
## Plot
par(mfrow = c(3,1))
image(B, main = "True B")
image(
x = as.matrix(fit$beta[[100]]),
main = paste("Lasso estimate (lambda =", round(fit$lambda[100], 2), ")")
)
image(solve(t(X) %*% X) %*% t(X) %*% Y, main = "Least squares estimate")
# The training error of the models
Err(fit, X, loss="OVE")
# This is simply the loss function: sqrt(N * fit$loss) equals ||Y - X beta||_F
sqrt(N * fit$loss)
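Since B is known in this simulation, the path can also be scanned for the estimate closest to the truth (an illustrative sketch; with real data one would use held-out error instead):

```r
## Squared Frobenius distance ||B - beta||_F^2 along the path,
## and the lambda value giving the estimate closest to the true B.
err  <- sapply(fit$beta, function(beta) sum((B - beta)^2))
best <- which.min(err)
fit$lambda[best]
```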