
View source: R/ernet.R

ernet    R Documentation

Regularization paths for the sparse asymmetric least squares (SALES) regression (or the sparse expectile regression)

Description

Fits regularization paths for the Lasso or elastic net penalized asymmetric least squares regression at a sequence of regularization parameters.

Usage

ernet(
  x,
  y,
  nlambda = 100L,
  method = "er",
  lambda.factor = ifelse(nobs < nvars, 0.01, 1e-04),
  lambda = NULL,
  lambda2 = 0,
  pf = rep(1, nvars),
  pf2 = rep(1, nvars),
  exclude,
  dfmax = nvars + 1,
  pmax = min(dfmax * 1.2, nvars),
  standardize = TRUE,
  intercept = TRUE,
  eps = 1e-08,
  maxit = 1000000L,
  tau = 0.5
)

Arguments

x

matrix of predictors, of dimension (nobs * nvars); each row is an observation.

y

response variable.

nlambda

the number of lambda values (default is 100).

method

a character string specifying the loss function to use. Only "er" is available at present.

lambda.factor

The factor used to obtain the minimal lambda in the lambda sequence: min(lambda) = lambda.factor * max(lambda), where max(lambda) is the smallest value of lambda that penalizes all coefficients to zero. The default depends on the relationship between N (the number of rows in the matrix of predictors) and p (the number of predictors). If N < p, the default is 0.01; if N >= p, the default is 0.0001, closer to zero. A very small value of lambda.factor leads to a saturated fit. This argument has no effect if a lambda sequence is supplied by the user.

lambda

a user-supplied lambda sequence. Typically this argument is left unspecified and the program computes its own sequence based on nlambda and lambda.factor. If supplied, a decreasing sequence of lambda values is preferable to a single (small) value; the program sorts any user-supplied sequence in decreasing order before fitting the model (see the sketch following the argument list).

lambda2

regularization parameter lambda2 for the quadratic (L2) penalty on the coefficients.

pf

L1 penalty factor of length p, used for the adaptive lasso or adaptive elastic net. Separate L1 penalty weights can be applied to each coefficient to allow different amounts of L1 shrinkage. A weight of 0 for a variable imposes no shrinkage, so that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude).

pf2

L2 penalty factor of length p, used for the adaptive elastic net. Separate L2 penalty weights can be applied to each coefficient to allow different amounts of L2 shrinkage. A weight of 0 imposes no L2 shrinkage on that variable. Default is 1 for all variables.

exclude

indices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor.

dfmax

the maximum number of variables allowed in the model. Useful for very large p when a partial path is desired. Default is p+1.

pmax

the maximum number of coefficients ever allowed to be nonzero along the path. For example, once a coefficient enters the model it is counted only once, no matter how many times it exits and re-enters the model along the path. Default is min(dfmax*1.2, p).

standardize

logical flag for variable standardization, prior to fitting the model sequence. The coefficients are always returned to the original scale. Default is TRUE.

intercept

Should intercept(s) be fitted (default is TRUE) or set to zero (FALSE)?

eps

convergence threshold for coordinate descent. Each inner coordinate descent loop continues until the maximum change in any coefficient is less than eps. Default value is 1e-8.

maxit

maximum number of outer-loop iterations allowed at each fixed lambda value. Default is 1e+06 (i.e. maxit = 1000000L, as in the usage above). If the algorithm does not converge, consider increasing maxit.

tau

the parameter τ in the ALS regression model. The value must be in (0,1). Default is 0.5.
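
As an illustration of how some of these arguments interact, here is a short sketch (illustrative only; x and y are assumed to be defined as in the Examples section, and the specific values are arbitrary):

## Sketch: a user-supplied decreasing lambda sequence, L1 penalty factors that
## force the first two variables into the model, and excluded variables
lam <- 10^seq(0, -4, length.out = 50)   # decreasing lambda sequence
pf0 <- rep(1, ncol(x)); pf0[1:2] <- 0   # no L1 shrinkage on variables 1 and 2
fit <- ernet(x, y, lambda = lam, pf = pf0, exclude = c(5, 10), tau = 0.75)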

Details

Note that the objective function in ernet is

1'Ψτ(y − Xβ)/N + λ1*||β||_1 + 0.5*λ2*||β||_2^2,

where Ψτ denotes the asymmetric squared error loss, Ψτ(u) = |τ − I(u < 0)|·u², applied elementwise to the residuals, and the penalty is a combination of weighted L1 and L2 terms.
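
For concreteness, here is a minimal R sketch of this objective (illustrative only, not code from the package; the function names are assumptions):

## Sketch of the penalized objective above (illustrative, not package code)
asym_sq_loss <- function(u, tau) abs(tau - (u < 0)) * u^2   # Ψτ(u)
ernet_objective <- function(beta, x, y, tau, lambda1, lambda2,
                            pf = rep(1, ncol(x)), pf2 = rep(1, ncol(x))) {
  r <- y - drop(x %*% beta)
  mean(asym_sq_loss(r, tau)) +          # 1'Ψτ(y - Xβ)/N
    lambda1 * sum(pf * abs(beta)) +     # weighted L1 penalty
    0.5 * lambda2 * sum(pf2 * beta^2)   # weighted L2 penalty
}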

For faster computation, if the algorithm is not converging or is running slowly, consider increasing eps, decreasing nlambda, or increasing lambda.factor before increasing maxit.

Value

An object with S3 class ernet, with the following components:

call

the call that produced this object

b0

intercept sequence of length length(lambda)

beta

a p*length(lambda) matrix of coefficients, stored as a sparse matrix (class dgCMatrix, the standard class for sparse numeric matrices in the Matrix package). Use as.matrix() to convert it to an ordinary dense matrix (see the sketch after the component list).

lambda

the actual sequence of lambda values used

df

the number of nonzero coefficients for each value of lambda.

dim

dimension of coefficient matrix

npasses

total number of iterations summed over all lambda values

jerr

error flag, for warnings and errors, 0 if no error.
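
A brief sketch of how the returned components are typically inspected (assuming a fitted object m1 as in the Examples section):

## Illustrative inspection of the returned components
dim(m1$beta)                  # p x length(m1$lambda), a sparse dgCMatrix
B <- as.matrix(m1$beta)       # convert to an ordinary dense matrix
cbind(lambda = m1$lambda, df = m1$df)   # sparsity along the path
m1$npasses                    # total number of iterations over all lambdas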

Author(s)

Yuwen Gu and Hui Zou

Maintainer: Yuwen Gu <yuwen.gu@uconn.edu>

References

Gu, Y., and Zou, H. (2016). "High-dimensional generalizations of asymmetric least squares regression and their applications." The Annals of Statistics, 44(6), 2661–2694.

See Also

plot.ernet, coef.ernet, predict.ernet, print.ernet

Examples


## Simulate a high-dimensional design (p > n) and a response
set.seed(1)
n <- 100
p <- 400
x <- matrix(rnorm(n * p), n, p)
y <- rnorm(n)

## Expectile level and adaptive penalty weights
tau <- 0.90
pf <- abs(rnorm(p))
pf2 <- abs(rnorm(p))
lambda2 <- 1

## Fit the adaptive elastic-net penalized expectile regression path
m1 <- ernet(y = y, x = x, tau = tau, eps = 1e-8, pf = pf,
            pf2 = pf2, standardize = FALSE, intercept = FALSE,
            lambda2 = lambda2)
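
As a hedged follow-up (not part of the original example), the S3 methods listed in See Also can be applied to the fitted object:

## Follow-up sketch using the S3 methods from See Also
plot(m1)         # coefficient paths against lambda
cf <- coef(m1)   # coefficients for every lambda in the path
## predict.ernet can produce fitted values for new data; its argument names
## are not shown here and should be checked in its own help page.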

