cpernet: Regularization paths for the coupled sparse asymmetric least...


View source: R/cpernet.R

Description

Fits regularization paths for coupled sparse asymmetric least squares regression at a sequence of regularization parameters.

Usage

cpernet(x, y, w = 1.0, nlambda = 100L, method = "cper", 
        lambda.factor = ifelse(2 * nobs < nvars, 1e-02, 1e-04), 
        lambda = NULL, lambda2 = 0, pf.mean = rep(1, nvars), 
        pf2.mean = rep(1, nvars), pf.scale = rep(1, nvars),
        pf2.scale = rep(1, nvars), exclude, dfmax = nvars + 1, 
        pmax = min(dfmax * 1.2, nvars), standardize = TRUE, 
        intercept = TRUE, eps = 1e-08, maxit = 1000000L, 
        tau = 0.80)

Arguments

x

matrix of predictors, of dimension (nobs * nvars); each row is an observation.

y

response variable.

w

weight applied to the asymmetric squared error loss of the mean part. See details. Default is 1.0.

nlambda

the number of lambda values (default is 100).

method

a character string specifying the loss function to use. Only "cper" is currently available.

lambda.factor

The factor for computing the minimal lambda in the lambda sequence, where min(lambda) = lambda.factor * max(lambda), with max(lambda) being the smallest value of lambda for which all coefficients are zero. The default depends on the relationship between N (the number of observations) and p (the number of predictors): if 2N < p, the default is 0.01; otherwise it is 0.0001, closer to zero. A very small value of lambda.factor will lead to a saturated fit. This argument has no effect if a lambda sequence is supplied by the user.
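For illustration, a log-spaced sequence implied by these defaults can be sketched in base R. This is an assumption about how such sequences are typically constructed (as in glmnet-style solvers), not cpernet's exact internals, and lambda_max below is a hypothetical placeholder value:

```r
## Sketch of a log-spaced lambda sequence (an assumption about the usual
## construction, not cpernet's exact internals). lambda_max is a
## hypothetical placeholder; in practice it is the smallest lambda for
## which all coefficients are zero.
lambda_max <- 2.5
lambda_factor <- 1e-2  # default when 2 * nobs < nvars
nlambda <- 100
lambda_seq <- exp(seq(log(lambda_max),
                      log(lambda_factor * lambda_max),
                      length.out = nlambda))
```

The resulting sequence decreases from lambda_max down to lambda.factor * lambda_max.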

lambda

a user-supplied lambda sequence. Typically, by leaving this option unspecified, users let the program compute its own lambda sequence based on nlambda and lambda.factor. If necessary, it is better to supply a decreasing sequence of lambda values than a single (small) value. The program will sort any user-supplied lambda sequence in decreasing order.

lambda2

regularization parameter lambda2 for the quadratic penalty of the coefficients. Default is 0, meaning no L2 penalization.

pf.mean, pf.scale

L1 penalty factor of length p used for the adaptive LASSO or adaptive elastic net. Separate L1 penalty weights can be applied to each mean or scale coefficient to allow different amounts of L1 shrinkage. Can be 0 for some variables, which imposes no shrinkage, so those variables are always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude).
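For example, setting some entries of pf.mean to zero forces those variables into the mean model. The vector construction below is plain base R; the commented-out cpernet() call only indicates where the weights would be passed and assumes x and y already exist with SALES loaded:

```r
p <- 10
pf <- rep(1, p)   # default: equal L1 weight on every variable
pf[c(1, 2)] <- 0  # no shrinkage: variables 1 and 2 always stay in
## Adaptive-lasso-style weights could instead come from a pilot fit, e.g.
## pf <- 1 / abs(beta_pilot)  # beta_pilot: hypothetical initial estimates
## fit <- cpernet(x, y, pf.mean = pf, pf.scale = pf)  # requires SALES
```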

pf2.mean, pf2.scale

L2 penalty factor of length p used for adaptive elastic net. Separate L2 penalty weights can be applied to each mean or scale coefficient to allow different L2 shrinkage. Can be 0 for some variables, which imposes no shrinkage. Default is 1 for all variables.

exclude

indices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor.

dfmax

limit the maximum number of variables in the model. Useful for very large p, if a partial path is desired. Default is p+1.

pmax

limit the maximum number of variables ever to be nonzero. For example, once a variable enters the model, no matter how many times it exits or re-enters along the path, it is counted only once. Default is min(dfmax*1.2, p).

standardize

logical flag for variable standardization, prior to fitting the model sequence. The coefficients are always returned to the original scale. Default is TRUE.

intercept

Should intercept(s) be fitted (default is TRUE) or set to zero (FALSE)?

eps

convergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in any coefficient is less than eps. Default value is 1e-8.

maxit

maximum number of outer-loop iterations allowed at fixed lambda values. Default is 1e6. If the algorithm does not converge, consider increasing maxit.

tau

the parameter tau in the coupled ALS regression model. The value must be in (0,1) and cannot be 0.5. Default is 0.8.

Details

Note that the objective function in cpernet is

w*Ψ(y-Xβ, 0.5)/N + Ψ(y-Xβ-Xθ, τ)/N + λ1*||β||_1 + 0.5*λ2*||β||_2^2 + μ1*||θ||_1 + 0.5*μ2*||θ||_2^2,

where Ψ(u,τ)=|τ-I(u<0)|*u^2 denotes the asymmetric squared error loss and the penalty is a combination of L1 and L2 terms for both the mean and scale coefficients.
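The asymmetric loss is simple to write down in base R; this helper is only an illustration of the formula above, not part of the SALES API:

```r
## Asymmetric squared error loss: Psi(u, tau) = |tau - I(u < 0)| * u^2
psi <- function(u, tau) abs(tau - (u < 0)) * u^2

psi(2, 0.8)   # positive residual weighted by tau
psi(-2, 0.8)  # negative residual weighted by 1 - tau
psi(2, 0.5)   # tau = 0.5 gives half the ordinary squared error
```

With tau > 0.5, positive residuals are penalized more heavily than negative ones, which is what lets the second (scale) part of the model capture spread rather than location.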

If the algorithm is not converging or is running slowly, consider increasing eps, decreasing nlambda, or increasing lambda.factor before increasing maxit.

Value

An object with S3 class cpernet.

call

the call that produced this object.

b0, t0

intercept sequences, each of length length(lambda), for the mean and scale parts respectively.

beta, theta

p*length(lambda) matrices of coefficients for the mean and scale respectively, stored as sparse matrices (dgCMatrix class, the standard class for sparse numeric matrices in the Matrix package). To convert them into normal R matrices, use as.matrix().
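A minimal standalone illustration of that conversion, using a toy sparse matrix rather than an actual fit:

```r
library(Matrix)

## Toy 3 x 2 coefficient matrix, mostly zeros, stored sparsely
m <- Matrix(c(0, 1.5, 0, 0, 0, -2), nrow = 3, sparse = TRUE)
class(m)               # "dgCMatrix"
dense <- as.matrix(m)  # back to an ordinary dense R matrix
```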

lambda

the actual sequence of lambda values used.

df.beta, df.theta

the number of nonzero mean and scale coefficients respectively for each value of lambda.

dim

dimensions of coefficient matrices.

npasses

total number of iterations summed over all lambda values.

jerr

error flag for warnings and errors; 0 if no error.

Author(s)

Yuwen Gu and Hui Zou
Maintainer: Yuwen Gu <guxxx192@umn.edu>

References

Gu, Y. and Zou, H. (Preprint), "High-dimensional Generalizations of Asymmetric Least Squares Regression and Their Applications". Annals of Statistics.

See Also

plot.cpernet

Examples

set.seed(1)
n <- 100
p <- 400
x <- matrix(rnorm(n*p), n, p)
y <- rnorm(n)
tau <- 0.30
pf <- abs(rnorm(p))
pf2 <- abs(rnorm(p))
w <- 2.0
lambda2 <- 1
m2 <- cpernet(y = y, x = x, w = w, tau = tau, eps = 1e-8, 
              pf.mean = pf, pf.scale = pf2,
              standardize = FALSE, lambda2 = lambda2)

Example output

Loading required package: Matrix

SALES documentation built on May 2, 2019, 5:08 a.m.
