hqreg: Fit a robust regression model with Huber or quantile loss penalized by lasso or elastic-net


Description

Fit solution paths for Huber loss regression or quantile regression penalized by lasso or elastic-net over a grid of values for the regularization parameter lambda.

Usage

hqreg(X, y, method = c("huber", "quantile", "ls"),
    gamma = IQR(y)/10, tau = 0.5, alpha = 1, nlambda = 100, lambda.min = 0.05, lambda, 
    preprocess = c("standardize", "rescale"), screen = c("ASR", "SR", "none"), 
    max.iter = 10000, eps = 1e-7, dfmax = ncol(X)+1, penalty.factor = rep(1, ncol(X)), 
    message = FALSE)

Arguments

X

Input matrix.

y

Response vector.

method

The loss function to be used in the model. Either "huber" (default), "quantile", or "ls" for least squares (see Details).

gamma

The tuning parameter of the Huber loss, with no effect for the other loss functions. The Huber loss is quadratic for absolute values of the residual less than gamma and linear for those greater than gamma. The default value is IQR(y)/10.

tau

The tuning parameter of the quantile loss, with no effect for the other loss functions. It represents the conditional quantile of the response to be estimated, so it must be a number between 0 and 1. The loss reduces to the absolute loss when tau = 0.5 (the default).

alpha

The elastic-net mixing parameter that controls the relative contribution from the lasso and the ridge penalty. It must be a number between 0 and 1. alpha=1 is the lasso penalty and alpha=0 the ridge penalty.
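
As an illustration, here is a minimal R sketch of this penalty under the usual elastic-net convention (the helper enet_penalty is hypothetical, not part of hqreg):

# Sketch only: the usual elastic-net penalty, assumed here.
# alpha = 1 keeps only the lasso term, alpha = 0 only the ridge term.
enet_penalty <- function(beta, alpha) {
  alpha * sum(abs(beta)) + (1 - alpha) / 2 * sum(beta^2)
}
enet_penalty(c(1, -2, 0.5), alpha = 0.5)  # mixes both terms equally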

nlambda

The number of lambda values. Default is 100.

lambda.min

The smallest value for lambda, as a fraction of lambda.max, the data-derived entry value. Default is 0.05.
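
As a sketch of how such a grid is often constructed (assuming, as in similar packages, that the sequence is log-spaced; lambda.max below is a placeholder for the data-derived entry value, which hqreg computes internally):

# Sketch only: a log-spaced grid from lambda.max down to lambda.min * lambda.max.
lambda.max <- 1    # placeholder; hqreg derives this from the data
lam <- exp(seq(log(lambda.max), log(0.05 * lambda.max), length.out = 100))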

lambda

A user-specified sequence of lambda values. Typical usage is to leave this blank and let the program compute a lambda sequence automatically based on nlambda and lambda.min; supplying lambda overrides this. The argument should be used with care: supply a decreasing sequence rather than a single value. To get coefficients for a single lambda, fit the solution path with hqreg (or perform k-fold CV with cv.hqreg) and then use coef or predict, as sketched below.
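
For example, with X and y as in the Examples section:

fit <- hqreg(X, y)            # fit the full solution path
coef(fit, 0.05)               # coefficients at a single lambda
predict(fit, X[1:5, ], lambda = 0.05)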

preprocess

Preprocessing technique to be applied to the input. Either "standardize" (default) or "rescale" (see Details). The coefficients are always returned on the original scale.

screen

Screening rule to be applied at each lambda to discard variables for speed. Either "ASR" (default), "SR", or "none". "SR" stands for the strong rule and "ASR" for the adaptive strong rule. "ASR" typically requires fewer iterations to converge than "SR", but the computing times are generally close. The option "none" is intended mainly for debugging and may lead to much longer computing time.

max.iter

Maximum number of iterations. Default is 10000.

eps

Convergence threshold. The algorithm iterates until the maximum change in the objective after any coefficient update is less than eps times the null deviance. Default is 1E-7.

dfmax

Upper bound for the number of nonzero coefficients. The algorithm exits and returns a partial path if dfmax is reached. Useful for very large dimensions.

penalty.factor

A numeric vector of length equal to the number of variables. Each component multiplies lambda to allow differential penalization. It can be 0 for some variables, in which case those variables are always kept in the model without penalization, as in the sketch below. Default is 1 for all variables.
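
For instance, to keep the first two variables unpenalized while penalizing the rest (using X and y as in the Examples section):

pf <- rep(1, ncol(X))
pf[1:2] <- 0                  # zero factors: variables 1 and 2 are never penalized
fit <- hqreg(X, y, penalty.factor = pf)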

message

If set to TRUE, hqreg will inform the user of its progress. This argument is intended mainly for debugging. Default is FALSE.

Details

The sequence of models indexed by the regularization parameter lambda is fit using a semismooth Newton coordinate descent algorithm. The objective function is defined to be

(1/n) ∑ loss(r_i) + λ*penalty.

For method = "huber",

loss(t) = t^2/(2*γ) I(|t|≤ γ) + (|t| - γ/2) I(|t|>γ);

for method = "quantile",

loss(t) = t (τ - I(t<0));

for method = "ls",

loss(t) = t^2/2.

In these formulas, t stands for the residual r_i of each observation.
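
A plain R transcription of these three losses, for illustration only (hqreg evaluates them internally):

huber_loss <- function(t, gamma) {
  ifelse(abs(t) <= gamma, t^2 / (2 * gamma), abs(t) - gamma / 2)
}
quantile_loss <- function(t, tau) t * (tau - (t < 0))
ls_loss <- function(t) t^2 / 2
# The penalized objective is then mean(loss(residuals)) + lambda * penalty.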

The program supports two preprocessing techniques, applied to each column of the input matrix X. Let x be a column of X. For preprocess = "standardize", the formula is

x' = (x-mean(x))/sd(x);

for preprocess = "rescale",

x' = (x-min(x))/(max(x)-min(x)).

The models are fit with the preprocessed input, and the coefficients are then transformed back to the original scale. To fit a model on raw data with no preprocessing, use hqreg_raw.
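
The two transformations correspond directly to the following R code (a sketch; hqreg performs the preprocessing and the back-transformation internally):

standardize <- function(x) (x - mean(x)) / sd(x)
rescale <- function(x) (x - min(x)) / (max(x) - min(x))
Xs <- apply(X, 2, standardize)  # what preprocess = "standardize" computes per column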

Value

The function returns an object of S3 class "hqreg", which is a list containing:

call

The call that produced this object.

beta

The fitted matrix of coefficients. The number of rows equals the number of coefficients (an intercept is included), and the number of columns equals nlambda.

iter

A vector of length nlambda containing the number of iterations until convergence at each value of lambda.

saturated

A logical flag for whether the number of nonzero coefficients has reached dfmax.

lambda

The sequence of regularization parameter values in the path.

alpha

Same as above.

gamma

Same as above. NULL except when method = "huber".

tau

Same as above. NULL except when method = "quantile".

penalty.factor

Same as above.

method

Same as above.

nv

The variable screening rules are accompanied by checks of the optimality conditions. When violations occur, the program adds the violating variables back in and re-runs the inner loop until convergence. nv is the number of violations.

Author(s)

Congrui Yi <congrui-yi@uiowa.edu>

References

Yi, C. and Huang, J. (2016). Semismooth Newton coordinate descent algorithm for elastic-net penalized Huber loss regression and quantile regression. Journal of Computational and Graphical Statistics (accepted November 2016). Preprint: https://arxiv.org/abs/1509.02957. Publisher version: http://www.tandfonline.com/doi/full/10.1080/10618600.2016.1256816

See Also

plot.hqreg, cv.hqreg

Examples

X = matrix(rnorm(1000*100), 1000, 100)
beta = rnorm(10)
eps = 4*rnorm(1000)
y = drop(X[,1:10] %*% beta + eps)

# Huber loss
fit1 = hqreg(X, y)
coef(fit1, 0.01)
predict(fit1, X[1:5,], lambda = c(0.02, 0.01))

# Quantile loss
fit2 = hqreg(X, y, method = "quantile", tau = 0.2)
plot(fit2)

# Squared loss
fit3 = hqreg(X, y, method = "ls", preprocess = "rescale")
plot(fit3, xvar = "norm")

Example output

coef(fit1, 0.01), truncated to the first rows:

 (Intercept)           V1           V2           V3           V4           V5 
 0.165140226  0.000000000 -0.743090122  0.866127910 -1.526896643  0.374611978 
          V6           V7           V8           V9          V10          V11 
 0.934545669 -0.344384494  0.273992767 -0.848428943  0.507356257 -0.006080970 
 ...

predict(fit1, X[1:5,], lambda = c(0.02, 0.01)):

           0.02     0.0113
[1,] -1.8073413 -2.5292001
[2,] -2.2936428 -2.3138563
[3,]  1.4281342  1.6805180
[4,] -0.3484044 -0.3262062
[5,]  2.0304327  1.7797261
