Description Usage Arguments Details Value Examples
This function computes the regression coefficients for robust penalized regression. The exponential squared loss function provides robustness, and penalization enforces sparsity of the estimators.
x: explanatory variable matrix of size n * p.
y: response variable vector of length n.
delta: censoring indicator vector of length n.
n_lambda: number of tuning parameters λ to use in the penalized regression. Default is 20.
lambda0: the user-specified largest value of the tuning parameter λ. The default is the λ for which all regression coefficients equal zero.
kappa: tuning parameter in the MCP penalty.
theta: vector of robust tuning parameters.
eps: convergence threshold. The algorithm iterates until the largest change in the coefficients is less than eps. Default is 1e-5.
max.iter: maximum number of outer-layer iterations. Default is 100.
max.cd: maximum number of inner-layer iterations for the coordinate descent algorithm. Default is 5.
penalty: penalty function, either "LASSO" or "MCP". Default is "MCP".
init: user-specified initial values for the regression coefficients. If not specified, the default is a length-p vector of zeros.
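The two penalty choices correspond to different coordinate-wise thresholding rules. The sketch below is illustrative only: the helper names are hypothetical, and the relationship between this package's kappa and the usual MCP concavity parameter gamma is an assumption (some implementations use kappa = 1/gamma).

```r
# Hypothetical helpers; not the package's internal functions.
soft_threshold <- function(z, lambda) {
  sign(z) * pmax(abs(z) - lambda, 0)
}

# MCP thresholding for a standardized predictor, written with the usual
# concavity parameter gamma > 1 (its mapping to 'kappa' is an assumption).
mcp_threshold <- function(z, lambda, gamma) {
  ifelse(abs(z) <= gamma * lambda,
         soft_threshold(z, lambda) / (1 - 1 / gamma),
         z)  # no shrinkage beyond gamma * lambda
}

soft_threshold(1.5, 1)    # 0.5
mcp_threshold(1.5, 1, 3)  # 0.75
mcp_threshold(5, 1, 3)    # 5 (unshrunk)
```

Unlike the Lasso, MCP applies no shrinkage to coefficients whose (standardized) inner product exceeds gamma * lambda, which reduces the bias on large coefficients.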
The sequence of models indexed by the regularization parameter λ is fit using a coordinate descent algorithm. The objective function is
∑_i ω_i exp(-(y_i - x_i'β)^2 / (2θ)) - λ|β|_1,
where the first term is the exponential squared loss, the second term is the Lasso penalty, and ω_i is the Kaplan-Meier weight of the i-th observation. To find the maximizer of this objective, a two-layer MM coordinate descent (MMCD) algorithm is used. The outer layer is an MM algorithm, in which the exponential term is approximated by a weighted least squares problem; its key idea is to use the minorization technique to solve the maximization problem through relatively simple approximations in an iterative manner. The inner layer is a coordinate descent algorithm whose goal is to find a good enough solution to the minorized penalized regression problem. Because the original loss function is only approximated at each outer step, running every inner layer to full convergence can be very time-consuming; the inner layer is therefore capped at a small number of iterations (max.cd).
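The two-layer scheme above can be sketched as follows. This is a minimal illustration for the Lasso penalty only, with hypothetical names; it assumes the standard tangent-line minorization exp(-u) ≥ exp(-u0)(1 - u + u0), which turns each outer step into a weighted Lasso problem solved by a few coordinate-descent passes. It is not the package's implementation.

```r
soft_threshold <- function(z, lambda) {
  sign(z) * pmax(abs(z) - lambda, 0)
}

# Sketch of the MMCD iteration for the exponential squared loss + Lasso.
# omega: Kaplan-Meier weights; theta: robust tuning parameter (scalar here).
mmcd_lasso <- function(x, y, omega, theta, lambda,
                       max_iter = 100, max_cd = 5, eps = 1e-5) {
  p <- ncol(x)
  beta <- rep(0, p)
  for (it in seq_len(max_iter)) {
    beta_old <- beta
    r <- y - drop(x %*% beta)
    # Outer MM layer: minorizing exp(-r^2/(2*theta)) at the current residuals
    # yields a weighted least-squares surrogate with observation weights v.
    v <- omega * exp(-r^2 / (2 * theta))
    # Inner layer: a few coordinate-descent passes on the surrogate
    # (1/2) * sum(v * (y - x %*% beta)^2) + theta * lambda * |beta|_1.
    for (cd in seq_len(max_cd)) {
      for (j in seq_len(p)) {
        r_j <- r + x[, j] * beta[j]              # partial residual
        z <- sum(v * x[, j] * r_j)
        beta[j] <- soft_threshold(z, theta * lambda) / sum(v * x[, j]^2)
        r <- r_j - x[, j] * beta[j]
      }
    }
    if (max(abs(beta - beta_old)) < eps) break   # outer convergence check
  }
  beta
}
```

With a large theta the weights v stay close to omega and the procedure reduces to an ordinary weighted Lasso; as theta shrinks, observations with large residuals are down-weighted, which is the source of the robustness.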
The sequence of tuning parameters λ is generated from the largest value lambda0 and the number of values n_lambda. The minimum λ equals lambda0 * 0.001 if p < n, and lambda0 * 0.05 if p >= n.
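A grid with these endpoints could be built as below. The log-equispaced interpolation between lambda0 and the minimum is an assumption (the text above only fixes the endpoints), and the function name is hypothetical.

```r
# Sketch of a tuning-parameter grid from lambda0 down to the minimum
# described above; log-spacing between the endpoints is assumed.
lambda_grid <- function(lambda0, n_lambda = 20, p, n) {
  lambda_min <- if (p < n) lambda0 * 0.001 else lambda0 * 0.05
  exp(seq(log(lambda0), log(lambda_min), length.out = n_lambda))
}

lambda_grid(1, n_lambda = 5, p = 10, n = 100)
# decreasing from 1 down to 0.001
```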
An object with S3 class "robint" containing:
betalasso: matrix of regression coefficients using the Lasso penalty. The size of the matrix is p * (…).
betamcp: matrix of regression coefficients using the MCP penalty. The size of the matrix is p * (…).
iter: vector of length (…).
lambda: the vector of λ values.
kappa: tuning parameter in the MCP penalty.
theta: the vector of robust tuning parameters.
omega: the vector of Kaplan-Meier weights, one per sample.