
Fit linear, logistic and Cox models regularized with an L0, lasso (L1), elastic-net (L1 and L2), or net (L1 and Laplacian) penalty, and their adaptive forms, such as adaptive lasso / elastic-net and a net penalty adjusting for signs of linked coefficients.

The function solves the L0 penalty problem by simultaneously selecting regularization parameters and performing hard-thresholding (or selecting the number of non-zero coefficients). This augmented and penalized minimization method provides an approximate solution to the L0 penalty problem and runs as fast as L1 regularization.

The function uses a one-step coordinate descent algorithm and runs extremely fast by exploiting the sparsity structure of the coefficients. It can handle very high-dimensional data.

```r
APML0(x, y, family=c("gaussian", "binomial", "cox"), penalty=c("Lasso","Enet", "Net"),
      Omega=NULL, alpha=1.0, lambda=NULL, nlambda=50, rlambda=NULL, wbeta=rep(1,ncol(x)),
      sgn=rep(1,ncol(x)), nfolds=1, foldid=NULL, ill=TRUE, iL0=TRUE, icutB=FALSE, ncutB=10,
      ifast=TRUE, isd=FALSE, iysd=FALSE, ifastr=TRUE, keep.beta=FALSE,
      thresh=1e-6, maxit=1e+5, threshC=1e-5, maxitC=1e+2, threshP=1e-5)
```

- `x`: input matrix. Each row is an observation vector.

- `y`: response variable. For `family="gaussian"`, `y` is a continuous vector. For `family="binomial"`, `y` is a binary vector. For `family="cox"`, `y` is a two-column matrix with columns named `time` and `status`, where `status` is a binary event indicator (see the Examples section).

- `family`: type of outcome. Can be "gaussian" (default), "binomial" or "cox".

- `penalty`: penalty type. Can choose "Lasso" (L1), "Enet" (L1 and L2) or "Net" (L1 and Laplacian), where "Net" requires the adjacency matrix `Omega`.

- `Omega`: adjacency matrix with zero diagonal and non-negative off-diagonal entries, used for `penalty="Net"`.

- `alpha`: ratio between the L1 part and the Laplacian (or L2) part of the penalty, for `penalty="Net"` or `penalty="Enet"`. Default is `alpha=1.0`.

- `lambda`: a user-supplied decreasing sequence. If `lambda=NULL`, the sequence is generated from `nlambda` and `rlambda`.

- `nlambda`: number of `lambda` values. Default is `nlambda=50`.

- `rlambda`: fraction of the largest `lambda` value used to determine the smallest value of the generated sequence. If `rlambda=NULL`, a default fraction is chosen internally.

- `wbeta`: penalty weights used with the L1 penalty (adaptive L1), one per column of `x`. Default is `rep(1, ncol(x))`; setting an element to zero imposes no penalty on that variable.

- `sgn`: sign adjustment used with the Laplacian penalty (adaptive Laplacian), one per column of `x`. Default is `rep(1, ncol(x))`.

- `nfolds`: number of folds for cross-validation. Default is `nfolds=1`; cross-validation is performed only when `nfolds > 1` or `foldid` is supplied.

- `foldid`: an optional vector of values between 1 and `nfolds` identifying the fold to which each observation belongs.

- `ill`: logical flag for using a likelihood-based cross-validation criterion. Default is `ill=TRUE`.

- `iL0`: logical flag for simultaneously performing L0-norm selection, via hard-thresholding or via selecting the number of non-zero coefficients. Default is `iL0=TRUE`.

- `icutB`: logical flag for performing hard-thresholding of the coefficients rather than selecting the number of non-zero coefficients. Default is `icutB=FALSE`.

- `ncutB`: number of cut points searched when `icutB=TRUE`. Default is `ncutB=10`.

- `ifast`: logical flag for an efficient search for the best cutoff or number of non-zero coefficients. Default is `ifast=TRUE`.

- `isd`: logical flag for outputting standardized coefficients. `x` is always standardized prior to fitting; for `isd=FALSE` (default) the estimates are returned on the original scale.

- `iysd`: logical flag for standardizing `y` prior to the fit, for `family="gaussian"`. Default is `iysd=FALSE`.

- `ifastr`: logical flag for efficient calculation of risk-set updates for `family="cox"`. Default is `ifastr=TRUE`.

- `keep.beta`: logical flag for returning the estimates for all `lambda` values rather than only at the tuned value. Default is `keep.beta=FALSE`.

- `thresh`: convergence threshold for coordinate descent. Default is `thresh=1e-6`.

- `maxit`: maximum number of iterations for coordinate descent. Default is `maxit=1e+5`.

- `threshC`: convergence threshold for hard-thresholding, for `icutB=TRUE`. Default is `threshC=1e-5`.

- `maxitC`: maximum number of iterations for hard-thresholding, for `icutB=TRUE`. Default is `maxitC=1e+2`.

- `threshP`: cutoff applied to fitted probabilities in `family="binomial"` to keep them away from 0 and 1. Default is `threshP=1e-5`.
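Since `penalty="Net"` requires an adjacency matrix, the hedged sketch below builds a simple chain graph over the predictors. The data simulation mirrors the Examples section; the graph itself is an illustration for this sketch, not part of the package.

```r
## Hypothetical "Net" penalty call; requires the APML0 package
library(APML0)
set.seed(1213)
N <- 100; p <- 30
x <- matrix(rnorm(N * p), N, p)
y <- rnorm(N, x[, 1:5] %*% rnorm(5))

## Chain-graph adjacency matrix: zero diagonal, non-negative off-diagonal
Omega <- matrix(0, p, p)
for (j in 1:(p - 1)) Omega[j, j + 1] <- Omega[j + 1, j] <- 1

## alpha balances the L1 and Laplacian parts of the penalty
fit.net <- APML0(x, y, penalty = "Net", Omega = Omega, alpha = 0.5, nlambda = 10)
```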

A one-step coordinate descent algorithm is applied at each `lambda`, and cross-validation is used to tune the parameters. For `iL0=TRUE`, we further perform hard-thresholding of the coefficients (for `icutB=TRUE`) or select the number of non-zero coefficients (for `icutB=FALSE`), starting from the regularized model obtained at each `lambda`. This is motivated by formulating L0 variable selection in an augmented form, which shows significant improvement over commonly used regularized methods without this technique. Details can be found in our publication.
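The two L0 modes described above can be contrasted with a short sketch (data simulated as in the Examples section; the calls are illustrations, not package code):

```r
library(APML0)
set.seed(1213)
N <- 100; p <- 30
x <- matrix(rnorm(N * p), N, p)
y <- rnorm(N, x[, 1:5] %*% rnorm(5))

## Default: select the number of non-zero coefficients at each lambda
fit.nz  <- APML0(x, y, penalty = "Lasso", nlambda = 10, nfolds = 5,
                 iL0 = TRUE, icutB = FALSE)

## Alternative: hard-threshold the coefficients, searching ncutB cut points
fit.cut <- APML0(x, y, penalty = "Lasso", nlambda = 10, nfolds = 5,
                 iL0 = TRUE, icutB = TRUE, ncutB = 10)
```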

`x` is always standardized prior to fitting the model; for `isd=FALSE`, the estimates are returned on the original scale.

Each element of `wbeta` corresponds to one variable in `x`. Setting an element of `wbeta` to zero imposes no penalty on that variable.
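As an illustration of `wbeta`, the hedged sketch below builds adaptive-lasso weights from an initial least-squares fit (the OLS initializer is an assumption for this sketch, not part of the package):

```r
library(APML0)
set.seed(1213)
N <- 100; p <- 30
x <- matrix(rnorm(N * p), N, p)
y <- rnorm(N, x[, 1:5] %*% rnorm(5))

b.init <- coef(lm(y ~ x))[-1]        # initial estimates, intercept dropped
w <- 1 / pmax(abs(b.init), 1e-8)     # adaptive weights; the floor avoids Inf
fit.ad <- APML0(x, y, penalty = "Lasso", nlambda = 10, wbeta = w)

w2 <- w; w2[1] <- 0                  # zero weight: variable 1 is unpenalized
fit.keep <- APML0(x, y, penalty = "Lasso", nlambda = 10, wbeta = w2)
```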

For `family = "cox"`, `ifastr = TRUE` adopts an efficient way to update the risk set, and the algorithm may then end before all `nlambda` values of `lambda` have been evaluated. To evaluate small values of `lambda`, use `ifastr = FALSE`. The two settings affect only the efficiency of the algorithm, not the estimates. `ifast = TRUE` seems to perform well.

An object with S3 class `"APML0"`:

- `a`: the intercept, for `family="gaussian"` or `family="binomial"`.

- `Beta`: a sparse matrix of coefficients, stored in class `"dgCMatrix"`. For `keep.beta=TRUE`, coefficients for all `lambda` values are returned.

- `Beta0`: coefficients after additionally performing the L0 step, for `iL0=TRUE`.

- `fit`: a data.frame containing `lambda` and the number of non-zero coefficients, together with the cross-validation measures when cross-validation is performed.

- `fit0`: a data.frame containing the corresponding results after additionally performing the L0 step, for `iL0=TRUE`.

- `lambda.min`: value of `lambda` that gives the minimum cross-validation criterion.

- `lambda.opt`: value of `lambda` selected after additionally performing the L0 step, for `iL0=TRUE`.

- `penalty`: penalty type.

- `adaptive`: logical flags for the adaptive versions (adaptive L1 and adaptive Laplacian; see above).

- `flag`: convergence flag (for internal debugging).

The function may terminate early and return `NULL`.
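Given the components listed above, a fitted object can be inspected along these lines (a sketch assuming a cross-validated lasso fit on data simulated as in the Examples section):

```r
library(APML0)
set.seed(1213)
N <- 100; p <- 30
x <- matrix(rnorm(N * p), N, p)
y <- rnorm(N, x[, 1:5] %*% rnorm(5))

fiti2 <- APML0(x, y, penalty = "Lasso", nlambda = 10, nfolds = 10)
fiti2$lambda.min   # lambda minimizing the cross-validation criterion
fiti2$fit          # per-lambda summary
fiti2$Beta0        # coefficients after the additional L0 step
```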

Xiang Li, Shanghong Xie, Donglin Zeng and Yuanjia Wang

Maintainer: Xiang Li <spiritcoke@gmail.com>

Li, X., Xie, S., Zeng, D., Wang, Y. (2018).
*Efficient l0-norm feature selection based on augmented and penalized minimization. Statistics in medicine, 37(3), 473-486.*

https://onlinelibrary.wiley.com/doi/full/10.1002/sim.7526

Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J. (2011).
*Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1), 1-122.*

http://dl.acm.org/citation.cfm?id=2185816

Friedman, J., Hastie, T., Tibshirani, R. (2010).
*Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1), 1.*

http://www.jstatsoft.org/v33/i01/

```r
library(APML0)

### Linear model ###
set.seed(1213)
N <- 100; p <- 30; p1 <- 5
x <- matrix(rnorm(N * p), N, p)
beta <- rnorm(p1)
xb <- x[, 1:p1] %*% beta
y <- rnorm(N, xb)
fiti  <- APML0(x, y, penalty = "Lasso", nlambda = 10)               # lasso
fiti2 <- APML0(x, y, penalty = "Lasso", nlambda = 10, nfolds = 10)  # lasso with 10-fold CV
# attributes(fiti)

### Logistic model ###
set.seed(1213)
N <- 100; p <- 30; p1 <- 5
x <- matrix(rnorm(N * p), N, p)
beta <- rnorm(p1)
xb <- x[, 1:p1] %*% beta
y <- rbinom(n = N, size = 1, prob = 1 / (1 + exp(-xb)))
fiti  <- APML0(x, y, family = "binomial", penalty = "Lasso", nlambda = 10)
fiti2 <- APML0(x, y, family = "binomial", penalty = "Lasso", nlambda = 10, nfolds = 10)
# attributes(fiti)

### Cox model ###
set.seed(1213)
N <- 100; p <- 30; p1 <- 5
x <- matrix(rnorm(N * p), N, p)
beta <- rnorm(p1)
xb <- x[, 1:p1] %*% beta
ty <- rexp(N, exp(xb))            # event times
td <- rexp(N, 0.05)               # censoring times
tcens <- ifelse(td < ty, 1, 0)    # censoring indicator
y <- cbind(time = ty, status = 1 - tcens)
fiti  <- APML0(x, y, family = "cox", penalty = "Lasso", nlambda = 10)
fiti2 <- APML0(x, y, family = "cox", penalty = "Lasso", nlambda = 10, nfolds = 10)
# attributes(fiti)
```

