Description Usage Arguments Details Value Author(s) References See Also Examples
View source: R/ncpen_cpp_wrap.R
Description

The function returns controlled samples and tuning parameters for ncpen by eliminating unnecessary errors.
Usage

control.ncpen(y.vec, x.mat, family = c("gaussian", "binomial", "poisson",
  "multinomial", "cox"), penalty = c("scad", "mcp", "tlp", "lasso",
  "classo", "ridge", "sridge", "mbridge", "mlog"), x.standardize = TRUE,
  intercept = TRUE, lambda = NULL, n.lambda = NULL,
  r.lambda = NULL, w.lambda = NULL, gamma = NULL, tau = NULL,
  alpha = NULL, aiter.max = 100, b.eps = 1e-07)
Arguments

y.vec: (numeric vector) response vector. Must be 0,1 for binomial and 1,2,... for multinomial.
x.mat: (numeric matrix) design matrix without intercept. The censoring indicator must be included in the last column of the design matrix for cox.
family: (character) regression model. Supported models are gaussian, binomial, poisson, multinomial, and cox. Default is gaussian.
penalty: (character) penalty function. Supported penalties are scad (smoothly clipped absolute deviation), mcp (minimax concave penalty), tlp (truncated lasso penalty), lasso, classo (clipped lasso), ridge, sridge (sparse ridge), mbridge (modified bridge), and mlog (modified log). Default is scad.
x.standardize: (logical) whether to standardize x.mat before fitting.
intercept: (logical) whether to include an intercept in the model.
lambda: (numeric vector) user-specified sequence of lambda values. If NULL (default), the sequence is generated automatically from the samples (see the sketch after this list).
n.lambda: (numeric) the number of lambda values to generate.
r.lambda: (numeric) ratio of the smallest lambda value to the largest.
w.lambda: (numeric vector) penalty weights for each coefficient (see references). If a penalty weight is set to 0, the corresponding coefficient is always nonzero.
gamma: (numeric) additional tuning parameter controlling the shrinkage effect of classo and sridge (see references).
tau: (numeric) concavity parameter of the penalties (see references). Default is 3.7 for scad; the other penalties use their own package defaults.
alpha: (numeric) ridge effect (weight between the penalty and the ridge penalty) (see details). Default value is 1; if penalty is ridge or sridge, alpha is set to 0.
aiter.max: (numeric) maximum number of iterations in the coordinate descent (CD) algorithm.
b.eps: (numeric) convergence threshold for the coefficient vector.
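A minimal sketch of supplying user-specified tuning arguments in place of the automatic choices (hedged: the data-generation settings and the hand-built lambda grid below are illustrative, not package defaults):

library(ncpen)

## illustrative data; sam.gen.ncpen ships with ncpen and the settings here are arbitrary
sam = sam.gen.ncpen(n=100,p=20,q=5,cf.min=0.5,cf.max=1,corr=0.5)

## pass a user-defined lambda grid and concavity parameter instead of the defaults
tun = control.ncpen(y.vec=sam$y.vec,x.mat=sam$x.mat,
                    family="gaussian",penalty="mcp",
                    lambda=exp(seq(log(1),log(0.01),length.out=50)),
                    tau=2.5)
head(tun$lambda)  # the lambda sequence recorded for the analysis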
Details

The function is intended for internal use, but it is useful when users want to extract proper tuning parameters for ncpen. Do not supply the samples returned by control.ncpen directly into ncpen or cv.ncpen, to avoid unexpected errors; see the sketch below.
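A minimal sketch of that workflow (assuming the ncpen package is installed; simulation settings are arbitrary): only the tuning parameters returned by control.ncpen, such as the lambda sequence, are reused in a regular ncpen fit on the original samples, not the adjusted ones.

library(ncpen)

sam = sam.gen.ncpen(n=200,p=10,q=5,cf.min=0.5,cf.max=1,corr=0.5)
tun = control.ncpen(y.vec=sam$y.vec,x.mat=sam$x.mat,n.lambda=20)

## reuse only the tuning parameters, not tun$y.vec or tun$x.mat
fit = ncpen(y.vec=sam$y.vec,x.mat=sam$x.mat,lambda=tun$lambda)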
Value

An object with S3 class ncpen containing the components below (a short inspection sketch follows the list).

y.vec: response vector.
x.mat: design matrix adjusted to the supplied options such as family and intercept.
family: regression model.
penalty: penalty.
x.standardize: whether to standardize x.mat.
intercept: whether to include the intercept.
std: scale factors used for x.standardize.
lambda: lambda values for the analysis.
n.lambda: the number of lambda values.
r.lambda: ratio of the smallest lambda value to the largest.
w.lambda: penalty weights for each coefficient.
gamma: additional tuning parameter controlling the shrinkage effect of classo and sridge.
tau: concavity parameter of the penalties (see references).
alpha: ridge effect (amount of ridge penalty); see details.
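A short sketch of inspecting these components (the call settings are illustrative):

library(ncpen)

sam = sam.gen.ncpen(n=200,p=10,q=5,cf.min=0.5,cf.max=1,corr=0.5)
tun = control.ncpen(y.vec=sam$y.vec,x.mat=sam$x.mat,n.lambda=10)

names(tun)        # component names as listed above
head(tun$lambda)  # lambda sequence chosen for the analysis
tun$tau           # concavity parameter actually used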
Author(s)

Dongshin Kim, Sunghoon Kwon, Sangin Lee
References

Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96, 1348-1360.
Zhang, C.H. (2010). Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2), 894-942.
Shen, X., Pan, W., Zhu, Y. and Zhou, H. (2013). On constrained and regularized high-dimensional regression. Annals of the Institute of Statistical Mathematics, 65(5), 807-832.
Kwon, S., Lee, S. and Kim, Y. (2016). Moderately clipped LASSO. Computational Statistics and Data Analysis, 92C, 53-67.
Kwon, S., Kim, Y. and Choi, H. (2013). Sparse bridge estimation with a diverging number of parameters. Statistics and Its Interface, 6, 231-242.
Huang, J., Horowitz, J.L. and Ma, S. (2008). Asymptotic properties of bridge estimators in sparse high-dimensional regression models. The Annals of Statistics, 36(2), 587-613.
Zou, H. and Li, R. (2008). One-step sparse estimates in nonconcave penalized likelihood models. The Annals of Statistics, 36(4), 1509-1533.
Lee, S., Kwon, S. and Kim, Y. (2016). A modified local quadratic approximation algorithm for penalized optimization problems. Computational Statistics and Data Analysis, 94, 275-286.
Examples

library(ncpen)

### linear regression with scad penalty
sam = sam.gen.ncpen(n=200,p=10,q=5,cf.min=0.5,cf.max=1,corr=0.5)
x.mat = sam$x.mat; y.vec = sam$y.vec
tun = control.ncpen(y.vec=y.vec,x.mat=x.mat,n.lambda=10,tau=1)
tun$tau
### multinomial regression with sridge penalty
sam = sam.gen.ncpen(n=200,p=10,q=5,k=3,cf.min=0.5,cf.max=1,corr=0.5,family="multinomial")
x.mat = sam$x.mat; y.vec = sam$y.vec
tun = control.ncpen(y.vec=y.vec,x.mat=x.mat,n.lambda=10,
family="multinomial",penalty="sridge",gamma=10)
### cox regression with mcp penalty
sam = sam.gen.ncpen(n=200,p=10,q=5,r=0.2,cf.min=0.5,cf.max=1,corr=0.5,family="cox")
x.mat = sam$x.mat; y.vec = sam$y.vec
tun = control.ncpen(y.vec=y.vec,x.mat=x.mat,n.lambda=10,family="cox",penalty="scad")