boot_sis | R Documentation
The algorithm first conducts 10 repetitions of a 200-iteration bootstrap. It then checks whether the standard error of the mean of those 10 estimates, for both the upper and the lower confidence limits, is lower than 5%. If it is, the bootstrap sample is adequate and the constructed sample is used to calculate the upper and lower confidence limits. If the condition is not met for more than 5% of the variables, the variability is too high and the number of bootstrap repetitions must be increased: the process of running 200 bootstrap samples 10 times is repeated and the new estimates are added to the previous ones. This continues until the condition is met for more than 95% of the variables.
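The stopping rule described above can be sketched as follows. This is a scalar illustration, not the package's implementation; `one_bootstrap()` is a hypothetical stand-in for one 200-iteration bootstrap returning lower and upper confidence-limit estimates.

```r
# Sketch of the adaptive bootstrap stopping rule (illustration only).
# `one_bootstrap` is a hypothetical function returning list(low = ..., up = ...).
adaptive_boot <- function(one_bootstrap, max_rounds = 50, tol = 0.05) {
  lows <- NULL
  ups  <- NULL
  for (round in seq_len(max_rounds)) {
    # 10 repetitions of a 200-iteration bootstrap
    reps <- replicate(10, one_bootstrap(), simplify = FALSE)
    lows <- c(lows, vapply(reps, `[[`, numeric(1), "low"))
    ups  <- c(ups,  vapply(reps, `[[`, numeric(1), "up"))
    # standard error of the mean of the accumulated estimates
    se_mean <- function(v) sd(v) / sqrt(length(v))
    # relative criterion: SE of the mean below tol (5%) for both limits
    ok_low <- se_mean(lows) < tol * abs(mean(lows))
    ok_up  <- se_mean(ups)  < tol * abs(mean(ups))
    if (ok_low && ok_up) break  # variability low enough: stop adding rounds
  }
  c(CI_low = mean(lows), CI_up = mean(ups))
}
```

If the criterion fails, another block of 10 x 200 bootstrap estimates is accumulated on top of the previous ones, exactly as the text describes.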
boot_sis(
x,
y,
family,
penalty,
sig = 0.05,
covars,
probs = c(0.1, 0.9),
parallel = TRUE
)
x |
The design matrix, of dimensions n * p, without an intercept. Each row is an observation vector. |
y |
The response vector of dimension n * 1. Quantitative for family = "gaussian"; binary for family = "binomial"; a survival object for family = "cox". |
family |
Response type: "gaussian", "binomial" or "cox". |
penalty |
The penalty to be applied in the regularized likelihood subproblems. 'SCAD' (the default), 'MCP', 'lasso', 'enet' (elastic-net), 'msaenet' (multi-step adaptive elastic-net) and 'aenet' (adaptive elastic-net) are provided. |
sig |
Significance threshold for the confidence intervals (default 0.05). |
covars |
Names of the factor covariates. |
probs |
Quantiles to compare for the effect estimation. By default, the 10th and 90th percentiles. |
parallel |
Logical. Specifies whether to conduct parallel computing (default TRUE). |
Returns an object with:
coef: Coefficient estimate.
CI_low: Lower confidence limit of the coefficient.
CI_up: Upper confidence limit of the coefficient.
Est: Effect estimate (mean difference for the gaussian family, hazard ratio for the Cox family and odds ratio for the binomial family) when comparing the two quantiles specified in probs.
CI_low_perc: Lower confidence limit of the effect estimate when comparing the quantiles specified in probs.
CI_up_perc: Upper confidence limit of the effect estimate when comparing the quantiles specified in probs.
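The effect estimate comparing two quantiles can be understood as the fitted coefficient scaled by the distance between those quantiles of the predictor, exponentiated for the Cox and binomial families. The sketch below is an illustration of that idea, not necessarily the package's exact computation; `quantile_effect` is a hypothetical helper.

```r
# Illustration of a quantile-comparison effect estimate (assumption:
# effect = coefficient * (q_high - q_low) on the linear-predictor scale,
# exponentiated for ratio-scale families). Not the package's exact code.
quantile_effect <- function(coef, x, probs = c(0.1, 0.9),
                            family = c("gaussian", "binomial", "cox")) {
  family <- match.arg(family)
  q <- quantile(x, probs = probs)
  delta <- unname(q[2] - q[1])   # distance between the two quantiles of x
  lin <- coef * delta            # change on the linear-predictor scale
  switch(family,
         gaussian = lin,         # mean difference
         binomial = exp(lin),    # odds ratio
         cox      = exp(lin))    # hazard ratio
}
```

For example, with probs = c(0.1, 0.9) the estimate contrasts an observation at the 90th percentile of the predictor against one at the 10th percentile.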
Jianqing Fan, Yang Feng, Diego Franco Saldana, Richard Samworth, Arce Domingo-Relloso and Yichao Wu
Jerome Friedman and Trevor Hastie and Rob Tibshirani (2010) Regularization Paths for Generalized Linear Models Via Coordinate Descent. Journal of Statistical Software, 33(1), 1-22.
Noah Simon and Jerome Friedman and Trevor Hastie and Rob Tibshirani (2011) Regularization Paths for Cox's Proportional Hazards Model Via Coordinate Descent. Journal of Statistical Software, 39(5), 1-13.
Patrick Breheny and Jian Huang (2011) Coordinate Descent Algorithms for Nonconvex Penalized Regression, with Applications to Biological Feature Selection. The Annals of Applied Statistics, 5, 232-253.
Hirotugu Akaike (1973) Information Theory and an Extension of the Maximum Likelihood Principle. In Proceedings of the 2nd International Symposium on Information Theory, BN Petrov and F Csaki (eds.), 267-281.
Gideon Schwarz (1978) Estimating the Dimension of a Model. The Annals of Statistics, 6, 461-464.
Jiahua Chen and Zehua Chen (2008) Extended Bayesian Information Criteria for Model Selection with Large Model Spaces. Biometrika, 95, 759-771.