cv_varsel (R Documentation)
Run the search part and the evaluation part for a projection predictive variable selection. The search part determines the predictor ranking (also known as the solution path), i.e., the best submodel for each submodel size (number of predictor terms). The evaluation part determines the predictive performance of the submodels along the predictor ranking. In contrast to varsel(), cv_varsel() performs a cross-validation (CV) by running the search part with the training data of each CV fold separately (an exception is explained in section "Note" below) and by running the evaluation part on the corresponding test set of each CV fold. A special method is cv_varsel.vsel() because it re-uses the search results from an earlier cv_varsel() (or varsel()) run, as illustrated in the main vignette.
cv_varsel(object, ...)
## Default S3 method:
cv_varsel(object, ...)
## S3 method for class 'vsel'
cv_varsel(
object,
cv_method = object$cv_method %||% "LOO",
nloo = object$nloo,
K = object$K %||% if (!inherits(object, "datafit")) 5 else 10,
cvfits = object$cvfits,
validate_search = object$validate_search %||% TRUE,
...
)
## S3 method for class 'refmodel'
cv_varsel(
object,
method = "forward",
cv_method = if (!inherits(object, "datafit")) "LOO" else "kfold",
ndraws = NULL,
nclusters = 20,
ndraws_pred = 400,
nclusters_pred = NULL,
refit_prj = !inherits(object, "datafit"),
nterms_max = NULL,
penalty = NULL,
verbose = getOption("projpred.verbose", interactive()),
nloo = if (cv_method == "LOO") object$nobs else NULL,
K = if (!inherits(object, "datafit")) 5 else 10,
cvfits = object$cvfits,
search_control = NULL,
lambda_min_ratio = 1e-05,
nlambda = 150,
thresh = 1e-06,
validate_search = TRUE,
seed = NA,
search_terms = NULL,
search_out = NULL,
parallel = getOption("projpred.prll_cv", FALSE),
...
)
object: An object of class refmodel (for the refmodel method), an object of class vsel from an earlier varsel() or cv_varsel() run (for the vsel method), or a fitted model object that can be converted into a reference model (default method).

...: Further arguments passed on to the other methods.

cv_method: The CV method, either "LOO" (PSIS-LOO-CV) or "kfold" (K-fold CV). The default is "LOO", except for datafit reference models, for which it is "kfold" (see the Usage section above).

nloo: Only relevant if cv_method is "LOO". Number of observations for which the LOO-CV performance evaluation is actually performed; the default is all observations. A value smaller than the number of observations leads to a subsampled LOO-CV (see the Magnusson et al., 2020, reference below).

K: Only relevant if cv_method is "kfold". Number of folds in the K-fold CV.

cvfits: Only relevant if cv_method is "kfold". Pre-computed fold-wise reference model refits which, if supplied, are re-used so that the reference model does not have to be refit K times within cv_varsel().

validate_search: A single logical value indicating whether to cross-validate also the search part, i.e., whether to run the search separately for each CV fold (TRUE) or to run only a single full-data search whose path is then evaluated by CV (FALSE). See also section "Note" below.

method: The method for the search part. Possible options are "forward" (forward search) and "L1" (L1 search). See section "Details" below for restrictions.

ndraws: Number of posterior draws used in the search part. Ignored if nclusters is not NULL and in case of L1 search (because L1 search always uses a single cluster).

nclusters: Number of clusters of posterior draws used in the search part. Ignored in case of L1 search (because L1 search always uses a single cluster). For the meaning of NULL, see argument ndraws.

ndraws_pred: Only relevant if refit_prj is TRUE. Number of posterior draws used in the evaluation (i.e., prediction) part. Ignored if nclusters_pred is not NULL.

nclusters_pred: Only relevant if refit_prj is TRUE. Number of clusters of posterior draws used in the evaluation part. For the meaning of NULL, see argument ndraws_pred.

refit_prj: For the evaluation part, should the projections onto the submodels along the predictor ranking be performed again using ndraws_pred draws or nclusters_pred clusters (TRUE), or should the projections from the search part be re-used (FALSE)?

nterms_max: Maximum submodel size (number of predictor terms) up to which the search is continued. If NULL, an internal default based on the number of predictor terms in the reference model (or in search_terms) is used.

penalty: Only relevant for L1 search. A numeric vector determining the relative penalties or costs for the predictors. A value of 0 means that a predictor has no cost and is therefore selected first, whereas Inf means that a predictor is never selected. If NULL, all predictors receive the same penalty.

verbose: A single logical value indicating whether to print out additional information during the computations.

search_control: A list of control arguments (tuning parameters) for the search. In case of L1 search, this may contain elements lambda_min_ratio, nlambda, and thresh (which replace the deprecated arguments of the same names below).

lambda_min_ratio: Deprecated (please use search_control instead). Only relevant for L1 search.

nlambda: Deprecated (please use search_control instead). Only relevant for L1 search.

thresh: Deprecated (please use search_control instead). Only relevant for L1 search.

seed: Pseudorandom number generation (PRNG) seed by which the same results can be obtained again if needed. Passed to argument seed of set.seed(); NA (the default) means that set.seed() is not called.

search_terms: Only relevant for forward search. A custom character vector of predictor term blocks to consider for the search. Section "Details" below describes more precisely what "predictor term block" means. The intercept ("1") does not need to be included here; it is always added internally.

search_out: Intended for internal use.

parallel: A single logical value indicating whether to run costly parts of the CV in parallel (TRUE) or not (FALSE). See also section "Note" below.
Arguments ndraws, nclusters, nclusters_pred, and ndraws_pred are automatically truncated at the number of posterior draws in the reference model (which is 1 for datafits). Using fewer draws or clusters in ndraws, nclusters, nclusters_pred, or ndraws_pred than posterior draws in the reference model may result in slightly inaccurate projection performance. Increasing these arguments affects the computation time linearly.
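As an illustration of this trade-off (hypothetical settings; fit denotes a reference model fit such as the one in the Examples section at the end):

# Coarser clustering in both parts: faster, but possibly slightly less
# accurate performance estimates.
cvvs_coarse <- cv_varsel(fit, nclusters = 5, nclusters_pred = 10)
# More draws in the evaluation part: slower, but closer to using the full
# posterior.
cvvs_fine <- cv_varsel(fit, nclusters = 20, ndraws_pred = 400)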
For argument method, there are some restrictions: For a reference model with multilevel or additive formula terms, or for a reference model set up for the augmented-data projection, only forward search is available. Furthermore, argument search_terms requires a forward search to take effect.
L1 search is faster than forward search, but forward search may be more accurate. Furthermore, forward search may find a sparser model with comparable performance to that found by L1 search, but it may also overfit when more predictors are added. This overfit can be detected by running search validation (see cv_varsel()).
An L1 search may select an interaction term before all involved lower-order interaction terms (including main-effect terms) have been selected. In projpred versions > 2.6.0, the resulting predictor ranking is automatically modified so that the lower-order interaction terms come before this interaction term, but if this is conceptually undesired, choose the forward search instead.
The elements of the search_terms character vector don't need to be individual predictor terms. Instead, they can be building blocks consisting of several predictor terms connected by the + symbol. To understand how these building blocks work, it is important to know how projpred's forward search works: It starts with an empty vector chosen which will later contain the already selected predictor terms. Then, the search iterates over model sizes j \in \{0, ..., J\} (with J denoting the maximum submodel size, not counting the intercept). The candidate models at model size j are constructed from those elements of search_terms which yield model size j when combined with the chosen predictor terms. Note that sometimes, there may be no candidate models for model size j. Also note that internally, search_terms is expanded to include the intercept ("1"), so the first step of the search (model size 0) always consists of the intercept-only model as the only candidate.
As a search_terms example, consider a reference model with formula y ~ x1 + x2 + x3. Then, to ensure that x1 is always included in the candidate models, specify search_terms = c("x1", "x1 + x2", "x1 + x3", "x1 + x2 + x3") (or, in a simpler way that leads to the same results, search_terms = c("x1", "x1 + x2", "x1 + x3"), for which the helper function force_search_terms() exists). This search would start with y ~ 1 as the only candidate at model size 0. At model size 1, y ~ x1 would be the only candidate. At model size 2, y ~ x1 + x2 and y ~ x1 + x3 would be the two candidates. At the last model size of 3, y ~ x1 + x2 + x3 would be the only candidate. As another example, to exclude x1 from the search, specify search_terms = c("x2", "x3", "x2 + x3") (or, in a simpler way that leads to the same results, search_terms = c("x2", "x3")).
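A short sketch of how the first of these examples could be built with the helper function mentioned above (assuming a reference model fit fit with predictors x1, x2, and x3):

# search_terms forcing x1 into every candidate model while x2 and x3 stay
# optional.
st <- force_search_terms(forced_terms = "x1", optional_terms = c("x2", "x3"))
# search_terms only takes effect with forward search.
cvvs_forced <- cv_varsel(fit, method = "forward", search_terms = st)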
An object of class vsel. The elements of this object are not meant to be accessed directly but instead via helper functions (see the main vignette and projpred-package).
If validate_search is FALSE, the search is not included in the CV so that only a single full-data search is run. If the number of observations is large, the fast PSIS-LOO-CV along the full-data search path is likely to be accurate. If the number of observations is small or moderate, the fast PSIS-LOO-CV along the full-data search path is likely to have an optimistic bias in the middle of the search path. This result can be used to guide further actions, and the optimistic bias can be greatly reduced by using validate_search = TRUE.
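For instance (a sketch; fit again denotes a reference model fit as in the Examples section at the end):

# Quick run: a single full-data search whose path is evaluated by PSIS-LOO-CV.
cvvs_quick <- cv_varsel(fit, validate_search = FALSE)
# Full run: the search itself is cross-validated as well, which reduces the
# optimistic bias but is considerably more expensive.
cvvs_full <- cv_varsel(fit, validate_search = TRUE)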
PSIS uses the Pareto-\hat{k} diagnostic to assess the reliability of PSIS-LOO-CV. Whether the Pareto-\hat{k} diagnostics are shown as warnings is controlled by the global option projpred.warn_psis (default TRUE). See loo::loo-glossary for how to interpret the Pareto-\hat{k} values and the warning thresholds. projpred does not support the usually recommended moment matching (see loo::loo_moment_match() and brms::loo_moment_match()), mixture importance sampling (vignette("loo2-mixis", package = "loo")), or reloo-ing (brms::reloo()). If the reference model's PSIS-LOO-CV Pareto-\hat{k} values are good but there are high Pareto-\hat{k} values for the projected models, you can try increasing the number of draws used for the PSIS-LOO-CV (ndraws in case of refit_prj = FALSE; ndraws_pred in case of refit_prj = TRUE). If increasing the number of draws does not help, if the reference model's PSIS-LOO-CV Pareto-\hat{k} values are high, and if the reference model's PSIS-LOO-CV results change substantially when using moment matching, mixture importance sampling, or reloo-ing, we recommend using K-fold CV within projpred.
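A sketch of that fallback (the value of K is only an illustration; fit as in the Examples section at the end):

# Use a K-fold CV instead of PSIS-LOO-CV.
cvvs_kfold <- cv_varsel(fit, cv_method = "kfold", K = 5)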
For PSIS-LOO-CV, projpred calls loo::psis() (or, exceptionally, loo::sis(), see below) with r_eff = NA. This is only a problem if there was extreme autocorrelation between the MCMC iterations when the reference model was built. In those cases, however, the reference model should not have been used anyway, so we don't expect projpred's r_eff = NA to be a problem.
PSIS cannot be used if the number of draws or clusters is too small. In such cases, projpred resorts to standard importance sampling (SIS) and shows a message about this. Throughout the documentation, the term "PSIS" is used even though projpred in fact resorts to SIS in these special cases. If SIS is used, check that the reference model's PSIS-LOO-CV Pareto-\hat{k} values are good.
With parallel = TRUE, costly parts of projpred's CV can be run in parallel. Costly parts are the fold-wise searches and performance evaluations in case of validate_search = TRUE. (Note that in case of K-fold CV, the K reference model refits are not affected by argument parallel; only projpred's CV is affected.) The parallelization is powered by the foreach package. Thus, any parallel (or sequential) backend compatible with foreach can be used, e.g., the backends from packages doParallel, doMPI, or doFuture. For GLMs, this CV parallelization should work reliably, but for other models (such as GLMMs), it may lead to excessive memory usage which in turn may crash the R session (on Unix systems, setting an appropriate memory limit via unix::rlimit_as() may avoid crashing the whole machine). However, the problem of excessive memory usage is less pronounced for the CV parallelization than for the projection parallelization described in projpred-package. In that regard, the CV parallelization is recommended over the projection parallelization.
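A minimal sketch of registering a doParallel backend before the CV (the number of workers is only an illustration; fit as in the Examples section at the end):

library(doParallel)
# Register a parallel backend which foreach (and hence projpred's CV) will use.
registerDoParallel(cores = 2)
cvvs_prll <- cv_varsel(fit, parallel = TRUE)
# De-register the implicit cluster when done.
stopImplicitCluster()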
Måns Magnusson, Michael Riis Andersen, Johan Jonasson, Aki Vehtari (2020). Leave-one-out cross-validation for Bayesian model comparison in large data. Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR 108:341-351.
Aki Vehtari, Andrew Gelman, and Jonah Gabry (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5):1413-1432. doi:10.1007/s11222-016-9696-4.
Aki Vehtari, Daniel Simpson, Andrew Gelman, Yuling Yao, and Jonah Gabry (2024). Pareto smoothed importance sampling. Journal of Machine Learning Research, 25(72):1-58.
varsel()
# Data:
dat_gauss <- data.frame(y = df_gaussian$y, df_gaussian$x)
# The `stanreg` fit which will be used as the reference model (with small
# values for `chains` and `iter`, but only for technical reasons in this
# example; this is not recommended in general):
fit <- rstanarm::stan_glm(
y ~ X1 + X2 + X3 + X4 + X5, family = gaussian(), data = dat_gauss,
QR = TRUE, chains = 2, iter = 1000, refresh = 0, seed = 9876
)
# Run cv_varsel() (with L1 search and small values for `K`, `nterms_max`, and
# `nclusters_pred`, but only for the sake of speed in this example; this is
# not recommended in general):
cvvs <- cv_varsel(fit, method = "L1", cv_method = "kfold", K = 2,
nterms_max = 3, nclusters_pred = 10, seed = 5555)
# Now see, for example, `?print.vsel`, `?plot.vsel`, `?suggest_size.vsel`,
# and `?ranking` for possible post-processing functions.
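# For instance (a sketch using the `cvvs` object created above):
print(cvvs)
plot(cvvs)
suggest_size(cvvs)
rk <- ranking(cvvs)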