sp_vim  R Documentation 
Compute estimates and confidence intervals for the Shapley population variable importance measures (SPVIMs), using cross-fitting.
sp_vim(
  Y = NULL,
  X = NULL,
  V = 5,
  type = "r_squared",
  SL.library = c("SL.glmnet", "SL.xgboost", "SL.mean"),
  univariate_SL.library = NULL,
  gamma = 1,
  alpha = 0.05,
  delta = 0,
  na.rm = FALSE,
  stratified = FALSE,
  verbose = FALSE,
  sample_splitting = TRUE,
  final_point_estimate = "split",
  C = rep(1, length(Y)),
  Z = NULL,
  ipc_scale = "identity",
  ipc_weights = rep(1, length(Y)),
  ipc_est_type = "aipw",
  scale = "identity",
  scale_est = TRUE,
  cross_fitted_se = TRUE,
  ...
)
Y 
the outcome. 
X 
the covariates.
V 
the number of folds for cross-fitting; defaults to 5.
type 
the type of importance to compute; defaults to "r_squared".
SL.library 
a character vector of learners to pass to SuperLearner.
univariate_SL.library 
(optional) a character vector of learners to pass to SuperLearner when fitting univariate regressions.
gamma 
the fraction of the sample size to use when sampling subsets (e.g., gamma = 1 samples as many subsets as there are observations); defaults to 1.
alpha 
the level at which to compute the confidence interval; defaults to 0.05, corresponding to a 95% confidence interval.
delta 
the value of the δ-null (i.e., testing if importance < δ); defaults to 0.
na.rm 
should we remove NAs in the outcome and fitted values during computation? Defaults to FALSE.
stratified 
if run_regression = TRUE, should the generated folds be stratified based on the outcome (helps to ensure class balance across cross-validation folds)?
verbose 
should sp_vim print progress updates? Defaults to FALSE.
sample_splitting 
should we use sample-splitting to estimate the full and reduced predictiveness? Defaults to TRUE.
final_point_estimate 
if sample splitting is used, should the final point estimates be based only on the sample-split folds used for inference ("split", the default)?
C 
the indicator of coarsening (1 denotes observed, 0 denotes unobserved). 
Z 
either (i) NULL (the default, in which case the argument C above must be all ones), or (ii) a character vector specifying the variables among Y and X that are thought to play a role in the coarsening mechanism.
ipc_scale 
the scale on which the inverse probability weight correction (if any) should be applied; defaults to "identity" (other options are "log" and "logit").
ipc_weights 
weights for the computed influence curve (i.e., inverse probability weights for coarsened-at-random settings). Assumed to be already inverted (i.e., ipc_weights = 1 / [estimated probability weights]).
ipc_est_type 
the type of procedure used for coarsened-at-random settings; options are "ipw" (for inverse probability weighting) or "aipw" (for augmented inverse probability weighting). Only used if C is not all equal to 1.
scale 
should CIs be computed on the original ("identity") scale or another scale? (Other options are "log" and "logit".)
scale_est 
should the point estimate be scaled to be greater than or equal to 0? Defaults to TRUE.
cross_fitted_se 
should we use cross-fitting to estimate the standard errors? Defaults to TRUE.
... 
other arguments to the estimation tool, see "See also". 
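As a rough illustration of the idea behind the scale argument, a confidence interval can be computed on the logit scale and back-transformed so that the endpoints respect the (0, 1) range of an R-squared-type measure. The sketch below uses the delta method in base R; logit_ci is an illustrative helper under assumed conventions, not part of the vimp package.

```r
# Sketch: CI computed on the logit scale, then back-transformed.
# Illustrative only -- not vimp's exact implementation.
logit_ci <- function(est, se, alpha = 0.05) {
  z <- qnorm(1 - alpha / 2)
  # delta method: the SE of qlogis(est) is approximately se / (est * (1 - est))
  se_logit <- se / (est * (1 - est))
  # back-transform the logit-scale interval to the original scale
  plogis(qlogis(est) + c(-1, 1) * z * se_logit)
}

logit_ci(est = 0.3, se = 0.05)
```

Because the transformation is monotone, the back-transformed interval always lies in (0, 1), unlike an identity-scale interval, which can cross 0 for small estimates.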
We define the SPVIM as the weighted average of the population difference in predictiveness over all subsets of features not containing feature j.
This is equivalent to finding the solution to a population weighted least squares problem. This key fact allows us to estimate the SPVIM using weighted least squares, where we first sample subsets from the power set of all possible features using the Shapley sampling distribution; then use cross-fitting to obtain estimators of the predictiveness of each sampled subset; and finally, solve the least squares problem given in Williamson and Feng (2020).
See the paper by Williamson and Feng (2020) for more details on the mathematics behind this function, and the validity of the confidence intervals.
In the interest of transparency, we return most of the calculations within the vim object. This results in a list containing:
the library of learners passed to SuperLearner
the estimated predictiveness measure for each sampled subset
the fitted values on the entire dataset from the chosen method for each sampled subset
the crossfitted predicted values from the chosen method for each sampled subset
the estimated SPVIM value for each feature
the influence functions for each sampled subset
the contributions to the variance from estimating predictiveness
the contributions to the variance from sampling subsets
a list of the SPVIM influence function contributions
the standard errors for the estimated variable importance
the (1 - α) x 100% confidence intervals based on the variable importance estimates
p-values for the null hypothesis test of zero importance for each variable
the test statistic for each null hypothesis test of zero importance
a hypothesis testing decision for each null hypothesis test (for each variable having zero importance)
the fraction of the sample size used when sampling subsets
the level, for confidence interval calculation
the delta value used for hypothesis testing
the outcome
the weights
the scale on which CIs were computed
a tibble with the estimates, SEs, CIs, hypothesis testing decisions, and p-values
An object of class vim. See Details for more information.
SuperLearner for specific usage of the SuperLearner function and package.
n <- 100
p <- 2
# generate the data
x <- data.frame(replicate(p, stats::runif(n, -5, 5)))
# apply the function to the x's
smooth <- (x[, 1] / 5)^2 * (x[, 1] + 7) / 5 + (x[, 2] / 3)^2
# generate Y ~ Normal (smooth, 1)
y <- as.matrix(smooth + stats::rnorm(n, 0, 1))
# set up a library for SuperLearner; note simple library for speed
library("SuperLearner")
learners <- c("SL.glm")
# -----------------------------
# using Super Learner (with a small number of CV folds,
# for illustration only)
# -----------------------------
set.seed(4747)
est <- sp_vim(Y = y, X = x, V = 2, type = "r_squared",
              SL.library = learners, alpha = 0.05)