eval_cv — R Documentation
Create a cross-validation evaluator
eval_cv(
  nfolds = 5,
  ntrials = 1,
  conf_type = c("norm", "perc"),
  contrasts = FALSE
)
nfolds
integer. Number of cross-validation folds.

ntrials
integer. Number of cross-validation trials to run.

conf_type
string. How to calculate the confidence interval of performance metrics across trials: 'norm' calculates the standard error using the 'sd' function; 'perc' calculates the lower and upper confidence values using the 'quantile' function.

contrasts
logical. Whether to compare test performance of fits within each group-outcome-stat combination (i.e., between predictors). This results in a p-value for each model comparison, computed as the proportion of trials in which one model had lower performance than the other. Thus, a p-value of 0.05 indicates that one model performed worse than the other in 5% of trials. If ntrials == 1, this value can only be 0 or 1, indicating which model is better.
Value

an aba model
data <- adnimerge %>% dplyr::filter(VISCODE == 'bl')

model <- aba_model() %>%
  set_data(data) %>%
  set_groups(everyone()) %>%
  set_outcomes(ConvertedToAlzheimers, CSF_ABETA_STATUS_bl) %>%
  set_predictors(
    PLASMA_ABETA_bl, PLASMA_PTAU181_bl, PLASMA_NFL_bl,
    c(PLASMA_ABETA_bl, PLASMA_PTAU181_bl, PLASMA_NFL_bl)
  ) %>%
  set_stats('glm') %>%
  set_evals('cv') %>%
  fit()