View source: R/dCVnet_nullperformance.R
performance_with_permuted_labels
Reruns a dCVnet model with permuted y-values. This breaks any link between the outcome (y) and the features. The results can be used in a permutation test to verify that dCVnet's double (nested) cross-validation is not leaking information and thereby over-estimating performance.
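To make the idea concrete, the sketch below (illustrative only, not dCVnet internals) shows what permuting the outcome does: class frequencies are preserved, but the pairing between y and the rows of the feature data is destroyed.

## Illustration only: label permutation in general,
## not what performance_with_permuted_labels() does internally.
set.seed(1)
y <- iris$Species
y_perm <- sample(y)              # shuffle the outcome labels
all(table(y) == table(y_perm))   # TRUE: class frequencies are unchanged
mean(y == y_perm)                # only a chance fraction of labels stay on their original rows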
performance_with_permuted_labels(x, n_times = 5)
x: a dCVnet model.

n_times: the number of times to permute the labels and obtain performance measures.
Returns a named list containing the observed (original) performance measures extracted from x, and a table of the equivalent measures obtained under permutation of the outcome variable.
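The two elements can be accessed by name, as in the example below (a minimal sketch; nullresult is assumed to hold the result of a call to performance_with_permuted_labels, and the layout of the permuted table is inferred from the indexing used in the example):

## Sketch of accessing the result:
nullresult$observed              # performance measures of the original model
nullresult$permuted["AUROC", ]   # one AUROC per permutation run (rows = measures)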
This function repeatedly reruns dCVnet, which is itself time-consuming, so it should be expected to take a very long time.
Further, in most cases evaluating permuted performance is not necessary: assessing performance with permuted labels is intended only to confirm that the implementation of cross-validation in dCVnet does not leak information.
## Not run:
siris <- droplevels(subset(iris, iris$Species != "versicolor"))
siris[,1:4] <- scale(siris[,1:4])
set.seed(1)
model <- dCVnet(y = siris$Species,
                f = ~ Sepal.Length + Sepal.Width +
                  Petal.Length + Petal.Width,
                data = siris,
                nrep_outer = 3, nrep_inner = 3,
                alphalist = c(0.5),
                opt.lambda.type = "1se")
nullresult <- performance_with_permuted_labels(model, n_times = 3)
# consider the AUROC for the null:
range(unlist(nullresult$permuted["AUROC",]))
# [1] 0.3627333 0.5235333
# vs. the observed value:
nullresult$observed["AUROC"]
# AUROC
# 1
## End(Not run)
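A common way to summarise such results is an empirical permutation p-value. The sketch below is not part of dCVnet's output: it assumes nullresult from the example above and uses the standard (k + 1) / (n + 1) estimator.

## Sketch: empirical permutation p-value for AUROC
## (assumes nullresult from the example above; not returned by dCVnet itself)
obs  <- nullresult$observed["AUROC"]
perm <- unlist(nullresult$permuted["AUROC", ])
(sum(perm >= obs) + 1) / (length(perm) + 1)
# with only n_times = 3 permutations the smallest attainable p-value is 0.25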