boot.paired.roc: Bootstrap paired ROC curves


Description

Given two numerical predictors for the same outcome on the same set of samples, this function enables the bootstrapping of the paired ROC curves of the two prediction models. During bootstrapping, the same set of samples is used for both curves in each iteration, preserving the correlation between the two models.

Usage

boot.paired.roc(pred1, pred2, true.class, stratify = TRUE,
  n.boot = 1000, use.cache = FALSE, tie.strategy = NULL)

Arguments

pred1

Numerical predictions for the first classifier.

pred2

Numerical predictions for the second classifier.

true.class

A logical vector. TRUE indicates that the sample belongs to the positive class.

stratify

Logical. Indicates whether stratified bootstrap is used. Defaults to TRUE. Non-stratified bootstrap is not yet implemented.

n.boot

A number that will be coerced to integer. Specifies the number of bootstrap replicates. Defaults to 1000.

use.cache

If TRUE, the bootstrapping results for the ROC curve will be pre-cached. This increases speed when the object is used often, but also takes up more memory.

tie.strategy

How to handle ties. See details below.

Value

A list of class fbroc.paired.roc, containing the elements:

prediction1

Input predictions for first model.

prediction2

Input predictions for second model.

true.class

Input classes.

n.thresholds1

Number of thresholds of the first predictor.

n.thresholds2

Number of thresholds of the second predictor.

n.boot

Number of bootstrap replicates.

use.cache

Indicates if cache is used for this ROC object.

tie.strategy

The setting used for handling ties in the predictors.

n.pos

Number of positive observations.

n.neg

Number of negative observations.

roc1

A data.frame containing the thresholds of the first ROC curve and the TPR and FPR at these thresholds.

roc2

A data.frame containing the thresholds of the second ROC curve and the TPR and FPR at these thresholds.

auc1

The AUC of the first ROC curve.

auc2

The AUC of the second ROC curve.

boot.tpr1

If the cache is enabled, a matrix containing the bootstrapped TPR at the thresholds for the first predictor.

boot.fpr1

If the cache is enabled, a matrix containing the bootstrapped FPR at the thresholds for the first predictor.

boot.tpr2

If the cache is enabled, a matrix containing the bootstrapped TPR at the thresholds for the second predictor.

boot.fpr2

If the cache is enabled, a matrix containing the bootstrapped FPR at the thresholds for the second predictor.
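The elements listed above can be accessed directly on the returned list. A minimal sketch, assuming the fbroc package and its bundled roc.examples data are available:

```r
# Sketch: inspecting the fbroc.paired.roc list returned by boot.paired.roc.
library(fbroc)
data(roc.examples)

result <- boot.paired.roc(roc.examples$Cont.Pred, roc.examples$Cont.Pred.Outlier,
                          roc.examples$True.Class, n.boot = 100)

result$auc1 - result$auc2    # observed AUC difference between the two models
head(result$roc1)            # thresholds with TPR/FPR for the first curve
result$n.pos + result$n.neg  # total number of samples in the input
```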

Caching

If you enable caching, boot.paired.roc calculates the requested number of bootstrap samples and saves the TPR and FPR values for each iteration. This can take up a sizable portion of memory, but it speeds up subsequent operations. Caching is useful if you plan to use the ROC curve with multiple fbroc functions.
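The trade-off can be observed directly; a hedged sketch (assuming fbroc is installed), where the cached object reuses its stored bootstrap matrices instead of resampling on each call:

```r
# Sketch: memory/speed trade-off of use.cache = TRUE.
library(fbroc)
data(roc.examples)

cached <- boot.paired.roc(roc.examples$Cont.Pred, roc.examples$Cont.Pred.Outlier,
                          roc.examples$True.Class, n.boot = 1000, use.cache = TRUE)

# Subsequent operations reuse the pre-computed bootstrap results:
system.time(conf(cached, conf.for = "tpr", steps = 50))
system.time(perf(cached, "auc"))

# The boot.tpr/boot.fpr matrices add to the object's memory footprint:
object.size(cached)
```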

Ties

You can set this parameter to either 1 or 2; it defaults to 2. If your numerical predictor has no ties, both settings produce the same results. With tie.strategy set to 1, the ROC curve is built by connecting the TPR/FPR pairs for neighboring thresholds. With tie.strategy set to 2, the TPR reported at a specific FPR is the best TPR at any FPR less than or equal to the specified FPR.

See Also

boot.roc, plot.fbroc.paired.roc, perf.fbroc.paired.roc

Examples

data(roc.examples)
# Do not use cache
example <- boot.paired.roc(roc.examples$Cont.Pred, roc.examples$Cont.Pred.Outlier,
                          roc.examples$True.Class, n.boot = 500)
perf(example, "auc") # estimate difference in auc
perf(example, "tpr", fpr = 0.5) # estimate difference in TPR at a FPR of 50%
plot(example) # show plot
# Cached mode
example <- boot.paired.roc(roc.examples$Cont.Pred, roc.examples$Cont.Pred.Outlier,
                          roc.examples$True.Class, n.boot = 1000, use.cache = TRUE)
conf(example, conf.for = "tpr", steps = 10) # get confidence regions for TPR at FPR
conf(example, conf.for = "fpr", steps = 10) # get confidence regions for FPR at TPR
perf(example, "fpr", tpr = 0.9) # estimate difference in FPR at a TPR of 90%                     

Example output

Loading required package: ggplot2


                Bootstrapped ROC performance metric

Metric: AUC
Bootstrap replicates: 500

Classifier 1: 
Observed:0.929
Std. Error: 0.019
95% confidence interval:
0.889 0.962

Classifier 2: 
Observed:0.895
Std. Error: 0.026
95% confidence interval:
0.835 0.94

Delta: 
Observed:0.034
Std. Error: 0.019
95% confidence interval:
0 0.08

Correlation: 0.69



                Bootstrapped ROC performance metric

Metric: TPR at a fixed FPR of 0.5
Bootstrap replicates: 500

Classifier 1: 
Observed:0.988
Std. Error: 0.011
95% confidence interval:
0.962 1

Classifier 2: 
Observed:0.988
Std. Error: 0.012
95% confidence interval:
0.962 1

Delta: 
Observed:0
Std. Error: 0.006
95% confidence interval:
0 0.025

Correlation: 0.85

   FPR Delta.TPR Lower.Delta.TPR Upper.Delta.TPR
1  1.0    0.0000               0       0.0000000
2  0.9    0.0000               0       0.0000000
3  0.8    0.0000               0       0.0000000
4  0.7    0.0000               0       0.0000000
5  0.6    0.0000               0       0.0125000
6  0.5    0.0000               0       0.0128125
7  0.4    0.0000               0       0.0503125
8  0.3    0.0375               0       0.0875000
9  0.2    0.0000               0       0.1125000
10 0.1    0.1000               0       0.4003125
11 0.0    0.4000               0       0.6000000
   TPR Delta.FPR Lower.Delta.FPR Upper.Delta.FPR
1  1.0   -0.0125         -0.0500               0
2  0.9   -0.0250         -0.0625               0
3  0.8   -0.0250         -0.0750               0
4  0.7   -0.0375         -0.0750               0
5  0.6   -0.0375         -0.0875               0
6  0.5   -0.0375         -0.0875               0
7  0.4   -0.0375         -0.0875               0
8  0.3   -0.0375         -0.0875               0
9  0.2   -0.0375         -0.0875               0
10 0.1   -0.0375         -0.0875               0
11 0.0    0.0000          0.0000               0


                Bootstrapped ROC performance metric

Metric: FPR at a fixed TPR of 0.9
Bootstrap replicates: 1000

Classifier 1: 
Observed:0.262
Std. Error: 0.079
95% confidence interval:
0.088 0.375

Classifier 2: 
Observed:0.288
Std. Error: 0.079
95% confidence interval:
0.112 0.4

Delta: 
Observed:-0.025
Std. Error: 0.018
95% confidence interval:
-0.063 0

Correlation: 0.98

fbroc documentation built on May 2, 2019, 11:39 a.m.
