perf.fbroc.paired.roc: Calculate performance for paired bootstrapped ROC curves


Description

For a given performance metric, this function calculates the difference in performance between the two paired predictors stored in an object of class fbroc.paired.roc, in addition to the individual performance of each predictor.

Usage

## S3 method for class 'fbroc.paired.roc'
perf(roc, metric = "auc", conf.level = 0.95,
  tpr = NULL, fpr = NULL, correct.partial.auc = TRUE,
  show.partial.auc.warning = TRUE, ...)

Arguments

roc

An object of class fbroc.paired.roc.

metric

A performance metric. Select "auc" for the AUC, "partial.auc" for the partial AUC, "tpr" for the TPR at a fixed FPR and "fpr" for the FPR at a fixed TPR.

conf.level

The confidence level of the confidence interval.

tpr

The fixed TPR at which the FPR is to be evaluated when "fpr" is selected as the metric. If the partial AUC is investigated, a TPR interval over which the partial area is calculated.

fpr

The fixed FPR at which the TPR is to be evaluated when "tpr" is selected as the metric. If the partial AUC is investigated, an FPR interval over which the partial area is calculated.

correct.partial.auc

Whether to correct the partial AUC for easier interpretation using the McClish correction. Details are given in the note below. Defaults to TRUE.

show.partial.auc.warning

Whether to warn about corrected partial AUC values below 0.5. Defaults to TRUE.

...

Further arguments, which are not used at this time.

Note on partial AUC correction

The partial AUC is hard to interpret without considering the range over which it is calculated. Not only does the partial AUC scale with the width of the interval over which it is calculated, it also depends on where that interval is located. For example, if the ROC curve is integrated over the FPR interval [0, 0.1], a completely random, non-discriminative classifier has a partial AUC of 0.005, but the same ROC curve integrated over the interval [0.9, 1] yields a partial AUC of 0.095.
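
Both values follow from integrating the diagonal ROC curve (TPR = FPR) of a random classifier; a quick sanity check in base R:

integrate(function(x) x, lower = 0, upper = 0.1)$value   # 0.005
integrate(function(x) x, lower = 0.9, upper = 1)$value   # 0.095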

The correction by McClish produces a corrected partial AUC given by:

0.5 * (1 + (partial.auc - auc.min) / (auc.max - auc.min))

Here auc.min is the partial AUC achieved by a non-discriminative classifier and auc.max is the partial AUC achieved by a perfect classifier. Thus, after correction, a non-discriminative classifier always has a corrected partial AUC of 0.5 and a perfect classifier always has a corrected partial AUC of 1.

Unfortunately, the corrected partial AUC cannot be interpreted in a meaningful way when the ROC curve lies below the diagonal of the non-discriminative classifier, since this produces corrected partial AUC values below 0.5. For this reason, fbroc gives a warning if the bootstrap produces corrected partial AUC values below 0.5.
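
To make the correction concrete, the following sketch applies the formula above by hand for an FPR interval. The helper mcclish.correct is a made-up name for illustration only; fbroc applies the correction internally when correct.partial.auc = TRUE.

# Illustrative helper (hypothetical, not part of fbroc):
# over an FPR interval c(a, b), a random classifier integrates to
# (b^2 - a^2) / 2 and a perfect classifier to b - a.
mcclish.correct <- function(partial.auc, fpr.interval) {
  auc.min <- (fpr.interval[2]^2 - fpr.interval[1]^2) / 2
  auc.max <- fpr.interval[2] - fpr.interval[1]
  0.5 * (1 + (partial.auc - auc.min) / (auc.max - auc.min))
}
mcclish.correct(0.005, c(0, 0.1)) # random classifier  -> 0.5
mcclish.correct(0.1, c(0, 0.1))   # perfect classifier -> 1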

References

McClish, D. K. (1989). Analyzing a portion of the ROC curve. Medical Decision Making, 9(3), 190-195. http://mdm.sagepub.com/content/9/3/190.abstract

Examples

data(roc.examples)
example <- boot.paired.roc(roc.examples$Cont.Pred, roc.examples$Cont.Pred.Outlier,
                               roc.examples$True.Class, n.boot = 100)
perf(example, metric = "auc")
# Get difference in TPR at a FPR of 20%
perf(example, metric = "tpr", fpr = 0.2)
# Get the McClish-corrected partial AUC over the FPR interval [0, 0.25]
perf(example, metric = "partial.auc", fpr = c(0, 0.25),
     show.partial.auc.warning = FALSE)
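
The remaining metric, the FPR at a fixed TPR, follows the same pattern. A minimal sketch reusing the example object from above (the argument values are chosen only for illustration):

# Get FPR at a TPR of 90%, with a 90% confidence interval
perf(example, metric = "fpr", tpr = 0.9, conf.level = 0.9)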
