AUC-package: Threshold independent performance measures for probabilistic...


Description

Summary and plotting functions for threshold independent performance measures for probabilistic classifiers.

Details

This package includes functions to compute the area under the curve (function auc) of selected measures: the area under the sensitivity curve (AUSEC) (function sensitivity), the area under the specificity curve (AUSPC) (function specificity), the area under the accuracy curve (AUACC) (function accuracy), and the area under the receiver operating characteristic curve (AUROC) (function roc). The curves can also be visualized using the function plot. Support for partial areas is provided.
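A minimal sketch of how the partial-area support might be used, assuming that auc accepts min and max arguments that restrict the portion of the curve being integrated (check ?auc in your installed version to confirm the exact argument names and semantics):

```r
# Sketch only: assumes auc() takes min/max bounds for partial areas.
library(AUC)
data(churn)

# Full area under the ROC curve
auc(roc(churn$predictions, churn$labels))

# Partial area, restricted to the interval [0, 0.2]
# (hypothetical call; verify against the package help page)
auc(roc(churn$predictions, churn$labels), min = 0, max = 0.2)
```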

Auxiliary code in this package is adapted from the ROCR package. Except for the AUROC, the measures available in this package are not available in the ROCR package, and vice versa. For the AUROC, we adapted the ROCR code to increase computational speed (so it can be used more effectively in objective functions). As a result, less functionality is offered (e.g., averaging over cross-validation runs); please use the ROCR package for those purposes.

Author(s)

Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]

References

Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.

See Also

sensitivity, specificity, accuracy, roc, auc, plot

Examples

data(churn)

auc(sensitivity(churn$predictions, churn$labels))
auc(specificity(churn$predictions, churn$labels))
auc(accuracy(churn$predictions, churn$labels))
auc(roc(churn$predictions, churn$labels))

plot(sensitivity(churn$predictions, churn$labels))
plot(specificity(churn$predictions, churn$labels))
plot(accuracy(churn$predictions, churn$labels))
plot(roc(churn$predictions, churn$labels))

AUC documentation built on May 29, 2017, 2:14 p.m.