Summary and plotting functions for threshold-independent performance measures for probabilistic classifiers.

This package includes functions to compute the area under the curve (function `auc`) of selected measures: the area under the sensitivity curve (AUSEC) (function `sensitivity`), the area under the specificity curve (AUSPC) (function `specificity`), the area under the accuracy curve (AUACC) (function `accuracy`), and the area under the receiver operating characteristic curve (AUROC) (function `roc`). The curves can also be visualized using the function `plot`. Support for partial areas is provided.

Auxiliary code in this package is adapted from the `ROCR` package. The measures available in this package are not available in the `ROCR` package, and vice versa (except for the AUROC). As for the AUROC, we adapted the `ROCR` code to increase computational speed (so it can be used more effectively in objective functions). As a result, less functionality is offered (e.g., averaging cross-validation runs). Please use the `ROCR` package for those purposes.

Michel Ballings and Dirk Van den Poel, Maintainer: Michel.Ballings@UGent.be

Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.

`sensitivity`, `specificity`, `accuracy`, `roc`, `auc`, `plot`

```r
library(AUC)  # provides auc(), sensitivity(), specificity(), accuracy(), roc()
data(churn)

# Area under each curve
auc(sensitivity(churn$predictions, churn$labels))
auc(specificity(churn$predictions, churn$labels))
auc(accuracy(churn$predictions, churn$labels))
auc(roc(churn$predictions, churn$labels))

# Visualize each curve
plot(sensitivity(churn$predictions, churn$labels))
plot(specificity(churn$predictions, churn$labels))
plot(accuracy(churn$predictions, churn$labels))
plot(roc(churn$predictions, churn$labels))
```
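A partial area restricts the integration to a sub-interval of the curve's x-axis (for the ROC curve, a range of false positive rates). The helper below is a hypothetical base-R sketch of that idea, not the package's API: it clips a piecewise-linear curve to the interval and integrates the clipped segment. The function name `partial_auc` and the example curve are assumptions for illustration; consult `?auc` for how the package itself exposes partial areas.

```r
# Hypothetical helper: trapezoidal area of a piecewise-linear curve (x, y),
# clipped to the interval [lo, hi] on the x-axis.
partial_auc <- function(x, y, lo = 0, hi = 1) {
  # Interpolate the curve, resolving vertical segments (duplicate x) upward
  f <- approxfun(x, y, ties = max)
  # Evaluation grid: the clip bounds plus every curve point inside them
  xs <- sort(unique(c(lo, hi, x[x >= lo & x <= hi])))
  ys <- f(xs)
  sum(diff(xs) * (head(ys, -1) + tail(ys, -1)) / 2)
}

# Example ROC curve as (FPR, TPR) points (illustrative data)
fpr <- c(0, 0, 0.25, 0.5, 1)
tpr <- c(0, 0.75, 0.75, 1, 1)

partial_auc(fpr, tpr, 0, 0.2)  # area over false positive rates in [0, 0.2]
```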
