ROC graphs, sensitivity/specificity curves, lift charts, and precision/recall plots are popular examples of trade-off visualizations for specific pairs of performance measures. ROCR is a flexible tool for creating cutoff-parameterized 2D performance curves by freely combining any two of over 25 performance measures (new performance measures can be added through a standard interface). Curves from different cross-validation or bootstrapping runs can be averaged by several methods, and standard deviations, standard errors, or box plots can be used to visualize the variability across runs. The parameterization can be visualized by printing cutoff values at the corresponding curve positions, or by coloring the curve according to cutoff. All components of a performance plot can be quickly adjusted using a flexible parameter dispatching mechanism. Despite its flexibility, ROCR is easy to use, with only three commands and reasonable default values for all optional parameters.
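The three-command workflow (create a prediction object, create a performance object, plot it) can be sketched as follows, using the bundled `ROCR.simple` data set; `colorize = TRUE` colors the curve by cutoff value as described above:

```r
library(ROCR)

# ROCR.simple ships with the package: numeric prediction scores
# and the corresponding binary class labels.
data(ROCR.simple)

# 1. Pair predictions with true labels.
pred <- prediction(ROCR.simple$predictions, ROCR.simple$labels)

# 2. Combine two performance measures: true positive rate vs.
#    false positive rate, i.e. an ROC curve.
perf <- performance(pred, "tpr", "fpr")

# 3. Plot, coloring the curve according to the cutoff parameter.
plot(perf, colorize = TRUE)
```

Any other pair of supported measures (e.g. `"prec"` and `"rec"` for a precision/recall plot) can be substituted in step 2.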
| Field | Value |
|---|---|
| Author | Tobias Sing, Oliver Sander, Niko Beerenwinkel, Thomas Lengauer |
| Date of publication | 2015-03-26 17:12:17 |
| Maintainer | Tobias Sing <email@example.com> |
| License | GPL (>= 2) |
performance: Function to create performance objects
performance-class: Class "performance"
plot-methods: Plot method for performance objects
prediction: Function to create prediction objects
prediction-class: Class "prediction"
ROCR.hiv: Data set: Support vector machines and neural networks applied...
ROCR.simple: Data set: Simple artificial prediction data for use with ROCR
ROCR.xval: Data set: Artificial cross-validation data for use with ROCR
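Run averaging, as described in the overview, works by passing lists of per-run predictions and labels to `prediction()`; a sketch using the bundled `ROCR.xval` cross-validation data set (the `avg` and `spread.estimate` plot parameters select vertical averaging and standard-error bars):

```r
library(ROCR)

# ROCR.xval contains predictions and labels from 10
# cross-validation runs, stored as parallel lists.
data(ROCR.xval)

# Passing lists creates one curve per run in a single object.
pred <- prediction(ROCR.xval$predictions, ROCR.xval$labels)
perf <- performance(pred, "tpr", "fpr")

# Average the 10 ROC curves vertically and show the
# standard error across runs at each position.
plot(perf, avg = "vertical", spread.estimate = "stderror")
```

Other spread visualizations mentioned above (standard deviation, box plots) are selected the same way via `spread.estimate`.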