ROC-package: Compute performance measures for two-class classifiers for Receiver Operating Characteristics analysis

ROC-package    R Documentation

Compute performance measures for two-class classifiers for Receiver Operating Characteristics analysis.

Description

This package computes various performance measures for two-class classifiers. The most important is the Receiver Operating Characteristic (ROC) curve, which shows the trade-off between false positives and false negatives as an implicit threshold is varied. It also computes the ROC convex hull, which is relevant to minimum-cost detection thresholds and optimal score-to-likelihood-ratio transformations. Standard plotting routines include ROC, Detection Error Trade-off (DET), Applied Probability of Error (APE), Normalized Bayes Error rate (NBE), ideal score-to-Log-Likelihood-Ratio (LLR), double-density and Tippett plots. The data can be stored in a data frame with records for individual trials, so that additional conditions for each trial can be represented easily; this makes per-condition analysis very easy. Summary statistics include the Equal Error Rate (convex hull), the Cost of the Log-Likelihood Ratio (Cllr), and the minimum Cllr. Score data can be numeric (continuous or discrete) or ordered factors.
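
As an illustration of the Cllr measure mentioned above, the sketch below computes it directly from vectors of target and non-target log-likelihood-ratio scores, following the standard definition of Brümmer and du Preez (reference 2); it is only a sketch, not the package's own implementation.

## Illustrative Cllr computation from target and non-target LLR scores
## (a sketch of the standard definition, not the package's code path)
cllr <- function(tar, non)
  (mean(log2(1 + exp(-tar))) + mean(log2(1 + exp(non)))) / 2
## synthetic, roughly calibrated scores give a Cllr well below 1
set.seed(1)
cllr(tar = rnorm(1000, mean = 2), non = rnorm(1000, mean = -2))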

Details

Package: ROC
Type: Package
Version: 0.10
Date: 2014-9-1
License: GPL-2
LazyLoad: yes

The package provides tools for computing evaluation metrics on standard trial sets in speaker recognition. With read.tnt, target and non-target scores can be read into a cst structure (a collection of supervised trials). Then roc computes the performance measures, displays basic summary statistics, and prepares a structure for plotting. det.plot will make a DET plot.
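
The same workflow expressed as a minimal sketch; the file names are hypothetical and the exact arguments of read.tnt are assumed from the description above, not taken from the package source.

## minimal sketch of the workflow; file names are hypothetical and the
## read.tnt arguments are assumed from the description above
x <- read.tnt("target.scores", "nontarget.scores")  # cst: collection of supervised trials
r <- roc(x)     # compute the performance measures and prepare for plotting
summary(r)      # basic summary statistics such as the EER and Cllr
det.plot(r)     # DET plot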

Author(s)

David A. van Leeuwen.

Maintainer: <david.vanleeuwen@gmail.com>

References

  1. Alvin Martin et al., “The DET Curve in Assessment of Detection Task Performance,” Proc. Eurospeech, 1895–1898 (1997).

  2. Niko Brümmer and Johan du Preez, “Application-independent evaluation of speaker detection,” Computer Speech and Language 20, 230–275 (2006).

  3. David van Leeuwen and Niko Brümmer, “An Introduction to Application-Independent Evaluation of Speaker Recognition Systems,” LNCS 4343 (2007).

  4. Foster Provost and Tom Fawcett, “Analysis and Visualization of Classifier Performance: Comparison under Imprecise Class and Cost Distributions,” Proc. Third International Conference on Knowledge Discovery and Data Mining (1997).

See Also

cst.tnt, roc, roc.plot, det.plot.

Examples

## RU submission to EVALITA speaker recognition applications track
data(ru.2009)
## inspect details of data frame
ru.2009[1,]
## look at TC6 train condition and TS2 test condition (easiest task:-)
x <- subset(ru.2009, mcond=="TC6" & tcond=="TS2")
## compute det statistics
r <- roc(x)
summary(r)
## and plot results
plot(r, main="RU TC6 TS2 primary submission EVALITA 2009")
det.plot(r, main="RU TC6 TS2 primary submission EVALITA 2009")
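
The per-condition analysis mentioned in the Description can be sketched by looping over the conditions in the data frame; this assumes only the mcond and tcond columns shown above.

## hedged sketch: per-condition analysis by looping over test conditions
for (tc in unique(ru.2009$tcond)) {
  r <- roc(subset(ru.2009, mcond == "TC6" & tcond == tc))
  print(summary(r))
}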
