auc: Area Under the ROC Curve

View source: R/binary_auc.R


Area Under the ROC Curve

Description

Measure to compare true observed labels with predicted probabilities in binary classification tasks.

Usage

auc(truth, prob, positive, na_value = NaN, ...)

Arguments

truth

(factor())
True (observed) labels. Must have exactly the same two levels and the same length as prob.

prob

(numeric())
Predicted probability for the positive class. Must have exactly the same length as truth.

positive

(character(1))
Name of the positive class.

na_value

(numeric(1))
Value that should be returned if the measure is not defined for the input (see Details). Default is NaN.

...

(any)
Additional arguments. Currently ignored.

Details

Computes the area under the Receiver Operating Characteristic (ROC) curve. The AUC can be interpreted as the probability that a randomly chosen positive observation has a higher predicted probability than a randomly chosen negative observation.

This measure is undefined if the true values are either all positive or all negative.
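
This pairwise interpretation can be illustrated directly in base R. The following sketch is not the package's implementation; it simply counts, over all (positive, negative) pairs, how often the positive observation receives the higher predicted probability, counting ties as 0.5 (the helper objects pos, neg are introduced only for this illustration):

truth = factor(c("a", "a", "a", "b"))
prob = c(0.6, 0.7, 0.1, 0.4)
pos = prob[truth == "a"]   # scores of positive observations
neg = prob[truth != "a"]   # scores of negative observations
mean(outer(pos, neg, ">") + 0.5 * outer(pos, neg, "=="))  # fraction of correctly ranked pairs

For this data the result (2/3) should match auc(truth, prob, "a").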

Value

Performance value as numeric(1).

Meta Information

  • Type: "binary"

  • Range: [0, 1]

  • Minimize: FALSE

  • Required prediction: prob

References

Youden WJ (1950). “Index for rating diagnostic tests.” Cancer, 3(1), 32–35. doi:10.1002/1097-0142(1950)3:1<32::aid-cncr2820030106>3.0.co;2-3.

See Also

Other Binary Classification Measures: bbrier(), dor(), fbeta(), fdr(), fn(), fnr(), fomr(), fp(), fpr(), gmean(), gpr(), npv(), ppv(), prauc(), tn(), tnr(), tp(), tpr()

Examples

truth = factor(c("a", "a", "a", "b"))  # observed labels; "a" is the positive class
prob = c(0.6, 0.7, 0.1, 0.4)           # predicted probabilities for class "a"
auc(truth, prob, "a")
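
As a further illustration (not part of the shipped examples), the undefined case described in Details can be triggered with a two-level factor in which only one class is actually observed; the measure is then not defined and na_value (NaN by default) is returned:

truth_one_class = factor(c("a", "a"), levels = c("a", "b"))
auc(truth_one_class, c(0.2, 0.9), "a")  # all observations positive: returns NaN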
