# confusion_measures: Calculate Confusion Measures In mllg/mlr3: Machine Learning in R - Next Generation

## Description

Based on a 2x2 confusion matrix for a binary classification problem, this function calculates various performance measures. The following measures, based on https://en.wikipedia.org/wiki/Template:DiagnosticTesting_Diagram, are implemented:

• `"tp"`: True Positives.

• `"fn"`: False Negatives.

• `"fp"`: False Positives.

• `"tn"`: True Negatives.

• `"tpr"`: True Positive Rate.

• `"fnr"`: False Negative Rate.

• `"fpr"`: False Positive Rate.

• `"tnr"`: True Negative Rate.

• `"ppv"`: Positive Predictive Value.

• `"fdr"`: False Discovery Rate.

• `"for"`: False Omission Rate.

• `"npv"`: Negative Predictive Value.

• `"dor"`: Diagnostic Odds Ratio.

• `"f1"`: F1 Measure.

• `"precision"`: Alias for `"ppv"`.

• `"recall"`: Alias for `"tpr"`.

• `"sensitivity"`: Alias for `"tpr"`.

• `"specificity"`: Alias for `"tnr"`.

If the denominator is 0, the returned score is `NA`.
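The measures above can be sketched in base R directly from the four cells of the confusion matrix. This is an illustrative sketch, not the package's implementation; it assumes the mlr3 layout (truth in columns, predicted response in rows, positive class first) and mirrors the documented `NA`-on-zero-denominator behavior via a hypothetical helper `div()`:

```r
# Example 2x2 confusion matrix: truth in columns, response in rows,
# positive class first (as in mlr3's PredictionClassif$confusion).
m <- matrix(c(20, 10,    # predicted positive: tp = 20, fp = 10
               5, 65),   # predicted negative: fn = 5,  tn = 65
            nrow = 2, byrow = TRUE,
            dimnames = list(response = c("pos", "neg"),
                            truth    = c("pos", "neg")))

tp <- m[1, 1]; fp <- m[1, 2]
fn <- m[2, 1]; tn <- m[2, 2]

# Hypothetical helper: return NA when the denominator is 0,
# matching the behavior documented above.
div <- function(a, b) if (b == 0) NA_real_ else a / b

tpr <- div(tp, tp + fn)              # true positive rate (recall, sensitivity)
tnr <- div(tn, tn + fp)              # true negative rate (specificity)
ppv <- div(tp, tp + fp)              # positive predictive value (precision)
npv <- div(tn, tn + fn)              # negative predictive value
f1  <- div(2 * tp, 2 * tp + fp + fn) # F1 measure
dor <- div(tp * tn, fp * fn)         # diagnostic odds ratio

c(tpr = tpr, tnr = tnr, ppv = ppv, npv = npv, f1 = f1, dor = dor)
```

For the matrix above this yields `tpr = 0.8`, `ppv = 20/30`, and `dor = 26`; a degenerate matrix with, say, no predicted positives would give `ppv = NA`.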

## Usage

```r
confusion_measures(m, type = NULL)
```

## Arguments

• `m` :: `matrix()`
  Confusion matrix, e.g. as returned by field `confusion` of PredictionClassif. Truth is in columns, predicted response is in rows.

• `type` :: `character()`
  Selects the measures to calculate. See Description for possible values.

## Value

A named `numeric()` of confusion measures.

## Examples

```r
task = tsk("german_credit")
learner = lrn("classif.rpart")
p = learner$train(task)$predict(task)
round(confusion_measures(p$confusion), 2)
```

mllg/mlr3 documentation built on Sept. 27, 2019, 9:38 a.m.