View source: R/MeasureClassifConfusion.R
Based on a 2x2 confusion matrix for binary classification problems, this function calculates various performance measures. The following measures, based on https://en.wikipedia.org/wiki/Template:DiagnosticTesting_Diagram, are implemented:
"tp": True Positives.

"fn": False Negatives.

"fp": False Positives.

"tn": True Negatives.

"tpr": True Positive Rate.

"fnr": False Negative Rate.

"fpr": False Positive Rate.

"tnr": True Negative Rate.

"ppv": Positive Predictive Value.

"fdr": False Discovery Rate.

"for": False Omission Rate.

"npv": Negative Predictive Value.

"dor": Diagnostic Odds Ratio.

"f1": F1 Measure.

"precision": Alias for "ppv".

"recall": Alias for "tpr".

"sensitivity": Alias for "tpr".

"specificity": Alias for "tnr".

If the denominator is 0, the returned score is NA.
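For illustration, the rate-based measures can be computed by hand from the four cell counts. This is a minimal sketch using made-up counts, not the package implementation:

```r
# Hypothetical cell counts of a 2x2 confusion matrix
# (positive class: tp/fp/fn/tn as defined above).
tp = 20; fp = 5
fn = 10; tn = 65

tpr = tp / (tp + fn)                          # True Positive Rate (recall/sensitivity)
tnr = tn / (tn + fp)                          # True Negative Rate (specificity)
ppv = tp / (tp + fp)                          # Positive Predictive Value (precision)
f1  = 2 * ppv * tpr / (ppv + tpr)             # F1: harmonic mean of precision and recall
dor = (tpr / (1 - tnr)) / ((1 - tpr) / tnr)   # Diagnostic Odds Ratio = (tp*tn)/(fp*fn)

c(tpr = tpr, tnr = tnr, ppv = ppv, f1 = f1, dor = dor)
```

Note that a zero denominator (e.g. tp + fn == 0 for "tpr") would yield NA in `confusion_measures()`, whereas plain division as above would return NaN or Inf.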
Usage

confusion_measures(m, type = NULL)
Arguments

m :: (matrix()) Confusion matrix.

type :: (character()) Measures to compute; if NULL (default), all measures are computed.
Value

Named numeric() of confusion measures.
Examples

task = tsk("german_credit")
learner = lrn("classif.rpart")
p = learner$train(task)$predict(task)
round(confusion_measures(p$confusion), 2)
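Continuing the example above, a subset of measures can be requested via the type argument; this sketch assumes type accepts a character vector of the measure IDs listed in the Description:

```r
# Compute only selected measures from the prediction's confusion matrix
confusion_measures(p$confusion, type = c("tpr", "tnr", "ppv", "npv"))
```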