evaluation_metrics | R Documentation

Description
Evaluate the performance of a classification model by comparing its predicted labels to the true labels. Various metrics are returned to give insight into how well the model classifies the observations. This function is provided to aid evaluation of outlier detection by MCNM and MtM when the true outliers are known in advance.

Usage
evaluation_metrics(true_labels, pred_labels)
Arguments

true_labels |
A 0-1 or logical vector denoting the true labels. The meaning of 0 and 1 (or TRUE and FALSE) is up to the user. |
pred_labels |
A 0-1 or logical vector denoting the predicted labels. The meaning of 0 and 1 (or TRUE and FALSE) is up to the user. |
Value

A list with the following slots (a sketch of how the rates are derived from the counts follows the list):
matr |
The confusion matrix built upon true labels and predicted labels. |
TN |
True negative. |
FP |
False positive (type I error). |
FN |
False negative (type II error). |
TP |
True positive. |
TPR |
True positive rate (sensitivity). |
FPR |
False positive rate. |
TNR |
True negative rate (specificity). |
FNR |
False negative rate. |
precision |
Precision or positive predictive value (PPV). |
accuracy |
Accuracy. |
error_rate |
Error rate. |
FDR |
False discovery rate. |
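The rates above follow the standard confusion-matrix definitions. As a minimal sketch (not the package's internal code; the vectors true and pred here are made-up illustrations), the slots relate to the four counts as follows, assuming the positive class is coded as 1/TRUE:

true <- c(1, 0, 0, 1, 1, 0)
pred <- c(1, 1, 0, 0, 1, 0)

TP <- sum(true == 1 & pred == 1)   # true positives
TN <- sum(true == 0 & pred == 0)   # true negatives
FP <- sum(true == 0 & pred == 1)   # false positives (type I error)
FN <- sum(true == 1 & pred == 0)   # false negatives (type II error)

TPR <- TP / (TP + FN)              # sensitivity
TNR <- TN / (TN + FP)              # specificity
FPR <- FP / (FP + TN)              # 1 - specificity
FNR <- FN / (FN + TP)              # 1 - sensitivity
precision  <- TP / (TP + FP)       # positive predictive value (PPV)
FDR        <- FP / (TP + FP)       # 1 - precision
accuracy   <- (TP + TN) / length(true)
error_rate <- 1 - accuracy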
Examples

#++++ Inputs are 0-1 vectors ++++#
evaluation_metrics(
true_labels = c(1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1),
pred_labels = c(1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1)
)
#++++ Inputs are logical vectors ++++#
evaluation_metrics(
true_labels = c(TRUE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, FALSE, FALSE),
pred_labels = c(FALSE, FALSE, TRUE, FALSE, TRUE, FALSE, FALSE, TRUE, FALSE, FALSE)
)
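Since the function returns a list, individual metrics can be extracted by the slot names documented above, for example:

res <- evaluation_metrics(
  true_labels = c(1, 0, 0, 0, 1),
  pred_labels = c(1, 0, 1, 0, 1)
)
res$matr   # confusion matrix
res$TPR    # sensitivity
res$FDR    # false discovery rate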