evaluation_metrics: Binary Classification Evaluation

View source: R/utils.R

evaluation_metrics    R Documentation

Binary Classification Evaluation

Description

Evaluate the performance of a binary classification model by comparing its predicted labels to the true labels. Various metrics are returned to give insight into how well the model classifies the observations. This function is provided to aid outlier-detection evaluation for MCNM and MtM when the true outliers are known in advance.

Usage

evaluation_metrics(true_labels, pred_labels)

Arguments

true_labels

A 0-1 or logical vector denoting the true labels. The meaning of 0 and 1 (or TRUE and FALSE) is up to the user.

pred_labels

A 0-1 or logical vector denoting the predicted labels, coded the same way as true_labels.

Value

A list with the following slots:

matr

The confusion matrix built upon true labels and predicted labels.

TN

True negative.

FP

False positive (type I error).

FN

False negative (type II error).

TP

True positive.

TPR

True positive rate (sensitivity).

FPR

False positive rate.

TNR

True negative rate (specificity).

FNR

False negative rate.

precision

Precision or positive predictive value (PPV).

accuracy

Accuracy.

error_rate

Error rate.

FDR

False discovery rate.
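The rate metrics above follow the standard definitions built from the four confusion-matrix counts. As an illustrative sketch (base R only, not the package's internal implementation), they can be computed from 0-1 vectors like so:

```r
# Standard confusion-matrix metrics from 0-1 label vectors.
true_labels <- c(1, 0, 0, 1, 1, 0)
pred_labels <- c(1, 0, 1, 1, 0, 0)

TP <- sum(true_labels == 1 & pred_labels == 1)  # true positives
TN <- sum(true_labels == 0 & pred_labels == 0)  # true negatives
FP <- sum(true_labels == 0 & pred_labels == 1)  # false positives (type I)
FN <- sum(true_labels == 1 & pred_labels == 0)  # false negatives (type II)

TPR <- TP / (TP + FN)                 # sensitivity
TNR <- TN / (TN + FP)                 # specificity
FPR <- FP / (FP + TN)
FNR <- FN / (FN + TP)
precision  <- TP / (TP + FP)          # positive predictive value
accuracy   <- (TP + TN) / length(true_labels)
error_rate <- 1 - accuracy
FDR        <- FP / (FP + TP)          # false discovery rate
```

Note that TPR + FNR = 1 and TNR + FPR = 1, so the complementary rates are redundant but returned for convenience.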

Examples


#++++ Inputs are 0-1 vectors ++++#

evaluation_metrics(
  true_labels = c(1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1),
  pred_labels = c(1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1)
)

#++++ Inputs are logical vectors ++++#

evaluation_metrics(
  true_labels = c(TRUE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE, FALSE, FALSE),
  pred_labels = c(FALSE, FALSE, TRUE, FALSE, TRUE, FALSE, FALSE, TRUE, FALSE, FALSE)
)


MixtureMissing documentation built on June 24, 2024, 5:17 p.m.