evaluation: Evaluation of prediction performance

View source: R/evaluation.R


Evaluation of prediction performance

Description

Evaluation of prediction performance on the out-of-bag (OOB) set is done using various measures for classification problems.

Usage

evaluation(x, y, plot = FALSE)

Arguments

x

true class labels

y

predicted class labels

plot

logical; if TRUE, a discrimination plot and a reliability plot are shown for each class

Details

The currently supported evaluation measures include discriminatory measures such as log loss, AUC, and PDI; macro-averaged PPV (Precision), Sensitivity (Recall), and F1-score; accuracy (equivalent to micro-averaged PPV/Sensitivity/F1-score); Matthews Correlation Coefficient (and its micro-averaged analog); Kappa; G-mean; and class-specific accuracy/PPV/NPV/Sensitivity/Specificity/F1-score/MCC/Kappa/G-mean.
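
For intuition, the macro-averaged measures are per-class measures averaged over classes with equal weight. A minimal sketch (not the package's implementation; the helper macro_prf is hypothetical) of macro-averaged PPV, Sensitivity, and F1-score computed from a confusion matrix:

macro_prf <- function(truth, pred) {
  cm <- table(truth, pred)              # rows = true classes, columns = predicted classes
  tp <- diag(cm)                        # true positives per class
  ppv <- tp / colSums(cm)               # PPV (precision) per class
  sens <- tp / rowSums(cm)              # sensitivity (recall) per class
  f1 <- 2 * ppv * sens / (ppv + sens)   # F1-score per class
  c(PPV = mean(ppv, na.rm = TRUE),      # macro-average: unweighted mean over classes
    Sensitivity = mean(sens, na.rm = TRUE),
    F1 = mean(f1, na.rm = TRUE))
}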

Value

A list with one element per evaluation measure, except for the cs element, which is itself a list of class-specific evaluation measures.
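
For example, assuming ev holds the result of a call to evaluation(), its structure can be inspected as follows (only the cs element name is taken from this page; str() is base R):

ev <- evaluation(x, y)      # x: true class labels, y: predicted class labels
str(ev, max.level = 1)      # one element per evaluation measure
str(ev$cs)                  # nested list of class-specific measures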

Author(s)

Derek Chiu

Examples

data(hgsc)                                     # example dataset shipped with the package
class <- factor(attr(hgsc, "class.true"))      # true class labels stored as an attribute
set.seed(1)
training.id <- sample(seq_along(class), replace = TRUE)   # bootstrap sample of indices
test.id <- which(!seq_along(class) %in% training.id)      # out-of-bag (OOB) indices
mod <- classification(hgsc[training.id, ], class[training.id], "xgboost")  # fit an xgboost classifier
pred <- prediction(mod, hgsc, class, test.id)              # predict on the OOB samples
evaluation(class[test.id], pred)                           # evaluate OOB predictions
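
As a follow-up to the example above (a sketch; output depends on the fitted model), the class-specific measures can be pulled from the cs element, and setting plot = TRUE additionally draws the discrimination and reliability plots described under Arguments:

ev <- evaluation(class[test.id], pred)           # store the full list of measures
ev$cs                                            # class-specific measures only
evaluation(class[test.id], pred, plot = TRUE)    # also show per-class plots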
