evaluation: Evaluation of prediction performance


View source: R/evaluation.R

Description

Evaluation of prediction performance on the out-of-bag (OOB) set is done using various measures for classification problems.

Usage

evaluation(x, y, plot = FALSE)

Arguments

x

actual class labels

y

predicted class labels

plot

logical; if TRUE, a discrimination plot and a reliability plot are shown for each class

Details

The currently supported evaluation measures include discriminatory measures such as log loss, AUC, and PDI; macro-averaged PPV (Precision), Sensitivity (Recall), and F1-score; accuracy (equivalent to micro-averaged PPV/Sensitivity/F1-score); the Matthews Correlation Coefficient (and its micro-averaged analogue); and class-specific PPV, Sensitivity, F1-score, and MCC.
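
For reference, the macro-averaged F1-score averages per-class F1 with equal weight over classes. Below is a minimal base R sketch of that definition, not splendid's implementation; macro_f1 is a hypothetical helper written for this illustration:

macro_f1 <- function(actual, predicted) {
  lv <- levels(factor(actual))
  cm <- table(factor(actual, levels = lv), factor(predicted, levels = lv))
  precision <- diag(cm) / colSums(cm)  # PPV: TP / (TP + FP), per class
  recall    <- diag(cm) / rowSums(cm)  # Sensitivity: TP / (TP + FN), per class
  f1 <- 2 * precision * recall / (precision + recall)
  mean(f1, na.rm = TRUE)               # unweighted average over classes
}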

Value

A list with one element per evaluation measure, except for the cs element, which contains a list of class-specific evaluation measures.
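
For example, the class-specific measures can be inspected separately from the overall ones. A usage sketch reusing the objects from the Examples section, assuming the result is stored as ev:

ev <- evaluation(class[test.id], pred)
ev$cs  # list of class-specific PPV/Sensitivity/F1-score/MCC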

Author(s)

Derek Chiu

Examples

## Load the hgsc example data and extract the true class labels
data(hgsc)
class <- factor(attr(hgsc, "class.true"))

## Bootstrap a training set; the out-of-bag (OOB) samples form the test set
set.seed(1)
training.id <- sample(seq_along(class), replace = TRUE)
test.id <- which(!seq_along(class) %in% training.id)

## Fit an xgboost classifier, predict on the test set, and evaluate
mod <- classification(hgsc[training.id, ], class[training.id], "xgboost")
pred <- prediction(mod, hgsc, class, test.id)
evaluation(class[test.id], pred)
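
## Setting plot = TRUE additionally shows a discrimination plot and a
## reliability plot for each class (same fitted objects as above)
evaluation(class[test.id], pred, plot = TRUE)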
