evaluation    R Documentation

Description

Evaluation of prediction performance on the out-of-bag (OOB) set is done using various measures for classification problems.

Usage
evaluation(x, y, plot = FALSE)
Arguments

x       true class labels
y       predicted class labels
plot    logical; if TRUE, plots are generated
Details

The currently supported evaluation measures include discriminatory measures such as log loss, AUC, and PDI; macro-averaged PPV (Precision), Sensitivity (Recall), and F1-score; accuracy (identical to micro-averaged PPV/Sensitivity/F1-score); the Matthews Correlation Coefficient (and its micro-averaged analog); Kappa; G-mean; and class-specific accuracy/PPV/NPV/Sensitivity/Specificity/F1-score/MCC/Kappa/G-mean.
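For intuition, the following is a minimal base R sketch of how the macro-averaged measures and accuracy relate, using made-up labels; it illustrates the definitions only and is not the package's internal implementation:

# Toy data for illustration (not from the package)
truth     <- factor(c("a", "a", "b", "b", "c", "c"))
predicted <- factor(c("a", "b", "b", "b", "c", "a"), levels = levels(truth))
cm <- table(truth, predicted)

ppv  <- diag(cm) / colSums(cm)  # per-class PPV (Precision)
sens <- diag(cm) / rowSums(cm)  # per-class Sensitivity (Recall)
f1   <- 2 * ppv * sens / (ppv + sens)

# Macro-averaging weights every class equally
c(PPV = mean(ppv), Sensitivity = mean(sens), F1 = mean(f1))

# Accuracy, i.e. the micro-averaged PPV/Sensitivity/F1
sum(diag(cm)) / sum(cm)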
Value

A list with one element per evaluation measure, except for the cs element, which returns a list of class-specific evaluation measures.
Author(s)

Derek Chiu
Examples

data(hgsc)
class <- factor(attr(hgsc, "class.true"))
set.seed(1)
# Bootstrap sample for training; the out-of-bag (OOB) samples form the test set
training.id <- sample(seq_along(class), replace = TRUE)
test.id <- which(!seq_along(class) %in% training.id)
mod <- classification(hgsc[training.id, ], class[training.id], "xgboost")
pred <- prediction(mod, hgsc, class, test.id)
evaluation(class[test.id], pred)
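The returned list can be inspected directly. A minimal sketch continuing the example above; only the cs element name is documented, so use str() to discover the remaining element names:

ev <- evaluation(class[test.id], pred)
str(ev, max.level = 1)  # one element per evaluation measure
ev$cs                   # list of class-specific evaluation measures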