Description
Classification evaluation scores (F1, Cohen's kappa, Krippendorff's alpha) comparing two label vectors, prediction and truth. Micro-average: pool TP, FP, and FN over all category decisions first, then compute F1 from the pooled counts. Macro-average: compute F1 for each individual category first, then average the per-category scores.
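The following minimal sketch illustrates the two averaging schemes under the definitions above. It is an illustration only; the helper name micro_macro_f1 is hypothetical and not part of the package.

# Hypothetical helper illustrating micro vs. macro F1 (not tmca_fscore internals)
micro_macro_f1 <- function(prediction, truth) {
  classes <- union(levels(factor(truth)), levels(factor(prediction)))
  tp <- fp <- fn <- numeric(length(classes))
  for (i in seq_along(classes)) {
    cl <- classes[i]
    tp[i] <- sum(prediction == cl & truth == cl)  # true positives for class cl
    fp[i] <- sum(prediction == cl & truth != cl)  # false positives for class cl
    fn[i] <- sum(prediction != cl & truth == cl)  # false negatives for class cl
  }
  f1_per_class <- 2 * tp / (2 * tp + fp + fn)
  micro <- 2 * sum(tp) / (2 * sum(tp) + sum(fp) + sum(fn))  # pool counts first, then F1
  macro <- mean(f1_per_class, na.rm = TRUE)                 # F1 per class first, then mean
  c(micro_f1 = micro, macro_f1 = macro)
}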
Usage

tmca_fscore(prediction, truth, positive_class = NULL, evaluate_irr = TRUE)
Arguments

prediction: vector (factor) of predicted labels

truth: vector (factor) of true labels

positive_class: label (level) of the positive class; if not given, the minority class in the true labels is assumed to be positive (see the one-line sketch after this list)

evaluate_irr: if TRUE, compute alpha and kappa agreement statistics (requires the irr package)
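The documented minority-class default for positive_class can be reproduced in base R as below; whether tmca_fscore uses exactly this expression internally is an assumption.

# Minority class of the true labels (assumed behavior of the positive_class default)
names(which.min(table(truth)))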
Value

Evaluation metrics: Precision, Recall, Specificity, Accuracy, F1-score, Alpha, Kappa
Examples

truth <- factor(c("P", "N", "N", "N"))
prediction <- factor(c("P", "P", "P", "N"))
tmca_fscore(prediction, truth, positive_class = "P")
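When evaluate_irr = TRUE, the kappa and alpha values come from the irr package. The sketch below shows how such agreement statistics can be computed directly with irr on the same data; the exact calls used inside tmca_fscore are an assumption.

library(irr)  # install.packages("irr") if missing

truth      <- factor(c("P", "N", "N", "N"))
prediction <- factor(c("P", "P", "P", "N"))

# Cohen's kappa: irr::kappa2 expects an n x 2 table, one column per rater
kappa2(data.frame(prediction, truth))

# Krippendorff's alpha: irr::kripp.alpha expects raters as rows,
# with categories coded consistently across both raters
lv    <- union(levels(prediction), levels(truth))
codes <- rbind(as.integer(factor(prediction, levels = lv)),
               as.integer(factor(truth, levels = lv)))
kripp.alpha(codes, method = "nominal")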