accuracy    R Documentation
Classification accuracy
accuracy(
actuals,
preds,
type = c("standard", "misclass", "tpr", "tnr", "ppv", "npv", "fnr", "fpr", "fdr",
"for", "lrplus", "lrminus", "dor", "ts", "f1", "mcc", "fm", "kappa"),
compound = FALSE,
na.rm = FALSE
)
actuals
Numeric data (vector, array, matrix, data frame or list) of ground truth (actual) values.
preds
Numeric data (vector, array, matrix, data frame or list) of predicted values.
type
The type of accuracy metric to calculate; all types are derived from the confusion matrix.
compound
A logical value indicating whether the metric score is calculated for each label separately (default FALSE) or as a single compound score across all labels (TRUE).
na.rm
A logical value indicating whether actual and prediction pairs with at least one NA value should be ignored.
The following accuracy types are implemented:
Standard: Number of correct predictions / Total number of predictions
Misclassification error: Number of incorrect predictions / Total number of predictions
TPR (True Positive Rate), also sensitivity, recall or hit rate: TP / (TP + FN)
TNR (True Negative Rate), also specificity or selectivity: TN / (TN + FP)
PPV (Positive Predictive Value), also precision: TP / (TP + FP)
NPV (Negative Predictive Value): TN / (TN + FN)
FNR (False Negative Rate), also miss rate: FN / (FN + TP)
FPR (False Positive Rate), also fall-out: FP / (FP + TN)
FDR (False Discovery Rate): FP / (FP + TP)
FOR (False Omission Rate): FN / (FN + TN)
LR+ (Positive Likelihood Ratio): TPR / FPR
LR- (Negative Likelihood Ratio): FNR / TNR
DOR (Diagnostic Odds Ratio): LR+ / LR-
TS (Threat Score), also critical success index: TP / (TP + FN + FP)
F1 score: 2 * Precision * Recall / (Precision + Recall)
MCC (Matthews Correlation Coefficient), also phi coefficient: (TP * TN - FP * FN) / sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
FM (Fowlkes-Mallows index): sqrt((TP / (TP + FP)) * (TP / (TP + FN)))
Kappa statistic: (p0 - pe) / (1 - pe), where p0 is the observed agreement and pe is the agreement expected by chance.
Standard accuracy and misclassification error are mainly used for single-label classification problems, while the others can also be used for multi-label classification problems.
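The binary-case formulas above can be reproduced directly from a 2x2 confusion matrix. The following is a minimal sketch for illustration only, not the package's implementation; cm, TP, TN, FP, FN, p0 and pe are local helper names chosen here, not part of the package API:

acts <- c(1, 1, 1, 0, 0, 0, 1, 0)
prds <- c(1, 0, 1, 0, 1, 0, 1, 0)
# Rows are actuals, columns are predictions; level order puts TP at [1, 1]
cm <- table(factor(acts, levels = c(1, 0)),
            factor(prds, levels = c(1, 0)))
TP <- cm[1, 1]; FN <- cm[1, 2]
FP <- cm[2, 1]; TN <- cm[2, 2]
tpr <- TP / (TP + FN)                      # sensitivity / recall
ppv <- TP / (TP + FP)                      # precision
f1  <- 2 * ppv * tpr / (ppv + tpr)         # harmonic mean of precision and recall
mcc <- (TP * TN - FP * FN) /
  sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
n  <- sum(cm)
p0 <- (TP + TN) / n                        # observed agreement
pe <- ((TP + FP) * (TP + FN) + (FN + TN) * (FP + TN)) / n^2  # chance agreement
kappa <- (p0 - pe) / (1 - pe)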
Value: The type-specific accuracy score of a classification problem.
Other Metrics:
cross_entropy(),
dice(),
entropy(),
erf(),
erfc(),
erfcinv(),
erfinv(),
gini_impurity(),
huber_loss(),
iou(),
log_cosh_loss(),
mae(),
mape(),
mse(),
msle(),
quantile_loss(),
rmse(),
rmsle(),
rmspe(),
sse(),
stderror(),
vc(),
wape(),
wmape()
accuracy(actuals = c(rep("A", 6), rep("B", 6), rep("C", 6)),
preds = c(rep("A", 4), "B", "C", rep("B", 5), "A", rep("C", 6)),
type = "standard")
# preds does not cover all categories of actuals
accuracy(actuals = c(rep("A", 6), rep("B", 6), rep("C", 6)),
preds = c(rep("A", 10), rep("C", 8)),
type = "tpr")