accuracy: Classification accuracy

View source: R/deepMetrics.r

accuracy    R Documentation

Classification accuracy

Description

Classification accuracy

Usage

accuracy(
  actuals,
  preds,
  type = c("standard", "misclass", "tpr", "tnr", "ppv", "npv", "fnr", "fpr", "fdr",
    "for", "lrplus", "lrminus", "dor", "ts", "f1", "mcc", "fm", "kappa"),
  compound = FALSE,
  na.rm = FALSE
)

Arguments

actuals

Numeric data (vector, array, matrix, data frame or list) of ground truth (actual) values.

preds

Numeric data (vector, array, matrix, data frame or list) of predicted values.

type

The type of accuracy measure, derived from the confusion matrix, that is calculated.

compound

A logical value indicating whether the metric score is calculated separately for each label (default FALSE) or aggregated across all labels (TRUE).

na.rm

A logical value indicating whether actual and prediction pairs with at least one NA value should be ignored.

Details

The following accuracy types are implemented:

  • Standard: Number of correct predictions / Total number of predictions

  • Misclassification error: Number of incorrect predictions / Total number of predictions

  • TPR (True Positive Rate), also sensitivity, recall or hit rate: TP / (TP + FN)

  • TNR (True Negative Rate), also specificity or selectivity: TN / (TN + FP)

  • PPV (Positive Predictive Value), also precision: TP / (TP + FP)

  • NPV (Negative Predictive Value): TN / (TN + FN)

  • FNR (False Negative Rate), also miss rate: FN / (FN + TP)

  • FPR (False Positive Rate), also fall-out: FP / (FP + TN)

  • FDR (False Discovery Rate): FP / (FP + TP)

  • FOR (False Omission Rate): FN / (FN + TN)

  • LR+ (Positive Likelihood Ratio): TPR / FPR

  • LR- (Negative Likelihood Ratio): FNR / TNR

  • DOR (Diagnostic Odds Ratio): LR+ / LR-

  • TS (Threat Score), also critical success index: TP / (TP + FN + FP)

  • F1 score: 2 * Precision * Recall / (Precision + Recall)

  • MCC (Matthews Correlation Coefficient), also phi coefficient: (TP * TN - FP * FN) / \sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))

  • FM (Fowlkes-Mallows index): \sqrt((TP / (TP + FP)) * (TP / (TP + FN)))

  • Kappa statistic: (p0 - pe) / (1 - pe), where p0 is the observed agreement and pe is the agreement expected by chance

Standard accuracy and misclassification error are mainly used for single-label classification problems, while the others can also be used for multi-label classification problems.
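The derivation of these quantities from a confusion matrix can be sketched in a few lines of base R. The snippet below is an illustration only, not the package's internal implementation; the helper confusion_counts() and the one-vs-rest treatment of label "A" as the positive class are assumptions made for this example.

# Count one-vs-rest confusion matrix cells for a single positive class
confusion_counts <- function(actuals, preds, positive) {
  list(tp = sum(actuals == positive & preds == positive),
       tn = sum(actuals != positive & preds != positive),
       fp = sum(actuals != positive & preds == positive),
       fn = sum(actuals == positive & preds != positive))
}

actuals <- c(rep("A", 6), rep("B", 6), rep("C", 6))
preds   <- c(rep("A", 4), "B", "C", rep("B", 5), "A", rep("C", 6))
cm <- confusion_counts(actuals, preds, positive = "A")

tpr <- cm$tp / (cm$tp + cm$fn)        # sensitivity / recall
ppv <- cm$tp / (cm$tp + cm$fp)        # precision
f1  <- 2 * ppv * tpr / (ppv + tpr)    # F1 score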

Value

The type-specific accuracy score of a classification problem.

See Also

Other Metrics: cross_entropy(), dice(), entropy(), erf(), erfc(), erfcinv(), erfinv(), gini_impurity(), huber_loss(), iou(), log_cosh_loss(), mae(), mape(), mse(), msle(), quantile_loss(), rmse(), rmsle(), rmspe(), sse(), stderror(), vc(), wape(), wmape()

Examples

accuracy(actuals = c(rep("A", 6), rep("B", 6), rep("C", 6)),
         preds = c(rep("A", 4), "B", "C", rep("B", 5), "A", rep("C", 6)),
         type = "standard")

# preds does not cover all categories of actuals
accuracy(actuals = c(rep("A", 6), rep("B", 6), rep("C", 6)),
         preds = c(rep("A", 10), rep("C", 8)),
         type = "tpr")
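
# The calls below are a sketch of the compound and na.rm arguments and are not
# taken from the package's own examples; the exact shape of the returned score
# (per-label vector vs. single value) may depend on the package version.

# aggregate the true positive rate across all labels instead of per label
accuracy(actuals = c(rep("A", 6), rep("B", 6), rep("C", 6)),
         preds = c(rep("A", 4), "B", "C", rep("B", 5), "A", rep("C", 6)),
         type = "tpr",
         compound = TRUE)

# ignore actual/prediction pairs that contain an NA value
accuracy(actuals = c("A", "B", NA, "B"),
         preds = c("A", NA, "A", "B"),
         type = "standard",
         na.rm = TRUE)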

