dx_accuracy: Calculate Accuracy

View source: R/dx_metrics.R

dx_accuracy {diagnosticSummary} R Documentation

Calculate Accuracy

Description

Calculates the proportion of correct predictions (True Positives + True Negatives) over all cases from a confusion matrix object, providing a measure of the classifier's overall correctness.

Usage

dx_accuracy(cm, detail = "full", ...)

Arguments

cm

A dx_cm object created by dx_cm().

detail

Character specifying the level of detail in the output: "simple" for the raw estimate, "full" for a detailed estimate including 95% confidence intervals.

...

Additional arguments passed to the metric_binomial function, such as citype to select the confidence interval method (see the sketch at the end of the Examples section).

Details

\text{Accuracy} = \frac{\text{True Positives} + \text{True Negatives}}{\text{Total Cases}}

Accuracy is one of the most intuitive performance measures: the proportion of correctly predicted observations among all observations. It is a common starting point for evaluating a classifier. However, it is unsuitable for unbalanced classes, as it tends to be misleadingly high when the class of interest is underrepresented. For detailed diagnostics, including confidence intervals, specify detail = "full".
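The pitfall is easy to reproduce by hand. The following base R sketch (independent of this package) applies the formula to a degenerate classifier on imbalanced data:

# Base R illustration: 5% prevalence, classifier always predicts 0
truth     <- c(rep(0, 95), rep(1, 5))
predicted <- rep(0, 100)
tp <- sum(predicted == 1 & truth == 1)  # 0 true positives
tn <- sum(predicted == 0 & truth == 0)  # 95 true negatives
(tp + tn) / length(truth)               # 0.95, yet no positive case is ever detected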

Value

Depending on the detail parameter, returns either a single numeric value (the accuracy estimate) or a data frame/tibble of detailed diagnostics, including confidence intervals and other values useful for interpreting the estimate.

See Also

dx_cm() to understand how to create and interact with a 'dx_cm' object.

Examples

# Build a confusion matrix from predicted probabilities and true labels
cm <- dx_cm(
  dx_heart_failure$predicted,
  dx_heart_failure$truth,
  threshold = 0.3, poslabel = 1
)
simple_accuracy <- dx_accuracy(cm, detail = "simple") # numeric estimate only
detailed_accuracy <- dx_accuracy(cm)                  # full output with 95% CI
print(simple_accuracy)
print(detailed_accuracy)
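Extra arguments supplied through ... are forwarded to metric_binomial(). A minimal sketch, reusing cm from above and assuming metric_binomial() accepts a citype value such as "exact" (the method name is illustrative, not confirmed by this page):

# Request a specific confidence interval method via ...;
# "exact" is an assumed citype value
exact_accuracy <- dx_accuracy(cm, detail = "full", citype = "exact")
print(exact_accuracy)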
