dx_accuracy | R Documentation
Calculates the proportion of correct predictions (True Positives + True Negatives) over all cases from a confusion matrix object, providing a measure of the classifier's overall correctness.
dx_accuracy(cm, detail = "full", ...)
cm: A dx_cm object created by dx_cm().

detail: Character specifying the level of detail in the output: "simple" for the raw estimate, "full" for a detailed estimate including 95% confidence intervals.

...: Additional arguments to pass to the metric_binomial function.
Accuracy = \frac{\text{True Positives} + \text{True Negatives}}{\text{Total Cases}}
Accuracy is one of the most intuitive performance measures: it is simply the ratio of correctly predicted observations to total observations. It is a common starting point for evaluating a classifier. However, it is not suitable for unbalanced classes, because it tends to be misleadingly high when the class of interest is underrepresented. For detailed diagnostics, including confidence intervals, specify detail = "full".
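To make the formula and the class-imbalance caveat concrete, here is a small base-R sketch; the confusion-matrix counts are invented for illustration and are not produced by the package.

# Hypothetical confusion-matrix counts (illustrative only)
tp <- 5   # true positives
tn <- 90  # true negatives
fp <- 3   # false positives
fn <- 2   # false negatives

accuracy <- (tp + tn) / (tp + tn + fp + fn)
accuracy  # 0.95

# With only 7 actual positives out of 100 cases, a classifier that always
# predicts "negative" still reaches 0.93 accuracy, which is why accuracy
# alone can be misleading for unbalanced classes.
(0 + 93) / (0 + 93 + 0 + 7)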
Depending on the detail parameter, returns either a single numeric value for the calculated metric or a data frame/tibble with detailed diagnostics, including confidence intervals and other metrics relevant to interpreting the result.
dx_cm() to understand how to create and interact with a dx_cm object.
cm <- dx_cm(
  dx_heart_failure$predicted,  # predicted probabilities
  dx_heart_failure$truth,      # observed outcomes
  threshold = 0.3, poslabel = 1
)
simple_accuracy <- dx_accuracy(cm, detail = "simple")  # raw estimate only
detailed_accuracy <- dx_accuracy(cm)                    # full detail (default)
print(simple_accuracy)
print(detailed_accuracy)
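As a rough sanity check, accuracy can also be recomputed directly from the data. This is only a sketch: it assumes dx_heart_failure$truth holds the observed outcome coded 0/1 and that values at or above the threshold are classified as positive, so the result may differ slightly depending on how the package handles the threshold comparison.

# Manual recomputation (sketch; assumes truth is coded 0/1 and the
# threshold comparison is >=)
predicted_class <- as.integer(dx_heart_failure$predicted >= 0.3)
manual_accuracy <- mean(predicted_class == dx_heart_failure$truth)
manual_accuracy  # should be close to simple_accuracy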