The goal of this package is primarily to provide an easy way to obtain common confusion table metrics in a tidy fashion. The inspiration comes from Max Kuhn's caret package and its associated confusionMatrix function, and from the continuation of those efforts in the yardstick package. Here, practically all dependencies have been removed except for dplyr, and results are returned as tibbles, making for easier document presentation as well as the ability to peel off just the desired statistics.
All that is required is a vector of predicted classes and a vector of target classes, since that is typically what we are dealing with in such scenarios: predictions versus a target variable. These can be logical, integer/numeric, character, or factor, but the predictions should match the target in an obvious way.
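As a quick sketch of typical usage (the confusion_matrix() function name and its prediction/target argument names are assumptions, since the description above does not show the API):

library(confusionMatrix)

# Predicted classes and observed target classes, as described above
predicted <- c("yes", "no", "yes", "yes", "no", "no")
observed  <- c("yes", "no", "no",  "yes", "no", "yes")

# Hypothetical call; the argument names are assumed, not confirmed above
result <- confusion_matrix(prediction = predicted, target = observed)
result  # tibble-based output, so individual statistics can be peeled off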
Statistics provided include the following (a few of these are illustrated by hand in the sketch after the list):
Accuracy and Agreement:
- Accuracy, its bounds, and related values
- Cohen's Kappa
- Corrected Rand
Other Statistics:
- Sensitivity
- Specificity
- Prevalence
- Positive Predictive Value
- Negative Predictive Value
- Detection Prevalence
- Balanced Accuracy
- F1
Measures of Agreement/Association:
- Phi
- Yule's Q
- Peirce's science of the method (Youden's J)
- Jaccard
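As a by-hand illustration of a few of the statistics listed above, here is a base R sketch of the definitions only, not the package's internal code:

# Toy predictions and targets with a 'pos'/'neg' coding
predicted <- factor(c("pos", "pos", "neg", "pos", "neg", "neg"), levels = c("pos", "neg"))
target    <- factor(c("pos", "neg", "neg", "pos", "pos", "neg"), levels = c("pos", "neg"))

tab <- table(Predicted = predicted, Target = target)
TP <- tab["pos", "pos"]; FP <- tab["pos", "neg"]
FN <- tab["neg", "pos"]; TN <- tab["neg", "neg"]

sensitivity <- TP / (TP + FN)                 # true positive rate
specificity <- TN / (TN + FP)                 # true negative rate
ppv         <- TP / (TP + FP)                 # positive predictive value
f1          <- 2 * TP / (2 * TP + FP + FN)    # F1 score
youdens_j   <- sensitivity + specificity - 1  # Peirce's/Youden's J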
Maintainer: Michael Clark <micl@umich.edu>
Useful links:
Report bugs at https://github.com/m-clark/confusionMatrix/issues