sensitivity    R Documentation
Description

Given the observed and predicted values of categorical data (with any number of classes), computes the sensitivity (also known as recall), the metric that evaluates a model's ability to correctly identify the true positives of each available category.
Usage

sensitivity(observed, predicted, positive_class = NULL, remove_na = TRUE)
Arguments

observed
(factor) The vector of observed (true) values.

predicted
(factor) The vector of predicted values. It has to have the same levels as observed.

positive_class
(character(1)) The name of the level (class) to be taken as the positive class. Only used for binary data. If NULL (the default), the first level of observed is used.

remove_na
(logical(1)) Should NA values be removed before computing? TRUE by default.
Details

Given the following binary confusion matrix:

                       Predicted
                   Positive  Negative
Observed Positive     TP        FN
         Negative     FP        TN

sensitivity is computed as:

    Sensitivity = TP / (TP + FN)
Value

For binary data a single value is returned; for more than two categories, a vector of sensitivities is returned, one per category.
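The per-class computation above can be sketched directly from a confusion table. `manual_sensitivity` below is a hypothetical helper for illustration, not the package's implementation:

```r
# Sketch of per-class sensitivity (TP / (TP + FN)): for each observed
# class, the proportion of its instances that were predicted correctly.
manual_sensitivity <- function(observed, predicted) {
  cm <- table(observed, predicted)  # rows: observed classes, cols: predicted
  diag(cm) / rowSums(cm)            # diagonal = TP, row sum = TP + FN
}

obs  <- factor(c("a", "a", "b", "b"))
pred <- factor(c("a", "b", "b", "b"))
manual_sensitivity(obs, pred)
#   a   b
# 0.5 1.0
```

For binary data, the package returns only the value corresponding to the positive class; the multiclass case returns the whole vector, as shown here.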
See Also

Other categorical_metrics:
accuracy(),
brier_score(),
categorical_summary(),
confusion_matrix(),
f1_score(),
kappa_coeff(),
math_mode(),
matthews_coeff(),
pccc(),
pcic(),
pr_auc(),
precision(),
recall(),
roc_auc(),
specificity()
Examples

## Not run:
sensitivity(factor(c("a", "b")), factor(c("a", "b")))
sensitivity(factor(c("a", "b")), factor(c("b", "a")))
sensitivity(factor(c("a", "b")), factor(c("b", "b")))
sensitivity(factor(c(TRUE, FALSE)), factor(c(FALSE, TRUE)))
sensitivity(factor(c("a", "b", "a")), factor(c("b", "a", "c")))
## End(Not run)