tpr: True Positive Rate (R Documentation)
Measure to compare true observed labels with predicted labels in binary classification tasks.
tpr(truth, response, positive, na_value = NaN, ...)
recall(truth, response, positive, na_value = NaN, ...)
sensitivity(truth, response, positive, na_value = NaN, ...)
truth
(factor()) True (observed) labels. Must have exactly the same two levels and the same length as response.

response
(factor()) Predicted response labels. Must have exactly the same two levels and the same length as truth.

positive
(character(1)) Name of the positive class.

na_value
(numeric(1)) Value to return if the measure is undefined for the input. Default is NaN.

...
(any) Additional arguments. Currently ignored.
The True Positive Rate is defined as
\frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}.
This is also known as "recall", "sensitivity", or "probability of detection".
This measure is undefined if TP + FN = 0.
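To make the definition concrete, the following sketch computes the rate directly from the confusion-matrix counts and returns NaN in the undefined case. It is an illustration of the formula above, not the package implementation; tpr_manual() is a hypothetical helper.

# Hypothetical helper illustrating the definition; not part of the package
tpr_manual = function(truth, response, positive) {
  tp = sum(truth == positive & response == positive)  # true positives
  fn = sum(truth == positive & response != positive)  # false negatives
  if (tp + fn == 0) return(NaN)                       # measure undefined
  tp / (tp + fn)
}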
Performance value as numeric(1).
Type: "binary"
Range: [0, 1]
Minimize: FALSE
Required prediction: response
https://en.wikipedia.org/wiki/Template:DiagnosticTesting_Diagram
Goutte C, Gaussier E (2005). “A Probabilistic Interpretation of Precision, Recall and F-Score, with Implication for Evaluation.” In Lecture Notes in Computer Science, 345–359. doi:10.1007/978-3-540-31865-1_25.
Other Binary Classification Measures: auc(), bbrier(), dor(), fbeta(), fdr(), fn(), fnr(), fomr(), fp(), fpr(), gmean(), gpr(), npv(), ppv(), prauc(), tn(), tnr(), tp()
set.seed(1)
lvls = c("a", "b")
# Simulate observed and predicted labels for a binary task
truth = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
response = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
# True positive rate with "a" as the positive class
tpr(truth, response, positive = "a")
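The recall() and sensitivity() entry points share the same signature and compute the same quantity, so they return the same value here. The last lines sketch the na_value argument for the documented undefined case (no positive observations, so TP + FN = 0), assuming na_value is simply passed through when the measure is undefined.

# Aliases of tpr(); same result
recall(truth, response, positive = "a")
sensitivity(truth, response, positive = "a")

# No positive observations in truth: measure is undefined, na_value is returned
truth_neg = factor(rep("b", 10), levels = lvls)
tpr(truth_neg, response, positive = "a", na_value = NA_real_)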