fbeta: F-beta Score
Measure to compare true observed labels with predicted labels in binary classification tasks.
fbeta(truth, response, positive, beta = 1, na_value = NaN, ...)
truth (factor()): True (observed) labels. Must have the same two levels and the same length as response.
response (factor()): Predicted response labels. Must have the same two levels and the same length as truth.
positive (character(1)): Name of the positive class.
beta (numeric(1)): Parameter to give either precision or recall more weight. Default is 1, resulting in balanced weights.
na_value (numeric(1)): Value to return if the measure is not defined for the input (see the note below). Default is NaN.
... (any): Additional arguments. Currently ignored.
With P as precision() and R as recall(), the F-beta score is defined as

(1 + \beta^2) \frac{P \cdot R}{(\beta^2 P) + R}.

It measures the effectiveness of retrieval with respect to a user who attaches \beta times as much importance to recall as to precision. For \beta = 1, this measure is called the "F1" score.
This measure is undefined if precision or recall is undefined, i.e. if TP + FP = 0 or TP + FN = 0; in that case, na_value is returned.
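As a minimal sketch of the formula above (base R only; fbeta_manual is a hypothetical helper, not part of the package), the score can be computed directly from the confusion counts:

fbeta_manual = function(tp, fp, fn, beta = 1) {
  P = tp / (tp + fp)  # precision
  R = tp / (tp + fn)  # recall
  (1 + beta^2) * P * R / (beta^2 * P + R)
}
fbeta_manual(tp = 6, fp = 2, fn = 1)            # F1: P = 0.75, R = 6/7
fbeta_manual(tp = 6, fp = 2, fn = 1, beta = 2)  # recall weighted more heavily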
Performance value as numeric(1).
Type: "binary"
Range: [0, 1]
Minimize: FALSE
Required prediction: response
Van Rijsbergen, C. J. (1979). Information Retrieval, 2nd edition. Butterworth-Heinemann, Newton, MA, USA. ISBN 0408709294.
Goutte C, Gaussier E (2005). “A Probabilistic Interpretation of Precision, Recall and F-Score, with Implication for Evaluation.” In Lecture Notes in Computer Science, 345–359. doi:10.1007/978-3-540-31865-1_25.
Other Binary Classification Measures: auc(), bbrier(), dor(), fdr(), fn(), fnr(), fomr(), fp(), fpr(), gmean(), gpr(), npv(), ppv(), prauc(), tn(), tnr(), tp(), tpr()
library(mlr3measures)

set.seed(1)
lvls = c("a", "b")
truth = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
response = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
fbeta(truth, response, positive = "a")
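A short usage sketch of the beta and na_value arguments on the same data (the all-"b" response below is constructed so that precision is undefined):

# beta > 1 weighs recall more heavily, beta < 1 favors precision
fbeta(truth, response, positive = "a", beta = 2)
fbeta(truth, response, positive = "a", beta = 0.5)

# with no predicted positives, TP + FP = 0 and na_value is returned
all_b = factor(rep("b", 10), levels = lvls)
fbeta(truth, all_b, positive = "a", na_value = NA_real_)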