f_meas (R Documentation)
These functions calculate the f_meas() of a measurement system for finding relevant documents compared to reference results (the truth regarding relevance). Highly related functions are recall() and precision().
f_meas(data, ...)

## S3 method for class 'data.frame'
f_meas(
  data,
  truth,
  estimate,
  beta = 1,
  estimator = NULL,
  na_rm = TRUE,
  case_weights = NULL,
  event_level = yardstick_event_level(),
  ...
)

f_meas_vec(
  truth,
  estimate,
  beta = 1,
  estimator = NULL,
  na_rm = TRUE,
  case_weights = NULL,
  event_level = yardstick_event_level(),
  ...
)
data: Either a data.frame containing the columns specified by the truth and estimate arguments, or a table/matrix where the true class results should be in the columns of the table.

...: Not currently used.

truth: The column identifier for the true class results (that is a factor). This should be an unquoted column name. For the _vec() version, a factor vector.

estimate: The column identifier for the predicted class results (that is also a factor). As with truth, this should be an unquoted column name. For the _vec() version, a factor vector.

beta: A numeric value used to weight precision and recall. A value of 1 is traditionally used and corresponds to the harmonic mean of the two values, but other values weight recall beta times more heavily than precision.

estimator: One of "binary", "macro", "micro", or "macro_weighted" to specify the type of averaging to be done. "binary" is only relevant for the two-class case; the other three are general methods for calculating multiclass metrics. The default will automatically choose "binary" or "macro" based on estimate.

na_rm: A logical value indicating whether NA values should be stripped before the computation proceeds.

case_weights: The optional column identifier for case weights. This should be an unquoted column name that evaluates to a numeric column in data. For the _vec() version, a numeric vector.

event_level: A single string. Either "first" or "second" to specify which level of truth to consider as the "event". This argument is only applicable when estimator = "binary". The default, yardstick_event_level(), uses the first level.
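As a minimal sketch of how case weights can be supplied (the wts column below is invented for illustration; any numeric column of data works), one might write:

library(yardstick)
library(dplyr)

data("two_class_example")

# Hypothetical importance weights; the column name `wts` is illustrative
weighted_example <- two_class_example %>%
  mutate(wts = ifelse(truth == "Class1", 2, 1))

# Pass the weight column as an unquoted column name
f_meas(weighted_example, truth, predicted, case_weights = wts)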
The measure "F" is a combination of precision and recall (see below).
A tibble with columns .metric, .estimator, and .estimate and 1 row of values. For grouped data frames, the number of rows returned will be the same as the number of groups. For f_meas_vec(), a single numeric value (or NA).
There is no common convention on which factor level should automatically be considered the "event" or "positive" result when computing binary classification metrics. In yardstick, the default is to use the first level. To alter this, change the argument event_level to "second" to consider the last level of the factor the level of interest. For multiclass extensions involving one-vs-all comparisons (such as macro averaging), this option is ignored and the "one" level is always the relevant result.
Macro, micro, and macro-weighted averaging are available for this metric. The default is to select macro averaging if a truth factor with more than 2 levels is provided. Otherwise, a standard binary calculation is done. See vignette("multiclass", "yardstick") for more information.
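As a rough sketch of how the averaging choices differ, the three multiclass estimators can be compared directly on the hpc_cv data used in the examples below; only the way the per-class results are combined changes:

library(yardstick)
library(dplyr)

data(hpc_cv)

# One resample of the multiclass example data
fold1 <- hpc_cv %>% filter(Resample == "Fold01")

# Same data, different ways of averaging the one-vs-all results
f_meas(fold1, obs, pred, estimator = "macro")
f_meas(fold1, obs, pred, estimator = "micro")
f_meas(fold1, obs, pred, estimator = "macro_weighted")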
Suppose a 2x2 table with notation:

             Reference
Predicted    Relevant   Irrelevant
Relevant     A          B
Irrelevant   C          D
The formulas used here are:

recall = A / (A + C)
precision = A / (A + B)
F_meas = (1 + beta^2) * precision * recall / ((beta^2 * precision) + recall)
See the references for discussions of the statistics.
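To make the formulas concrete, here is a minimal sketch that plugs made-up cell counts into the formulas above and checks the result against f_meas_vec(); the counts A, B, C, and D are invented for illustration:

library(yardstick)

# Invented cell counts for the 2x2 table above
A <- 80  # predicted Relevant, truly Relevant
B <- 10  # predicted Relevant, truly Irrelevant
C <- 20  # predicted Irrelevant, truly Relevant
D <- 60  # predicted Irrelevant, truly Irrelevant

rec  <- A / (A + C)   # 0.8
prec <- A / (A + B)   # ~0.889
beta <- 1
(1 + beta^2) * prec * rec / ((beta^2 * prec) + rec)   # ~0.842

# The same value from f_meas_vec(), rebuilding the table as factor vectors
lv <- c("Relevant", "Irrelevant")
truth    <- factor(c(rep("Relevant", A + C), rep("Irrelevant", B + D)), levels = lv)
estimate <- factor(c(rep("Relevant", A), rep("Irrelevant", C),
                     rep("Relevant", B), rep("Irrelevant", D)), levels = lv)
f_meas_vec(truth, estimate)   # ~0.842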
Author(s): Max Kuhn

References:

Buckland, M., & Gey, F. (1994). The relationship between Recall and Precision. Journal of the American Society for Information Science, 45(1), 12-19.

Powers, D. (2007). Evaluation: From Precision, Recall and F Factor to ROC, Informedness, Markedness and Correlation. Technical Report SIE-07-001, Flinders University.
Other class metrics: accuracy(), bal_accuracy(), detection_prevalence(), j_index(), kap(), mcc(), npv(), ppv(), precision(), recall(), sens(), spec()

Other relevance metrics: precision(), recall()
# Two class
library(yardstick)
data("two_class_example")
f_meas(two_class_example, truth, predicted)
# Multiclass
library(dplyr)
data(hpc_cv)
hpc_cv %>%
filter(Resample == "Fold01") %>%
f_meas(obs, pred)
# Groups are respected
hpc_cv %>%
group_by(Resample) %>%
f_meas(obs, pred)
# Weighted macro averaging
hpc_cv %>%
group_by(Resample) %>%
f_meas(obs, pred, estimator = "macro_weighted")
# Vector version
f_meas_vec(
two_class_example$truth,
two_class_example$predicted
)
# Making Class2 the "relevant" level
f_meas_vec(
two_class_example$truth,
two_class_example$predicted,
event_level = "second"
)
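# Weighting recall more heavily than precision via beta
# (beta = 2 counts recall twice as heavily, per the beta argument above)
f_meas_vec(
  two_class_example$truth,
  two_class_example$predicted,
  beta = 2
)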