equal_opportunity | R Documentation
Equal opportunity is satisfied when a model's predictions have the same true positive rate (and, equivalently, the same false negative rate) across protected groups. A value of 0 indicates parity across groups.

equal_opportunity() is calculated as the difference between the largest and smallest value of sens() across groups.

Equal opportunity is sometimes referred to as equality of opportunity.

See the "Measuring Disparity" section for details on implementation.
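To make the calculation concrete, here is a minimal base-R sketch (not part of the package; the toy data and column names are invented for illustration) that computes the metric by hand: sensitivity within each group, then the range of those values.

```r
# Toy binary-classification results; `group` is the sensitive feature
d <- data.frame(
  group = c("a", "a", "a", "b", "b", "b"),
  truth = c(1, 1, 0, 1, 1, 0),
  pred  = c(1, 0, 0, 1, 1, 0)
)

# Sensitivity (true positive rate) within each group
sens_by_group <- sapply(split(d, d$group), function(g) {
  mean(g$pred[g$truth == 1] == 1)
})
sens_by_group
# a = 0.5, b = 1.0

# Equal opportunity: difference between largest and smallest sensitivity
diff(range(sens_by_group))
# 0.5
```

In the package itself this per-group computation and summarization is handled for you; the sketch only mirrors the definition above.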
equal_opportunity(by)
by: The column identifier for the sensitive feature. This should be an unquoted column name referring to a column in the un-preprocessed data.
This function outputs a yardstick fairness metric function. Given a grouping variable by, equal_opportunity() will return a yardstick metric function that is associated with the data-variable grouping by and a post-processor. The outputted function will first generate a set of sens() metric values by group before summarizing across groups using the post-processing function.

The outputted function only has a data frame method and is intended to be used as part of a metric set.
By default, this function takes the difference in range of the sens() .estimates across groups. That is, the maximum pair-wise disparity between groups is the return value of equal_opportunity()'s .estimate.
For finer control of group treatment, construct a context-aware fairness metric with the new_groupwise_metric() function by passing a custom aggregate function:

# the actual default `aggregate` is:
diff_range <- function(x, ...) {
  diff(range(x$.estimate))
}

equal_opportunity_2 <- new_groupwise_metric(
  fn = sens,
  name = "equal_opportunity_2",
  aggregate = diff_range
)
In aggregate(), x is the metric_set() output with sens() values for each group, and ... gives additional arguments (such as a grouping level to refer to as the "baseline") to pass to the function outputted by equal_opportunity_2() for context.
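As one illustration of a non-default aggregate, the sketch below compares every group to a designated baseline group rather than taking the full range. It is a hypothetical example, not a package API: the grouping column name ("Resample", to match the examples below) and the default baseline level ("Fold01") are assumptions you would adjust to your own data.

```r
# Hypothetical aggregate: largest absolute gap in sensitivity from a
# baseline group. Assumes `x` contains the grouping column (here
# hard-coded as "Resample") and an `.estimate` column.
diff_from_baseline <- function(x, baseline = "Fold01", ...) {
  base <- x$.estimate[x$Resample == baseline]
  max(abs(x$.estimate - base))
}

equal_opportunity_vs_baseline <- new_groupwise_metric(
  fn = sens,
  name = "equal_opportunity_vs_baseline",
  aggregate = diff_from_baseline
)
```

The resulting metric function is used exactly like equal_opportunity(): pass the grouping variable, then include it in a metric set.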
Hardt, M., Price, E., & Srebro, N. (2016). "Equality of opportunity in supervised learning". Advances in neural information processing systems, 29.
Verma, S., & Rubin, J. (2018). "Fairness definitions explained". In Proceedings of the international workshop on software fairness (pp. 1-7).
Bird, S., Dudík, M., Edgar, R., Horn, B., Lutz, R., Milan, V., ... & Walker, K. (2020). "Fairlearn: A toolkit for assessing and improving fairness in AI". Microsoft, Tech. Rep. MSR-TR-2020-32.
Other fairness metrics: demographic_parity(), equalized_odds()
library(dplyr)
data(hpc_cv)
head(hpc_cv)
# evaluate `equal_opportunity()` by Resample
m_set <- metric_set(equal_opportunity(Resample))
# use output like any other metric set
hpc_cv %>%
m_set(truth = obs, estimate = pred)
# can mix fairness metrics and regular metrics
m_set_2 <- metric_set(sens, equal_opportunity(Resample))
hpc_cv %>%
m_set_2(truth = obs, estimate = pred)