In a previous article we discussed why it's a good idea to prefer probability models to "hard" classification models, and why you should delay setting "hard" classification rules as long as possible. But decisions have to be made, and eventually you will have to set that threshold. How do you do it?

A good threshold balances classifier precision/recall or sensitivity/specificity in a way that best meets the project or business needs. One way to quantify and think about this balance is the notion of model utility, which maps the performance of a model to some notion of the value achieved by that performance. In this article, we demonstrate the use of sigr::model_utility() to estimate model utility and pick model thresholds for classification problems.

The Example Problem

Let's imagine that we have a sales application, and we are trying to decide which prospects to target. We want to build a model that predicts the probability of conversion, based on certain prospect characteristics.

For our example, suppose we have a population where true positives are relatively rare (about 1% of the population), and we have trained a model that returns predictions like this on an evaluation set, d:

#
# data generator functions
#
set.seed(2020)

y_example <- function(n, prevalence = 0.5) {
  data.frame(
    y = sample(
      c(TRUE, FALSE), 
      size = n, 
      replace = TRUE,
      prob = c(prevalence, 1 - prevalence))
  )
}

# draw scores for data frame d: positive examples get scores from
# Beta(shape1_pos, shape2_pos), negatives from Beta(shape1_neg, shape2_neg)
beta_variable <- function(
  d, 
  shape1_pos, shape2_pos, 
  shape1_neg, shape2_neg) {
  score <- numeric(nrow(d))
  score[d$y] <- rbeta(sum(d$y), shape1 = shape1_pos, shape2 = shape2_pos)
  score[!d$y] <- rbeta(sum(!d$y), shape1 = shape1_neg, shape2 = shape2_neg)
  score
}

#
# generate the example. This looks complicated, but we are trying to generate
# predictions consistent with a calibrated model (like logistic regression)
#
prevalence <- 0.01

a_pos <- 6
b_pos <- 300
a_neg <- 3

p_odds_ratio <- prevalence / (1 - prevalence)
pos_mean <- a_pos / (a_pos + b_pos)
# solve for b_neg so the scores are calibrated: the overall mean score
# equals the prevalence, i.e.
# prevalence * pos_mean + (1 - prevalence) * neg_mean == prevalence
b_neg <- a_neg * (1 / (p_odds_ratio * (1 - pos_mean)) - 1)
stopifnot(b_neg >= 0)
neg_mean <- a_neg / (a_neg + b_neg)
# confirm the calibration condition holds
check_p <- prevalence * pos_mean + (1 - prevalence) * neg_mean
stopifnot(abs(prevalence - check_p) < 1e-5)

d <- y_example(10000, prevalence = prevalence)
d$predicted_probability <- beta_variable(
  d,
  shape1_pos = a_pos, 
  shape2_pos = b_pos,
  shape1_neg = a_neg,
  shape2_neg = b_neg)

# rename the outcome column to better match the example
colnames(d) <- c("converted", "predicted_probability")
library(WVPlots)
DoubleDensityPlot(d, "predicted_probability", "converted",
                  title = "Model score distribution by true outcome")


We want to contact prospects with a higher probability of converting; that is, prospects who score above a certain threshold. How do we set that threshold? Note that the usual default threshold of 0.5 will fail spectacularly with this model, since both positive and negative examples score quite low. This is not uncommon in situations where positive prevalence is low.
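As a quick check (our own sketch, using the data frame generated above), we can count how many prospects the default threshold would actually select:

# how many prospects does the default 0.5 threshold select?
# (with these score distributions, essentially none)
sum(d$predicted_probability >= 0.5)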

Picking a threshold based on model performance

We might try picking the threshold by looking at model performance. Let's look at precision (the probability that a sample scored as "true" really is true) and recall (the fraction of conversions found) as a function of threshold.

ThresholdPlot(d, "predicted_probability", "converted",
              title = "Precision and Recall as a function of threshold",
              metrics = c("precision", "recall"))

For the sake of efficiency, we'd like to reduce the number of unsuccessful calls as much as possible, which implies we'd like a decision policy with high precision. The best precision we can get with this model occurs at a threshold of around 0.034 to 0.035, and at that point the recall is quite low. Is that classifier good enough for our business needs? This can be hard to tell, as the business needs are likely expressed in dollars, rather than in false positives, false negatives, and so on.
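To double-check where precision peaks, we can compute it directly over a grid of thresholds. This is a quick sketch of our own, not part of the original analysis:

# compute precision directly over a grid of thresholds (a sketch)
grid <- seq(0.01, 0.06, by = 0.001)
precision_at <- function(t) {
  taken <- d$predicted_probability >= t
  if (!any(taken)) return(NA_real_)
  mean(d$converted[taken])  # fraction of selected prospects that convert
}
precisions <- vapply(grid, precision_at, numeric(1))
grid[which.max(precisions)]  # threshold with the highest precision on this grid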

Picking a threshold based on model utility

The way to decide if a possible threshold meets business needs is to attach utilities to the decisions -- right and wrong -- that the model will make. To do this, we need to assign costs and rewards to contacting (or not contacting) a prospect.

Suppose every contact we make costs \$5, and every successful conversion brings in \$100 in net revenue. For demonstration purposes, we will also add a small penalty for every missed prospect (one penny), and a small reward for every prospect we correctly ignore (again, one penny). Let's add those costs to our model frame. (You can leave the last two values at zero, if you prefer).

d$true_positive_value <- 100 - 5   # net revenue - cost
d$false_positive_value <- -5       # the cost of a call
d$true_negative_value <-  0.01     # a small reward for getting them right
d$false_negative_value <- -0.01    # a small penalty for having missed them

As we vary the decision threshold, we vary the number of prospects contacted, and the number of successful conversions. We can use the costs and rewards above to calculate the total value (or the total utility) realized by a decision policy over the evaluation set. The threshold that realizes the highest utility is the best threshold to use with a given model (for now, we set aside the question of the uncertainty of this estimate).
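To make the bookkeeping concrete, here is the total value of one example policy computed by hand. This is a sketch of our own; the model_utility() call below performs the same calculation for every candidate threshold at once:

# total value of the policy "contact when score >= threshold",
# computed directly from the per-row value columns (a sketch)
total_value_at <- function(d, threshold) {
  taken <- d$predicted_probability >= threshold
  value <- ifelse(
    taken,
    ifelse(d$converted, d$true_positive_value, d$false_positive_value),
    ifelse(d$converted, d$false_negative_value, d$true_negative_value))
  sum(value)
}
total_value_at(d, 0.02)  # utility of contacting everyone scoring at least 0.02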

The sigr::model_utility() function can calculate all the costs for various thresholds.

library(sigr)
values <- model_utility(d, 
                        model_name = 'predicted_probability', 
                        outcome_name = 'converted')

The model_utility() function returns a data frame with the following columns (with one example row):

t(values[1, ])

Each row of values gives the counts and values for a classifier rule that labels cases as TRUE when predicted_probability >= threshold. Now we can determine the total value returned by the model on our evaluation set, as a function of threshold.
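As a sanity check (our own, not from sigr's documentation), count_taken should match a direct count at the corresponding threshold:

# check one row: count_taken equals the number of rows scoring at or above threshold
i <- 100  # an arbitrary row
sum(d$predicted_probability >= values$threshold[i]) == values$count_taken[i]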

library(ggplot2)

t_bound <- 0.015
vhigh <- subset(values, threshold >= t_bound) # just look at thresholds >= t_bound
p <- ggplot(vhigh, aes(x = threshold, y = total_value)) + 
  geom_line() + 
  ggtitle("Estimated model utility as a function of threshold")

# find the maximum utility
max_ix <- which.max(vhigh$total_value)
best_threshold <- vhigh$threshold[max_ix]

p + geom_vline(xintercept = best_threshold, linetype=3)

# print out some information about the optimum
library(wrapr)  # for the dot pipe %.>%
(best_info <- vhigh[max_ix, c("threshold", "count_taken", 
                              "fraction_taken", "total_value")]) %.>%
  knitr::kable(.)

The graph shows that a policy that contacts too many prospects (lower threshold) will lose money (negative utility), while contacting too few prospects (higher threshold) drives the utility toward zero. The best tradeoff with this model is a threshold of `r format(best_threshold, digits = 2)`, which translates to contacting the top `r 100*vhigh$fraction_taken[max_ix]`% of the prospects.

If we go back to the precision/recall graph, we see that this optimal threshold actually doesn't give us the policy with the highest precision, but it makes up for that by making more successful contacts. Picking the threshold from the utility calculation is easier and more reliable than just eyeballing a threshold from the abstract recall/precision graph.
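For instance, we can read off the precision and recall actually achieved at the utility-optimal threshold (a quick sketch using the objects computed above):

# precision and recall of the policy at the utility-optimal threshold
taken <- d$predicted_probability >= best_threshold
c(precision = mean(d$converted[taken]),
  recall = sum(d$converted & taken) / sum(d$converted))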

Comparing to best possible performance

We can compare this model's optimal policy to the best possible performance on this data (contacting all of the successful prospects, and only them), simply by positing a "wizard model" that scores true cases as 1.0 and false cases as 0.0. In this case any threshold between zero and one (say, 0.5) will perform the same. For this situation, the model_utility() function will return three rows: one for the policy of contacting everyone (threshold = 0), one for threshold = 0.5, and one for the policy of contacting no one (marked as threshold = NA).

# add a column for the wizard model
d$wizard <- with(d, ifelse(converted, 1.0, 0.0))

# calculate the wizard's model utilities
wizard_values <- model_utility(d, 
                               model_name = 'wizard',
                               outcome_name = 'converted')
wizard_values[, c("threshold", "count_taken", 
                  "fraction_taken", "total_value")] %.>%
  knitr::kable(.)
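As a cross-check (our own sketch), the wizard's value at threshold = 0.5 should equal the value of contacting exactly the converters and correctly ignoring everyone else:

# value of contacting exactly the converters, computed directly
sum(d$true_positive_value[d$converted]) + sum(d$true_negative_value[!d$converted])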

# amount of potential value in this population
potential <- wizard_values$total_value[2]

# what we realized
realized <- best_info$total_value

# fraction realized
frac_realized <- realized / potential

The threshold = 0.5 row tells us that the potential value of this population sample is \$`r format(potential)`, of which our model realized \$`r format(realized)`, or `r format(100*frac_realized, digits = 3)`% of the total available value.

Advanced: Variable Utilities

In the above example, we assigned constant utilities and costs to the data set. Here, instead of assuming a fixed revenue of \$100 on conversions, we'll assume that the projected potential revenue varies by prospect (with an average of \$100). This potential value could represent the sizes of contracts that we are bidding on, which vary from prospect to prospect.

# replace the true positive value with a varying value
d$true_positive_value <- rnorm(nrow(d), mean = 100, sd = 25) - 5   # revenue - cost

# recalculate the values and potential
values <- model_utility(d, 
                        model_name = 'predicted_probability', 
                        outcome_name = 'converted')

wizard_values <- model_utility(d, 
                               model_name = 'wizard',
                               outcome_name = 'converted')

Here we replot the utility curve.

vhigh <- subset(values, threshold >= t_bound) # just look at thresholds >= t_bound
p <- ggplot(vhigh, aes(x = threshold, y = total_value)) + 
  geom_line() + 
  ggtitle("Estimated model utility as a function of threshold, variable utility case")

# find the maximum utility
max_ix <- which.max(vhigh$total_value)
best_threshold <- vhigh$threshold[max_ix]

p + geom_vline(xintercept = best_threshold, linetype=3)

# print out some information about the optimum
vhigh[max_ix, c("threshold", "count_taken", "fraction_taken", "total_value")] %.>%
  knitr::kable(.)

# compare to potential
wizard_values[2, c("threshold", "count_taken", "fraction_taken", "total_value")] %.>%
  knitr::kable(.)

Reporting Uncertainty

Up until now, we have reported a single point estimate of utility. It's better to report a distribution, or at least attach an estimate of uncertainty. In our next note, we will deal with estimating and reporting uncertainty.
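For example, one simple way to gauge that uncertainty is to bootstrap the evaluation set and recompute the utility at the chosen threshold. This is a sketch only, and not necessarily the method the follow-up note will use:

# bootstrap resample the evaluation set and recompute the utility
boot_value <- replicate(200, {
  di <- d[sample(nrow(d), replace = TRUE), ]
  vi <- model_utility(di,
                      model_name = 'predicted_probability',
                      outcome_name = 'converted')
  # total_value at the candidate threshold closest to best_threshold
  vi$total_value[which.min(abs(vi$threshold - best_threshold))]
})
quantile(boot_value, probs = c(0.025, 0.975))  # a rough 95% interval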

Conclusion

We have shown how to select a classifier threshold directly from policy utility, using sigr::model_utility(). Thinking in terms of utility is natural, because dollars, costs, and revenue are already the language of your business partners.


