uplift: Evaluate uplift for different (binary) classification models

uplift {radiant.model}    R Documentation

Evaluate uplift for different (binary) classification models

Description

Evaluate uplift for different (binary) classification models

Usage

uplift(
  dataset,
  pred,
  rvar,
  lev = "",
  tvar,
  tlev = "",
  qnt = 10,
  cost = 1,
  margin = 2,
  scale = 1,
  train = "All",
  data_filter = "",
  arr = "",
  rows = NULL,
  envir = parent.frame()
)

Arguments

dataset

Dataset

pred

Predictions or predictors

rvar

Response variable

lev

The level in the response variable defined as success

tvar

Treatment variable

tlev

The level in the treatment variable defined as the treatment

qnt

Number of bins to create

cost

Cost for each connection (e.g., email or mailing)

margin

Margin on each customer purchase

scale

Scaling factor to apply to calculations

train

Use data from training ("Training"), test ("Test"), both ("Both"), or all data ("All") to evaluate model performance

data_filter

Expression entered in, e.g., Data > View to filter the dataset in Radiant. The expression should be a string (e.g., "price > 10000")

arr

Expression to arrange (sort) the data on (e.g., "color, desc(price)")

rows

Rows to select from the specified dataset

envir

Environment to extract data from

Details

Evaluate uplift for different (binary) classification models based on predictions. See https://radiant-rstats.github.io/docs/model/evalbin.html for an example in Radiant.

Value

A list of results

See Also

summary.evalbin to summarize results

plot.evalbin to plot results

Examples

data.frame(buy = dvd$buy, pred1 = runif(20000), pred2 = ifelse(dvd$buy == "yes", 1, 0)) %>%
  evalbin(c("pred1", "pred2"), "buy") %>%
  str()
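The example above exercises evalbin. A comparable call to uplift itself, following the Usage signature, could look like the sketch below; the simulated data frame, its column names, and the chosen levels are assumptions for illustration, not taken from the package documentation.

```r
## Hypothetical sketch: simulated data; all column names and levels are
## assumptions, not part of the radiant.model documentation
library(radiant.model)

set.seed(1)
n <- 1000
sim <- data.frame(
  buy   = sample(c("yes", "no"), n, replace = TRUE),  # binary response (rvar)
  treat = sample(c("ad", "ctrl"), n, replace = TRUE), # treatment indicator (tvar)
  pred1 = runif(n)                                    # model predictions (pred)
)

res <- uplift(
  sim,
  pred = "pred1",        # predictions or predictors
  rvar = "buy",          # response variable
  lev = "yes",           # level defined as success
  tvar = "treat",        # treatment variable
  tlev = "ad",           # level defined as the treatment
  qnt = 10,              # number of bins
  cost = 1, margin = 2   # cost per contact and margin per purchase
)
summary(res)             # see also plot.evalbin to plot results
```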

radiant.model documentation built on Oct. 16, 2023, 9:06 a.m.