evalbin: Evaluate the performance of different (binary) classification models

View source: R/evalbin.R

evalbin R Documentation

Evaluate the performance of different (binary) classification models

Description

Evaluate the performance of different (binary) classification models

Usage

evalbin(
  dataset,
  pred,
  rvar,
  lev = "",
  qnt = 10,
  cost = 1,
  margin = 2,
  scale = 1,
  train = "All",
  data_filter = "",
  arr = "",
  rows = NULL,
  envir = parent.frame()
)

Arguments

dataset

Dataset

pred

Predictions or predictors

rvar

Response variable

lev

The level in the response variable defined as success

qnt

Number of bins to create

cost

Cost for each connection (e.g., email or mailing)

margin

Margin on each customer purchase

scale

Scaling factor to apply to calculations

train

Use data from training ("Training"), test ("Test"), both ("Both"), or all data ("All") to evaluate the model

data_filter

Expression entered in, e.g., Data > View to filter the dataset in Radiant. The expression should be a string (e.g., "price > 10000")

arr

Expression to arrange (sort) the data on (e.g., "color, desc(price)")

rows

Rows to select from the specified dataset

envir

Environment to extract data from

Details

Evaluate different (binary) classification models based on predictions. See https://radiant-rstats.github.io/docs/model/evalbin.html for an example in Radiant

Value

A list of results

See Also

summary.evalbin to summarize results

plot.evalbin to plot results

Examples

## 'dvd' is a dataset included with the radiant.data package (20,000 rows)
data.frame(buy = dvd$buy, pred1 = runif(20000), pred2 = ifelse(dvd$buy == "yes", 1, 0)) %>%
  evalbin(c("pred1", "pred2"), "buy") %>%
  str()
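The returned list is typically passed on to summary.evalbin and plot.evalbin. A minimal sketch, assuming the radiant.model package is installed and using the same dvd data; the lev, qnt, cost, and margin values shown are illustrative:

```r
## sketch: evaluate a single prediction and inspect the results;
## argument values below are illustrative, not recommendations
library(radiant.model)

result <- data.frame(buy = dvd$buy, pred1 = runif(20000)) %>%
  evalbin(
    "pred1", "buy",
    lev = "yes",  # level of 'buy' treated as success
    qnt = 10,     # split predictions into 10 bins
    cost = 1,     # cost per contact (e.g., mailing)
    margin = 2    # margin per customer purchase
  )

summary(result)                          # per-bin lift, gains, and profit
plot(result, plots = c("lift", "gains")) # lift and cumulative gains charts
```

Because pred1 is random noise here, the lift and gains curves should hover around the no-information baseline; substituting a real model's predicted probabilities will show how far the model improves on that baseline.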

radiant-rstats/radiant.model documentation built on Nov. 29, 2023, 5:59 a.m.