Description
Bandit for the evaluation of policies with offline data and modeled (regression-based) rewards per arm (the "direct method").
Usage

bandit <- OfflineDirectMethodBandit$new(formula, data, k = NULL, d = NULL,
                                        unique = NULL, shared = NULL,
                                        randomize = TRUE)
Arguments

formula
formula (required).
Format: y.context ~ z.choice | x1.context + x2.context + ... | r1.reward + r2.reward + ...
Here, r1.reward to rk.reward represent regression-based precalculated rewards per arm.
Adds an intercept to the context model by default. Exclude the intercept by adding "0" or "-1"
to the list of contextual features, as in: y.context ~ z.choice | x1.context + x2.context - 1
(see the sketch following this argument list).
data
data.table or data.frame; offline data source (required)
k
integer; number of arms (optional). Optionally used to reformat the formula-defined x.context
vector as a k x d matrix (see the sketch following this argument list). When making use of such
matrix-formatted contexts, you need to define custom intercept(s) when and where needed in the
data.table or data.frame.

d
integer; number of contextual features (optional). Optionally used to reformat the formula-defined
x.context vector as a k x d matrix. When making use of such matrix-formatted contexts, you need to
define custom intercept(s) when and where needed in the data.table or data.frame.
randomize
logical; randomize rows of data stream per simulation (optional, default: TRUE)
replacement
logical; sample with replacement (optional, default: FALSE)
jitter
logical; add jitter to contextual features (optional, default: FALSE)
unique
integer vector; index of disjoint features (optional)
shared
integer vector; index of shared features (optional)
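
For illustration, a minimal sketch of formula construction and of the optional matrix-context
arguments. The column names (y, z, x1..x4, R1, R2), the offline data.table dt, and the index
values are assumptions for this sketch, not requirements of the package:

library(contextual)
# Hypothetical columns: y = outcome, z = arm choice, x1..x4 = contextual
# features, R1/R2 = regression-based precalculated rewards per arm.
f  <- y ~ z | x1 + x2 + x3 + x4 | R1 + R2        # with intercept (the default)
f0 <- y ~ z | x1 + x2 + x3 + x4 - 1 | R1 + R2    # context model without intercept
# Optional matrix-formatted contexts: two arms, four features, with features
# 1 and 2 shared across arms and features 3 and 4 disjoint (arm-specific).
bandit <- OfflineDirectMethodBandit$new(formula = f, data = dt, k = 2, d = 4,
                                        unique = c(3, 4), shared = c(1, 2))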
Methods

new(formula, data, k = NULL, d = NULL, unique = NULL, shared = NULL, randomize = TRUE)
Generates and instantiates a new OfflineDirectMethodBandit instance.
get_context(t)
argument: t: integer, time step t.
returns a named list containing the current d x k dimensional matrix context$X, the number of
arms context$k, and the number of features context$d.
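
For illustration, a minimal sketch of querying the bandit at the first time step, assuming a
bandit instantiated as in the Examples section below (the actual values depend on the offline
data):

context <- bandit$get_context(t = 1)
context$X    # d x k context matrix for this time step
context$k    # number of arms
context$d    # number of contextual features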
get_reward(t, context, action)
arguments:
t: integer, time step t.
context: list, containing the current context$X (d x k context matrix), context$k (number of
arms) and context$d (number of context features) (as set by bandit).
action: list, containing action$choice (as set by policy).
returns a named list containing reward$reward and, where computable, reward$optimal (used by
"oracle" policies and to calculate regret).
post_initialization()
Randomizes the offline data by shuffling the offline data.table before the start of each
individual simulation, when self$randomize is TRUE (the default).
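
A minimal sketch of this shuffling step on a stand-in data.table (the columns here are
illustrative only):

library(data.table)
dt <- data.table(x = 1:5, r = c(0, 1, 1, 0, 1))
dt <- dt[sample(nrow(dt))]   # random row order, analogous to post_initialization()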
References

Agarwal, Alekh, et al. "Taming the monster: A fast and simple algorithm for contextual bandits." International Conference on Machine Learning. 2014.
See Also

Core contextual classes: Bandit, Policy, Simulator, Agent, History, Plot

Bandit subclass examples: BasicBernoulliBandit, ContextualLogitBandit, OfflineDirectMethodBandit

Policy subclass examples: EpsilonGreedyPolicy, ContextualLinTSPolicy
Examples

## Not run:
library(contextual)
library(data.table)
# Import myocardial infection dataset
url <- "http://d1ie9wlkzugsxr.cloudfront.net/data_propensity/myocardial_propensity.csv"
data <- fread(url)
simulations <- 50
horizon <- nrow(data)
# arms always start at 1
data$trt <- data$trt + 1
# turn death into alive, making it a reward
data$alive <- abs(data$death - 1)
# Run regression per arm, predict outcomes, and save results, a column per arm
f <- alive ~ age + male + risk + severity
model_f <- function(arm) glm(f, data=data[trt==arm],
family=binomial(link="logit"),
y=FALSE, model=FALSE)
arms <- sort(unique(data$trt))
model_arms <- lapply(arms, FUN = model_f)
predict_arm <- function(model) predict(model, data, type = "response")
r_data <- lapply(model_arms, FUN = predict_arm)
r_data <- do.call(cbind, r_data)
colnames(r_data) <- paste0("R", (1:max(arms)))
# Bind data and model predictions
data <- cbind(data, r_data)
# Define Bandit
f <- alive ~ trt | age + male + risk + severity | R1 + R2 # y ~ z | x | r
bandit <- OfflineDirectMethodBandit$new(formula = f, data = data)
# Define agents.
agents <- list(Agent$new(LinUCBDisjointOptimizedPolicy$new(0.2), bandit, "LinUCB"),
Agent$new(FixedPolicy$new(1), bandit, "Arm1"),
Agent$new(FixedPolicy$new(2), bandit, "Arm2"))
# Initialize the simulation.
simulation <- Simulator$new(agents = agents, simulations = simulations, horizon = horizon)
# Run the simulation.
sim <- simulation$run()
# Plot the results.
plot(sim, type = "cumulative", regret = FALSE, rate = TRUE, legend_position = "bottomright")
plot(sim, type = "arms", limit_agents = "LinUCB", legend_position = "topright")
## End(Not run)