Description
Samples data from a basic logistic regression model.
Details

ContextualLogitBandit generates linear predictors as the dot product of a random d dimensional normal weight vector and uniform random d x k dimensional context matrices, with equal weights for every arm. Each linear predictor is then inverse-logit transformed into a probability, and a k dimensional binary (0/1) reward vector is generated by sampling from Bernoulli distributions with those probabilities.
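To make this generative process concrete, here is a minimal base-R sketch; it is illustrative only, based on the description above, and is not the package's internal implementation:

# Sketch of the generative process (illustrative, not package internals).
set.seed(42)
d <- 3                                    # number of contextual features
k <- 4                                    # number of arms
beta    <- matrix(rnorm(d), d, k)         # normal weights, equal per arm (d x k)
X       <- matrix(runif(d * k), d, k)     # uniform random d x k context matrix
p       <- plogis(colSums(X * beta))      # dot products, inverse-logit transformed
rewards <- rbinom(k, size = 1, prob = p)  # k dimensional binary (0/1) reward vector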
Usage

bandit <- ContextualLogitBandit$new(k, d, intercept = TRUE)
Arguments

k
    integer; number of bandit arms

d
    integer; number of contextual features

intercept
    logical; if TRUE (the default), a constant (1.0) feature is appended to the end of each context X.
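For illustration, two hedged construction examples (the argument values here are arbitrary):

library(contextual)
# Five arms and five features, with the default constant (1.0) intercept feature:
bandit_with    <- ContextualLogitBandit$new(k = 5, d = 5, intercept = TRUE)
# The same configuration without the appended intercept feature:
bandit_without <- ContextualLogitBandit$new(k = 5, d = 5, intercept = FALSE)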
Methods

new(k, d, intercept = TRUE)
    Generates and initializes a new ContextualLogitBandit instance.

get_context(t)
    argument:
      t: integer, time step t.
    returns a named list containing the current d x k dimensional matrix
    context$X, the number of arms context$k, and the number of features
    context$d (see the usage sketch after this list).

get_reward(t, context, action)
    arguments:
      t: integer, time step t.
      context: list containing the current context$X (d x k context matrix),
        context$k (number of arms) and context$d (number of context features),
        as set by the bandit.
      action: list containing action$choice, as set by the policy.
    returns a named list containing reward$reward and, where computable,
    reward$optimal (used by "oracle" policies and to calculate regret).

post_initialization()
    Initializes the d x k beta matrix.
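Below is a minimal, hedged sketch of calling these methods directly; in normal use the Simulator drives this loop, and the parameter values shown are illustrative:

# Direct method calls, for illustration only; Simulator normally does this.
library(contextual)
bandit <- ContextualLogitBandit$new(k = 4, d = 3)
bandit$post_initialization()            # draws the d x k beta weight matrix
context <- bandit$get_context(t = 1)    # list with context$X, context$k, context$d
str(context)
action <- list(choice = 2)              # normally set by a policy
reward <- bandit$get_reward(t = 1, context, action)
reward$reward                           # 0/1 reward for the chosen arm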
See Also

Core contextual classes: Bandit, Policy, Simulator, Agent, History, Plot

Bandit subclass examples: BasicBernoulliBandit, ContextualLogitBandit, OfflineReplayEvaluatorBandit

Policy subclass examples: EpsilonGreedyPolicy, ContextualLinTSPolicy
Examples

## Not run:
library(contextual)

horizon     <- 800L
simulations <- 30L

bandit <- ContextualLogitBandit$new(k = 5, d = 5, intercept = TRUE)

agents <- list(Agent$new(ContextualLinTSPolicy$new(0.1), bandit),
               Agent$new(EpsilonGreedyPolicy$new(0.1), bandit),
               Agent$new(LinUCBGeneralPolicy$new(0.6), bandit),
               Agent$new(ContextualEpochGreedyPolicy$new(8), bandit),
               Agent$new(LinUCBHybridOptimizedPolicy$new(0.6), bandit),
               Agent$new(LinUCBDisjointOptimizedPolicy$new(0.6), bandit))

simulation <- Simulator$new(agents, horizon, simulations)
history    <- simulation$run()

plot(history, type = "cumulative", regret = FALSE,
     rate = TRUE, legend_position = "right")

## End(Not run)