Description
Contextual Bernoulli multi-armed bandit where at least one context feature is active at a time.
Usage

bandit <- ContextualBinaryBandit$new(weights)
Arguments

weights
    numeric matrix; a d x k matrix with probabilities of reward for d contextual features per k arms.
Methods

new(weights)
    Generates and initializes a new ContextualBinaryBandit instance.

get_context(t)
    argument:
        t: integer, time step t.
    returns a named list containing the current d x k dimensional context matrix context$X, the number of arms context$k, and the number of features context$d.

get_reward(t, context, action)
    arguments:
        t: integer, time step t.
        context: list containing the current context$X (d x k context matrix), context$k (number of arms), and context$d (number of context features), as set by the bandit.
        action: list containing action$choice, as set by the policy.
    returns a named list containing reward$reward and, where computable, reward$optimal (used by "oracle" policies and to calculate regret).
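These methods are normally called by a Simulator through an Agent, but they can also be invoked directly for inspection. A minimal sketch, assuming the bandit needs no initialization beyond new(), with a random arm choice standing in for a policy:

library(contextual)

weights <- matrix(c(0.4, 0.2, 0.4,
                    0.3, 0.4, 0.3,
                    0.1, 0.8, 0.1),
                  nrow = 3, ncol = 3, byrow = TRUE)
bandit  <- ContextualBinaryBandit$new(weights = weights)

# Context for time step t = 1: a list with X (d x k matrix), k and d.
context <- bandit$get_context(t = 1)

# A policy would normally pick the arm; here we choose one at random.
action  <- list(choice = sample(context$k, 1))

# Binary reward for the chosen arm; reward$optimal is included where computable.
reward  <- bandit$get_reward(1, context, action)
reward$reward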
See Also

Core contextual classes: Bandit, Policy, Simulator, Agent, History, Plot

Bandit subclass examples: ContextualBinaryBandit, ContextualLogitBandit, OfflineReplayEvaluatorBandit

Policy subclass examples: EpsilonGreedyPolicy, ContextualLinTSPolicy
Examples

## Not run: 
library(contextual)

horizon <- 100
sims    <- 100

policy  <- LinUCBDisjointOptimizedPolicy$new(alpha = 0.9)

# d x k weight matrix: rows are context features, columns are arms;
# each entry is a probability of reward.
weights <- matrix(c(0.4, 0.2, 0.4,
                    0.3, 0.4, 0.3,
                    0.1, 0.8, 0.1),
                  nrow = 3, ncol = 3, byrow = TRUE)

bandit  <- ContextualBinaryBandit$new(weights = weights)
agent   <- Agent$new(policy, bandit)
history <- Simulator$new(agent, horizon, sims)$run()

plot(history, type = "cumulative", regret = TRUE)

## End(Not run)
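Simulator also accepts a list of agents, which makes it straightforward to compare several policies on the same bandit. A minimal sketch along the lines of the example above (the second policy and its epsilon value are illustrative choices, not part of the original example):

library(contextual)

weights <- matrix(c(0.4, 0.2, 0.4,
                    0.3, 0.4, 0.3,
                    0.1, 0.8, 0.1),
                  nrow = 3, ncol = 3, byrow = TRUE)
bandit  <- ContextualBinaryBandit$new(weights = weights)

# One agent per policy, all playing the same bandit.
agents  <- list(
  Agent$new(EpsilonGreedyPolicy$new(epsilon = 0.1), bandit),
  Agent$new(LinUCBDisjointOptimizedPolicy$new(alpha = 0.9), bandit)
)

history <- Simulator$new(agents, horizon = 100, simulations = 100)$run()
plot(history, type = "cumulative", regret = TRUE)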