Samples data from linearly parameterized arms.
The reward for context X and arm j is given by X^T beta_j, for some latent set of parameters beta_j, j = 1, ..., k. The betas are sampled uniformly at random, the contexts are Gaussian, and Gaussian noise with standard deviation sigma is added to the rewards.
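As a rough illustration, the generative process described above can be sketched as follows; the variable names and sampling details (such as the range of the uniform draw) are assumptions for illustration, not the package's internal code:

# Hedged sketch of the generative model described above -- not the package's
# internal implementation. Assumes a d x k context matrix with one Gaussian
# feature column per arm, betas drawn uniformly at random, and Gaussian
# noise with standard deviation sigma added to the rewards.
k <- 5; d <- 5; sigma <- 0.1
betas <- matrix(runif(d * k), nrow = d, ncol = k)     # latent d x k parameter matrix
X     <- matrix(rnorm(d * k), nrow = d, ncol = k)     # Gaussian contexts, one column per arm
mean_rewards <- colSums(X * betas)                    # arm j: X[, j] %*% betas[, j]
rewards      <- mean_rewards + rnorm(k, sd = sigma)   # additive sigma-noise
optimal_arm  <- which.max(mean_rewards)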
bandit <- ContextualLinearBandit$new(k, d, sigma = 0.1, binary_rewards = FALSE)
k: integer; number of bandit arms.
d: integer; number of contextual features.
sigma: numeric; standard deviation of the additive noise. Set to zero for no noise. Default is 0.1.
binary_rewards: logical; when set to FALSE (the default), ContextualLinearBandit generates Gaussian rewards. When set to TRUE, rewards are binary (0/1).
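For example (argument values are illustrative), a noise-free bandit producing binary rewards could be constructed as:

# illustrative: five arms, ten contextual features, no additive noise, binary rewards
bandit <- ContextualLinearBandit$new(k = 5, d = 10, sigma = 0, binary_rewards = TRUE)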
new(k, d, sigma = 0.1, binary_rewards = FALSE) generates and instantiates a new ContextualLinearBandit instance.
get_context(t)
argument:
t: integer, time step t.
returns a named list containing the current d x k dimensional matrix context$X, the number of arms context$k, and the number of features context$d.
get_reward(t, context, action)
arguments:
t: integer, time step t.
context: list containing the current context$X (d x k context matrix), context$k (number of arms), and context$d (number of context features), as set by the bandit.
action: list containing action$choice (as set by the policy).
returns a named list containing reward$reward and, where computable, reward$optimal (used by "oracle" policies and to calculate regret). See the usage sketch following the method list.
post_initialization() initializes the d x k beta matrix.
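A minimal sketch of how these methods fit together is shown below; in practice Simulator and Agent make these calls for you, and the random arm choice merely stands in for a policy's action$choice:

library(contextual)

# Hedged, manual interaction loop with the bandit's public methods.
bandit <- ContextualLinearBandit$new(k = 5, d = 5)
bandit$post_initialization()                      # draws the latent d x k beta matrix

for (t in 1:10) {
  context <- bandit$get_context(t)                # list with context$X, context$k, context$d
  action  <- list(choice = sample(context$k, 1))  # a policy would set action$choice here
  reward  <- bandit$get_reward(t, context, action)
  cat(t, "arm", action$choice, "reward", reward$reward, "\n")
}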
Riquelme, C., Tucker, G., & Snoek, J. (2018). Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling. arXiv preprint arXiv:1802.09127.
Implementation follows https://github.com/tensorflow/models/tree/master/research/deep_contextual_bandits
Core contextual classes: Bandit, Policy, Simulator,
Agent, History, Plot
Bandit subclass examples: BasicBernoulliBandit, ContextualLogitBandit,
OfflineReplayEvaluatorBandit
Policy subclass examples: EpsilonGreedyPolicy, ContextualLinTSPolicy
## Not run: 
horizon       <- 800L
simulations   <- 30L
bandit        <- ContextualLinearBandit$new(k = 5, d = 5)
agents        <- list(Agent$new(EpsilonGreedyPolicy$new(0.1), bandit),
                      Agent$new(LinUCBDisjointOptimizedPolicy$new(0.6), bandit))
simulation    <- Simulator$new(agents, horizon, simulations)
history       <- simulation$run()
plot(history, type = "cumulative", regret = FALSE, rate = TRUE, legend_position = "right")
## End(Not run)