Description
A function-based continuum multi-armed bandit, where arms are chosen from a subset of the real line and the mean rewards are assumed to be a continuous function of the arms.
Usage

bandit <- ContinuumBandit$new(FUN)
Arguments

FUN: continuous function.
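As a minimal sketch, assuming FUN may be any R function that maps a numeric arm value to a numeric reward (the noisy sine curve and the name noisy_sine below are hypothetical choices for illustration):

# Hypothetical reward function over the real line (assumption: any such R function is accepted as FUN).
noisy_sine <- function(x) sin(x) + rnorm(length(x), 0, 0.1)
bandit     <- ContinuumBandit$new(FUN = noisy_sine)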
Methods

new(FUN) generates and instantiates a new ContinuumBandit instance.
get_context(t)
argument:
t: integer, time step t.
returns a named list containing the current d x k dimensional matrix context$X, the number of arms context$k, and the number of features context$d.
get_reward(t, context, action)
arguments:
t: integer, time step t.
context: list, containing the current context$X (d x k context matrix), context$k (number of arms) and context$d (number of context features) (as set by bandit).
action: list, containing action$choice (as set by policy).
returns a named list containing reward$reward and, where computable, reward$optimal (used by "oracle" policies and to calculate regret).
A minimal sketch of calling these methods directly follows below.
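To make the calling conventions above concrete, here is a minimal sketch, assuming the contextual package is loaded. The mean function f and the arm value 4.2 are hypothetical choices; in normal use these methods are called by an Agent on behalf of a Policy rather than directly:

library(contextual)

# Hypothetical deterministic mean-reward function (assumption, for illustration only).
f      <- function(x) -0.1 * (x - 5)^2 + 3.5
bandit <- ContinuumBandit$new(FUN = f)

context <- bandit$get_context(t = 1)                # named list describing the context, as above
action  <- list(choice = 4.2)                       # an arm from the real line, as a policy would set it
reward  <- bandit$get_reward(1, context, action)    # named list with reward$reward (and, where
                                                    # computable, reward$optimal)
reward$reward                                       # should equal f(4.2) here, since f is deterministic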
See Also

Core contextual classes: Bandit, Policy, Simulator, Agent, History, Plot
Bandit subclass examples: BasicBernoulliBandit, ContextualLogitBandit,
OfflineReplayEvaluatorBandit
Policy subclass examples: EpsilonGreedyPolicy, ContextualLinTSPolicy
Examples

## Not run:

library(contextual)

horizon     <- 1500
simulations <- 100

# Continuous mean-reward function over the real line: a noisy inverted parabola
# peaking at x = 5.
continuous_arms <- function(x) {
  -0.1 * (x - 5)^2 + 3.5 + rnorm(length(x), 0, 0.4)
}

# LifPolicy parameters.
int_time   <- 100
amplitude  <- 0.2
learn_rate <- 0.3
omega      <- 2 * pi / int_time
x0_start   <- 2.0

policy <- LifPolicy$new(int_time, amplitude, learn_rate, omega, x0_start)
bandit <- ContinuumBandit$new(FUN = continuous_arms)
agent  <- Agent$new(policy, bandit)

history <- Simulator$new(agents      = agent,
                         horizon     = horizon,
                         simulations = simulations,
                         save_theta  = TRUE)$run()

plot(history, type = "average", regret = FALSE)

## End(Not run)