Bandit: ContextualPrecachingBandit


Description

Illustrates precaching of contexts and rewards.

Details

TODO: Fix "attempt to select more than one element in integerOneIndex"

Contextual extension of BasicBernoulliBandit, where a user-specified d x k dimensional matrix takes the place of BasicBernoulliBandit's k dimensional probability vector. Here, each of the d rows represents a feature with k reward probability values, one per arm.

For every time step t, ContextualPrecachingBandit samples a random subset of its d features/rows, yielding a binary d x k context matrix in which sampled features are all-ones rows and unsampled features are all-zeros rows. ContextualPrecachingBandit then generates rewards based on either the sum or the mean (the default) of each arm's/column's probabilities over all sampled features/rows.
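This per-step logic can be sketched in a few lines of base R. The sketch below is only an illustration of the scheme described above, not the package's internal implementation; it assumes the mean-based default and that at least one feature is always sampled:

set.seed(42)
weights <- matrix(c(0.4, 0.2, 0.4,
                    0.3, 0.4, 0.3,
                    0.1, 0.8, 0.1), nrow = 3, byrow = TRUE)
d <- nrow(weights)   # number of features
k <- ncol(weights)   # number of arms

# Sample a random subset of features; sampled rows become all-ones rows.
sampled <- rbinom(d, 1, 0.5)
if (sum(sampled) == 0) sampled[sample(d, 1)] <- 1L   # assumption: at least one sampled row
X <- matrix(sampled, nrow = d, ncol = k)             # binary d x k context matrix

# Mean reward probability per arm over the sampled rows (the default scheme).
p <- colMeans(weights[sampled == 1, , drop = FALSE])

# Bernoulli reward per arm for this time step.
rewards <- as.integer(runif(k) < p)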

Usage

ContextualPrecachingBandit$new(weights)

Arguments

weights

numeric matrix; a d x k dimensional matrix where each of the d rows represents a feature with k reward probability values, one per arm.

Methods

new(weights)

creates and initializes a new ContextualPrecachingBandit instance.

get_context(t)

argument:

  • t: integer, time step t.

returns a named list containing the current d x k dimensional matrix context$X, the number of arms context$k and the number of features context$d.

get_reward(t, context, action)

arguments:

  • t: integer, time step t.

  • context: list containing the current context$X (d x k context matrix), context$k (number of arms), and context$d (number of context features), as set by the bandit.

  • action: list containing action$choice, as set by the policy.

returns a named list containing reward$reward and, where computable, reward$optimal (used by "oracle" policies and to calculate regret).

generate_bandit_data()

helper function called before the Simulator starts iterating over all time steps t in T; pre-generates all contexts and rewards.
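These methods are normally invoked by a Simulator, but they can also be driven directly. The following is a minimal sketch, assuming the contextual package is installed and that generate_bandit_data() can be called manually before contexts and rewards are queried:

library(contextual)

weights <- matrix(c(0.4, 0.2, 0.4,
                    0.3, 0.4, 0.3,
                    0.1, 0.8, 0.1), nrow = 3, byrow = TRUE)

bandit  <- ContextualPrecachingBandit$new(weights)
bandit$generate_bandit_data()            # precache contexts and rewards (normally done by the Simulator)

context <- bandit$get_context(t = 1L)    # named list: context$X (d x k matrix), context$k, context$d
action  <- list(choice = 1L)             # arm choice, normally set by a Policy
reward  <- bandit$get_reward(t = 1L, context, action)
reward$reward                            # Bernoulli reward for the chosen arm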

See Also

Core contextual classes: Bandit, Policy, Simulator, Agent, History, Plot

Bandit subclass examples: BasicBernoulliBandit, ContextualLogitBandit, OfflineReplayEvaluatorBandit

Policy subclass examples: EpsilonGreedyPolicy, ContextualLinTSPolicy

Examples

## Not run: 

horizon            <- 100L
simulations        <- 100L

# rows represent features, columns represent arms:

context_weights    <- matrix(  c(0.4, 0.2, 0.4,
                                 0.3, 0.4, 0.3,
                                 0.1, 0.8, 0.1),  nrow = 3, ncol = 3, byrow = TRUE)

bandit             <- ContextualPrecachingBandit$new(context_weights)

agents             <- list( Agent$new(EpsilonGreedyPolicy$new(0.1), bandit),
                            Agent$new(LinUCBDisjointOptimizedPolicy$new(0.6), bandit))

simulation         <- Simulator$new(agents, horizon, simulations)
history            <- simulation$run()

plot(history, type = "cumulative")


## End(Not run)
