Description

CBT and EMp_CBT simulate the infinite arms bandit with Bernoulli rewards. CBT assumes the prior distribution is known, whereas EMp_CBT does not. Ana_CBT performs the analysis on real data.
Arguments

n       total number of rewards.

prior   prior distribution of the mean reward. Currently available priors: "Uniform", "Sine" and "Cosine".

bn      should increase slowly to infinity with n.

cn      should increase slowly to infinity with n.

data    a matrix or data frame; each column is a population.
Details

If bn or cn is not specified, it defaults to log(log(n)).
The confidence bound for an arm with t observations is

L = max( xbar/bn, xbar - cn*sigma/sqrt(t) ),

where xbar and sigma are the mean and standard deviation of the rewards from that particular arm.
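The bound can be transcribed directly. The helper below is a minimal sketch for illustration, not the package's internal code; the function name conf_bound is hypothetical.

```r
# Hypothetical helper: confidence bound for one arm, given its observed
# rewards x (a direct transcription of the formula above).
conf_bound <- function(x, bn, cn) {
  t     <- length(x)
  xbar  <- mean(x)
  sigma <- if (t > 1) sd(x) else 0   # sample standard deviation; 0 if t = 1
  max(xbar / bn, xbar - cn * sigma / sqrt(t))
}

# Example: 10 Bernoulli rewards, with the default bn = cn = log(log(n))
n  <- 10000
bn <- cn <- log(log(n))
x  <- c(1, 0, 1, 1, 0, 1, 0, 0, 1, 1)
conf_bound(x, bn, cn)
```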
CBT is a non-recalling algorithm: an arm is played until its confidence bound L drops below the target mean μ_*, and it is not played after that.
If the prior distribution is unknown, we apply empirical CBT, in which the target mean μ_* is replaced by S/n, where S is the sum of rewards over all arms played so far. Unlike CBT, however, empirical CBT is a recalling algorithm: at each stage it decides which of all previously played arms to play further, rather than considering only the current arm.
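The non-recalling rule above can be sketched as follows. This is an illustrative simulation under a Uniform prior, not the package's CBT() implementation; the function name cbt_sketch is hypothetical, the target mean mu_star is taken as a user-supplied input, and regret accounting is omitted.

```r
# Hypothetical sketch of the non-recalling rule described above:
# play an arm until its confidence bound L drops below mu_star,
# then abandon it permanently and draw a fresh arm.
cbt_sketch <- function(n, mu_star, bn = log(log(n)), cn = log(log(n))) {
  spent <- 0          # total rewards observed so far
  K     <- 0          # number of arms experimented on
  total_reward <- 0
  while (spent < n) {
    K <- K + 1
    p <- runif(1)     # arm mean drawn from a Uniform prior
    x <- numeric(0)
    repeat {
      x <- c(x, rbinom(1, 1, p))
      spent        <- spent + 1
      total_reward <- total_reward + x[length(x)]
      if (spent >= n) break
      t     <- length(x)
      xbar  <- mean(x)
      sigma <- if (t > 1) sd(x) else 0
      L <- max(xbar / bn, xbar - cn * sigma / sqrt(t))
      if (L < mu_star) break   # abandon this arm; it is never recalled
    }
  }
  list(K = K, total_reward = total_reward)
}

set.seed(1)
cbt_sketch(n = 10000, mu_star = 0.9)
```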
Value

A list with elements:

regret  cumulative regret generated by the n rewards.

K       total number of arms experimented on.
Author(s)

Hock Peng Chan and Shouri Hu
References

H.P. Chan and S. Hu (2018). Infinite arms bandit: optimality via confidence bounds. arXiv:1805.11793