LinUCB with unique linear models

Based on Algorithm 1 of "A Contextual-Bandit Approach to Personalized News Article Recommendation" by Lihong Li et al. (2010).
At each time step t, LinUCBGeneralPolicy runs a linear regression per arm that produces coefficients for each context feature.
It then observes the new context and generates a predicted payoff (reward) together with a confidence interval for each available arm.
Finally, it chooses the arm with the highest upper confidence bound.
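The selection step above can be sketched as follows. This is not the package's R implementation, but a minimal Python illustration of the same idea: per arm a, the ridge-regression coefficients are theta = A^-1 b, and the arm with the highest upper confidence bound theta'x + alpha * sqrt(x' A^-1 x) is chosen (variable names here are illustrative):

```python
import numpy as np

def choose_arm(x, A, b, alpha=1.0):
    """Pick the arm with the highest upper confidence bound.

    x     : context feature vector of length d
    A, b  : lists with one (d x d) matrix / length-d vector per arm
    alpha : hyper-parameter weighting the exploration bonus
    """
    ucbs = []
    for A_a, b_a in zip(A, b):
        A_inv = np.linalg.inv(A_a)
        theta = A_inv @ b_a                     # per-arm regression coefficients
        width = alpha * np.sqrt(x @ A_inv @ x)  # confidence-interval half-width
        ucbs.append(theta @ x + width)
    return int(np.argmax(ucbs))
```

With freshly initialized arms all upper confidence bounds are equal, so early choices are driven purely by the exploration term.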
alpha: double, a positive real number in R+; hyper-parameter adjusting the balance between exploration and exploitation.
name: character string specifying this policy. The name is, among others, saved to the History log and displayed in summaries and plots.
A: a d*d identity matrix
b: a zero vector of length d
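These per-arm parameters (A initialized to the d x d identity matrix, b to a zero vector of length d) could be set up as in this sketch (Python for illustration only; the function name is an assumption, not part of the package):

```python
import numpy as np

def init_arm(d):
    """Fresh LinUCB state for one arm: A = I_d, b = 0_d.

    With this initialization, theta = A^-1 b is the zero vector,
    so every arm starts with an identical payoff estimate.
    """
    return np.eye(d), np.zeros(d)
```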
new(alpha = 1): generates a new LinUCBGeneralPolicy object. Arguments are defined in the Arguments section above.
set_parameters(): each policy needs to assign the parameters it wants to keep track of to the list self$theta_to_arms, which has to be defined in set_parameters(). The parameters defined here can later be accessed by arm index in the following way: theta[[index_of_arm]]$parameter_name
get_action(context): here, a policy decides which arm to choose, based on the current values of its parameters and, potentially, the current context.
set_reward(reward, context): a policy updates its parameter values based on the reward received and, potentially, the current context.
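The update performed at this step amounts to a rank-one update of the chosen arm's state: A_a <- A_a + x x^T and b_a <- b_a + reward * x. A minimal Python sketch of that update (again illustrative, not the package's R code):

```python
import numpy as np

def update_arm(arm, x, reward, A, b):
    """LinUCB reward update for the chosen arm, in place:
    A[arm] <- A[arm] + x x^T,  b[arm] <- b[arm] + reward * x.
    """
    A[arm] += np.outer(x, x)
    b[arm] += reward * x
```

After the update, the arm's coefficient estimate theta = A^-1 b shifts toward contexts that earned reward, while its confidence interval for similar contexts shrinks.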
Li, L., Chu, W., Langford, J., & Schapire, R. E. (2010, April). A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web (pp. 661-670). ACM.