mdp_historical: mdp_historical


Description

Compare a historical sequence of states and actions against the MDP-optimal policy, and update the belief over candidate transition models from those observations.

Usage

mdp_historical(transition, reward, discount, model_prior = NULL, state,
  action, model_names = NA, ...)

Arguments

transition

list of transition matrices, one per model

reward

the utility matrix U(x,a) of being at state x and taking action a

discount

the discount factor (1 is no discounting)

model_prior

the prior belief over models, a numeric vector of length(transition). Uniform by default.

state

sequence of states observed historically

action

sequence of historical actions taken at time of observing that state

model_names

optional vector of names for the columns of the model posterior distribution. If not provided, names are taken from the names of the transition list.

...

additional arguments to mdp_compute_policy

Value

a list with two components: "df", a data.frame giving the historical state, the historical action, and the action that would have been optimal under the MDP; and a data.frame giving the evolution of the belief over models after each observation.
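The call below sketches typical usage. The transition arrays, reward matrix, state/action sequences, and model names are hypothetical values invented for illustration; the dimensions follow the argument descriptions above (each model's transition array is n_states x n_states x n_actions, and the reward is the utility matrix U(x, a)). The name of the second returned component is an assumption; check names(out) in your installed version.

```r
library(mdplearning)

n_s <- 2  # number of states
n_a <- 2  # number of actions

## Two candidate models (hypothetical numbers, for illustration only):
## model 1 assumes the state tends to persist, model 2 that it tends to flip
m1 <- array(c(0.9, 0.2, 0.1, 0.8,   # action 1
              0.7, 0.4, 0.3, 0.6),  # action 2
            dim = c(n_s, n_s, n_a))
m2 <- array(c(0.2, 0.9, 0.8, 0.1,
              0.4, 0.7, 0.6, 0.3),
            dim = c(n_s, n_s, n_a))

## Utility U(x, a) of being in state x and taking action a (hypothetical)
reward <- matrix(c(1, 0,
                   0, 1), n_s, n_a, byrow = TRUE)

out <- mdp_historical(transition = list(m1, m2),
                      reward = reward,
                      discount = 0.95,
                      state  = c(1, 1, 2, 2),  # historically observed states
                      action = c(1, 2, 1, 2),  # actions taken at those states
                      model_names = c("persist", "flip"))

out$df         # historical vs. MDP-optimal actions
out$posterior  # belief over models at each observation (component name assumed)
```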


cboettig/mdplearning documentation built on May 13, 2019, 2:08 p.m.