mdp_compute_policy: compute mdp policy


Description

Compute the optimal policy for a Markov decision process (MDP), optionally averaging over several candidate transition models according to a prior belief (model_prior).

Usage

mdp_compute_policy(transition, reward, discount, model_prior = NULL,
  max_iter = 500, epsilon = 1e-05, Tmax = max_iter,
  type = c("value iteration", "policy iteration", "finite time"))

Arguments

transition

list of transition matrices, one per candidate model (see the sketch following this argument list)

reward

the utility matrix U(x, a), giving the reward for being in state x and taking action a

discount

the discount factor (1 is no discounting)

model_prior

the prior belief over models: a numeric vector whose length equals the number of transition models. Uniform by default

max_iter

maximum number of iterations to perform

epsilon

convergence tolerance

Tmax

termination time for the finite-time calculation; ignored otherwise

type

the solution method: "value iteration" is considered converged when the value function converges, "policy iteration" when the policy converges, and "finite time" computes the policy over a fixed horizon of Tmax steps
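
Below is a minimal sketch of how these arguments fit together, using a toy two-state, two-action problem with two candidate models. The values are illustrative only, and the [state, state, action] layout assumed for each transition array is inferred from the argument descriptions rather than confirmed by the package.

library(mdplearning)   # assumes the package is installed

n_s <- 2; n_a <- 2     # toy problem: two states, two actions

# One transition array per candidate model; element [i, j, a] is assumed to be
# the probability of moving from state i to state j under action a
T1 <- array(0, dim = c(n_s, n_s, n_a))
T1[, , 1] <- matrix(c(0.9, 0.1,
                      0.2, 0.8), nrow = n_s, byrow = TRUE)
T1[, , 2] <- matrix(c(0.5, 0.5,
                      0.5, 0.5), nrow = n_s, byrow = TRUE)
T2 <- T1
T2[, , 1] <- matrix(c(0.7, 0.3,
                      0.4, 0.6), nrow = n_s, byrow = TRUE)
transition <- list(T1, T2)

# Utility U(x, a) of being in state x and taking action a
reward <- matrix(c(1.0, 0.5,
                   0.2, 1.0), nrow = n_s, byrow = TRUE)

df <- mdp_compute_policy(transition, reward, discount = 0.95,
                         model_prior = c(0.5, 0.5))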

Value

a data.frame with the optimal policy and (discounted) value associated with each state
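
Given a result df such as the one computed in the Examples section below, one might inspect it as follows (column names are inferred from that example and not guaranteed):

head(df)       # one row per state
df$policy      # optimal action chosen in each state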

Examples

source(system.file("examples/K_models.R", package="mdplearning"))  # load example models (expected to also define `discount`)
transition <- lapply(models, `[[`, "transition")   # one transition array per candidate model
reward <- models[[1]][["reward"]]                  # utility matrix U(x, a) from the first model
df <- mdp_compute_policy(transition, reward, discount)   # uniform prior over models by default
plot(df$state, df$state - df$policy, xlab = "stock", ylab="escapement")
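
As a further sketch reusing the objects created above, model_prior can put all weight on a single model, so that the resulting single-model policy can be compared with the model-averaged one:

prior_first <- rep(0, length(transition))   # all prior weight on model 1
prior_first[1] <- 1
df_first <- mdp_compute_policy(transition, reward, discount,
                               model_prior = prior_first)
plot(df$state, df$state - df$policy, type = "l",
     xlab = "stock", ylab = "escapement")
lines(df_first$state, df_first$state - df_first$policy, lty = 2)   # model 1 only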
