solve_MDP: Solve an MDP Problem (R Documentation)
Implementation of value iteration, modified policy iteration, and temporal difference (reinforcement learning) methods to solve finite state space MDPs.
solve_MDP(model, method = "value", ...)

solve_MDP_DP(
  model,
  method = "value_iteration",
  horizon = NULL,
  discount = NULL,
  N_max = 1000,
  error = 0.01,
  k_backups = 10,
  U = NULL,
  verbose = FALSE
)

solve_MDP_TD(
  model,
  method = "q_learning",
  horizon = NULL,
  discount = NULL,
  alpha = 0.5,
  epsilon = 0.1,
  N = 100,
  U = NULL,
  verbose = FALSE
)
model: an MDP problem specification.
method: string; one of the following solution methods: "value_iteration", "policy_iteration", "q_learning", "sarsa", or "expected_sarsa".
...: further parameters are passed on to the solver function.
horizon: an integer with the number of epochs for problems with a finite planning horizon. If set to NULL, the horizon specified in the model will be used.
discount: discount factor in range (0, 1]. If set to NULL, the discount factor specified in the model will be used.
N_max: maximum number of iterations allowed to converge. If the maximum is reached, then the non-converged solution is returned with a warning.
error: value iteration: maximum error allowed in the utility of any state (i.e., the maximum policy loss) used as the termination criterion.
k_backups: policy iteration: number of look-ahead steps used for the approximate policy evaluation performed in each iteration.
U: a vector with initial utilities used for each state. If set to NULL, all utilities are initialized with 0.
verbose: logical; if set to TRUE, the solver reports progress and diagnostic information in the R console.
alpha: step size in (0, 1] (the learning rate).
epsilon: probability of taking a random action, used for the \epsilon-greedy behavior policy.
N: number of episodes used for learning.
Implemented are the following dynamic programming methods (following Russell and Norvig, 2021):
Modified Policy Iteration starts with a random policy and iteratively performs a sequence of approximate policy evaluation (estimating the value function for the current policy using k_backups backups with the function MDP_policy_evaluation()) and policy improvement (calculating a greedy policy given the value function). The algorithm stops when it converges to a stable policy (i.e., no changes between two consecutive iterations).
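The following is a minimal base-R sketch of this scheme on a hypothetical 2-state, 2-action MDP; the transition array P, reward matrix R, and all values below are invented for illustration and do not reflect the solver's internal implementation.

P <- array(c(0.8, 0.2, 0.2, 0.8,      # hypothetical transitions P[s, s', a]
             0.5, 0.1, 0.5, 0.9), dim = c(2, 2, 2))
R <- matrix(c(1, 0, 0, 2), nrow = 2)  # hypothetical rewards R[s, a]
gamma <- 0.9; k_backups <- 10
pi <- c(1, 1)                         # start with an arbitrary policy
U <- c(0, 0)
repeat {
  # approximate policy evaluation: k_backups backups under the current policy
  for (i in seq_len(k_backups))
    U <- sapply(1:2, function(s) R[s, pi[s]] + gamma * sum(P[s, , pi[s]] * U))
  # policy improvement: act greedily with respect to U
  Q <- sapply(1:2, function(a) R[, a] + gamma * P[, , a] %*% U)
  pi_new <- apply(Q, 1, which.max)
  if (all(pi_new == pi)) break        # stable policy: stop
  pi <- pi_new
}
pi                                    # resulting policy (action index per state)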
Value Iteration starts with an arbitrary value function (by default all 0s) and iteratively updates the value function for each state using the Bellman equation. The iterations are terminated either after N_max iterations or when the solution converges. Approximate convergence is achieved for discounted problems (with \gamma < 1) when the maximal value function change for any state, \delta, satisfies \delta \le error (1-\gamma) / \gamma. It can be shown that this guarantees that no state value is more than error away from its value in the optimal value function. For undiscounted problems, the criterion \delta \le error is used. The greedy policy is calculated from the final value function. Value iteration can be seen as policy iteration with truncated policy evaluation. Note that the policy converges earlier than the value function.
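As an illustration, here is a minimal base-R sketch of value iteration with this stopping criterion on a hypothetical 2-state, 2-action MDP; P, R, and all parameter values are invented and are not the representation used by the solver.

P <- array(c(0.8, 0.2, 0.2, 0.8,      # hypothetical transitions P[s, s', a]
             0.5, 0.1, 0.5, 0.9), dim = c(2, 2, 2))
R <- matrix(c(1, 0, 0, 2), nrow = 2)  # hypothetical rewards R[s, a]
gamma <- 0.9; error <- 0.01
U <- c(0, 0)                          # start with an all-zero value function
repeat {
  Q <- sapply(1:2, function(a) R[, a] + gamma * P[, , a] %*% U)
  U_new <- apply(Q, 1, max)           # Bellman update for every state
  delta <- max(abs(U_new - U))        # maximal value function change
  U <- U_new
  if (delta <= error * (1 - gamma) / gamma) break
}
U                                     # approximately optimal value function
Q <- sapply(1:2, function(a) R[, a] + gamma * P[, , a] %*% U)
apply(Q, 1, which.max)                # greedy policy from the final value function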
Implemented are the following temporal difference control methods described in Sutton and Barto (2020). Note that the MDP transition and reward models are only used to simulate the environment for these reinforcement learning methods. The algorithms use a step size parameter \alpha (learning rate) for the updates and the exploration parameter \epsilon for the \epsilon-greedy policy.
If the model has absorbing states to terminate episodes, then no maximal episode length (horizon) needs to be specified. To make sure that the algorithm finishes in a reasonable amount of time, episodes are stopped after 10,000 actions with a warning. For models without absorbing states, an episode length has to be specified via horizon.
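For illustration only, \epsilon-greedy action selection can be sketched in base R as follows; Q is assumed to be a state-by-action value matrix and the solver handles this internally.

epsilon_greedy_action <- function(Q, s, epsilon) {
  if (runif(1) < epsilon)
    sample.int(ncol(Q), 1)            # explore: pick a random action
  else
    which.max(Q[s, ])                 # exploit: pick the greedy action
}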
Q-Learning is an off-policy temporal difference method that uses an \epsilon-greedy behavior policy and learns a greedy target policy.
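A sketch of the Q-learning update for a single observed transition (s, a, r, s') is shown below; this is illustrative base R, not the solver's code.

q_learning_update <- function(Q, s, a, r, s_next, alpha, gamma) {
  # off-policy target: the greedy (maximal) value of the next state
  Q[s, a] <- Q[s, a] + alpha * (r + gamma * max(Q[s_next, ]) - Q[s, a])
  Q
}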
Sarsa is an on-policy method that follows and learns an \epsilon-greedy policy. The final \epsilon-greedy policy is converted into a greedy policy.
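The corresponding Sarsa update additionally uses the action a' actually chosen in the next state by the \epsilon-greedy policy; again, this is only an illustrative sketch.

sarsa_update <- function(Q, s, a, r, s_next, a_next, alpha, gamma) {
  # on-policy target: the value of the action that will actually be taken next
  Q[s, a] <- Q[s, a] + alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])
  Q
}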
Expected Sarsa: We implement an on-policy version that uses the expected value under the current policy for the update. It moves deterministically in the same direction as Sarsa moves in expectation. Because it uses the expectation, we can set the step size \alpha to large values and even 1.
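A sketch of the Expected Sarsa update under an \epsilon-greedy policy follows; the action probabilities are derived from Q and epsilon and the code is illustrative only.

expected_sarsa_update <- function(Q, s, a, r, s_next, alpha, gamma, epsilon) {
  n_actions <- ncol(Q)
  # probabilities of the epsilon-greedy policy in the next state
  p <- rep(epsilon / n_actions, n_actions)
  greedy <- which.max(Q[s_next, ])
  p[greedy] <- p[greedy] + (1 - epsilon)
  # target: expected value of the next state under the current policy
  Q[s, a] <- Q[s, a] + alpha * (r + gamma * sum(p * Q[s_next, ]) - Q[s, a])
  Q
}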
solve_MDP() returns an object of class POMDP, which is a list containing the model specification (model) and the solution (solution). The solution is a list with the following elements:
policy: a list representing the policy (one element per epoch); the list has only one element for converged solutions.
converged: did the algorithm converge (NA for finite-horizon problems)?
delta: final \delta (value iteration and infinite-horizon only).
iterations: number of iterations until convergence (infinite-horizon only).
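For example, assuming the Maze model used in the Examples below, the solution components can be inspected directly:

data(Maze)
maze_solved <- solve_MDP(Maze, method = "value_iteration")
maze_solved$solution$converged      # TRUE for a converged solution
maze_solved$solution$policy[[1]]    # single policy element for converged solutions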
Michael Hahsler
Russell, S., Norvig, P. (2021). Artificial Intelligence: A Modern Approach. Fourth edition. Prentice Hall.
Sutton, R. S., Barto, A. G. (2020). Reinforcement Learning: An Introduction. Second edition. The MIT Press.
Other solver: solve_POMDP(), solve_SARSOP()
Other MDP: MDP(), MDP2POMDP, MDP_policy_functions, accessors, actions(), add_policy(), gridworld, reachable_and_absorbing, regret(), simulate_MDP(), transition_graph(), value_function()
data(Maze)
Maze
# use value iteration
maze_solved <- solve_MDP(Maze, method = "value_iteration")
maze_solved
policy(maze_solved)
# plot the value function U
plot_value_function(maze_solved)
# Maze solutions can be visualized
gridworld_plot_policy(maze_solved)
# use modified policy iteration
maze_solved <- solve_MDP(Maze, method = "policy_iteration")
policy(maze_solved)
# finite horizon
maze_solved <- solve_MDP(Maze, method = "value_iteration", horizon = 3)
policy(maze_solved)
gridworld_plot_policy(maze_solved, epoch = 1)
gridworld_plot_policy(maze_solved, epoch = 2)
gridworld_plot_policy(maze_solved, epoch = 3)
# create a random policy where action n is very likely and approximate
# the value function. We change the discount factor to .9 for this.
Maze_discounted <- Maze
Maze_discounted$discount <- .9
pi <- random_MDP_policy(Maze_discounted,
  prob = c(n = .7, e = .1, s = .1, w = 0.1))
pi
# compare the utility function for the random policy with the function for the optimal
# policy found by the solver.
maze_solved <- solve_MDP(Maze)
MDP_policy_evaluation(pi, Maze, k_backup = 100)
MDP_policy_evaluation(policy(maze_solved), Maze, k_backup = 100)
# Note that the solver already calculates the utility function and returns it with the policy
policy(maze_solved)
# Learn a Policy using Q-Learning
maze_learned <- solve_MDP(Maze, method = "q_learning", N = 100)
maze_learned
maze_learned$solution
policy(maze_learned)
plot_value_function(maze_learned)
gridworld_plot_policy(maze_learned)