mdp_LP: Solves a discounted MDP using the linear programming algorithm

Description

Solves a discounted MDP with linear programming.

Usage

mdp_LP(P, R, discount)

Arguments

P

transition probability array. P is a 3-dimensional array [S,S,A]. Sparse matrices are not supported.

R

reward array. R can be a 3-dimensional array [S,S,A], a list of A elements each containing a (possibly sparse) matrix [S,S], or a 2-dimensional matrix [S,A], possibly sparse (see the sketch at the end of this section).

discount

discount factor. discount is a real number in the open interval (0, 1).
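
For illustration only, here are two of the reward layouts described above, with arbitrary values for S = 2 states and A = 2 actions (this snippet is an added sketch, not part of the package documentation):

# (a) [S,A] matrix: R[s,a] is the expected reward for taking action a in state s
R_sa <- matrix(c(5, 10, -1, 2), 2, 2, byrow=TRUE)
# (b) list of A matrices [S,S], possibly sparse: R[[a]][s,s'] is the reward for
#     a transition from s to s' under action a; each row is constant here, so the
#     expected reward matches (a) under any transition matrix
library(Matrix)
R_list <- list(Matrix(c(5, -1, 5, -1), 2, 2, sparse=TRUE),
               Matrix(c(10, 2, 10, 2), 2, 2, sparse=TRUE))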

Details

mdp_LP applies linear programming to solve a discounted MDP. Only non-sparse matrices are supported.
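
As a rough illustration of the approach (a sketch, not necessarily mdp_LP's exact implementation), the optimal value function can be obtained by solving the linear program: minimize the sum of V(s) subject to V(s) >= R(s,a) + discount * sum over s' of P[s,s',a] * V(s'), for every state s and action a. A minimal sketch using lpSolve, assuming R is given in [S,A] form (the function name mdp_lp_sketch is hypothetical):

library(lpSolve)
mdp_lp_sketch <- function(P, R, discount) {
  S <- dim(P)[1]; A <- dim(P)[3]
  # One constraint block (I - discount * P[,,a]) %*% V >= R[,a] per action a
  const_mat <- do.call(rbind, lapply(1:A, function(a) diag(S) - discount * P[,,a]))
  sol <- lp("min", rep(1, S), const_mat, rep(">=", S * A), as.vector(R))
  V <- sol$solution
  # Greedy policy: for each state, take the action maximizing
  # R[s,a] + discount * P[s,,a] %*% V
  Q <- sapply(1:A, function(a) R[, a] + discount * P[, , a] %*% V)
  list(V = V, policy = max.col(Q))
}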

Value

V

optimal value function. V is a vector of length S.

policy

optimal policy. policy is a vector of length S. Each element is an integer corresponding to an action which maximizes the value function.

cpu_time

CPU time used to run the program

Examples

# Only with a non-sparse matrix
library(MDPtoolbox)
P <- array(0, c(2,2,2))
P[,,1] <- matrix(c(0.5, 0.5, 0.8, 0.2), 2, 2, byrow=TRUE)
P[,,2] <- matrix(c(0, 1, 0.1, 0.9), 2, 2, byrow=TRUE)
R <- matrix(c(5, 10, -1, 2), 2, 2, byrow=TRUE)
mdp_LP(P, R, 0.9)

Example output

Loading required package: Matrix
Loading required package: linprog
Loading required package: lpSolve
$V
[1] 42.44186 36.04651

$policy
[1] 2 1

$time
Time difference of 0.1772285 secs
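
As an optional follow-up (an added sanity check, not part of the original example), the returned value function can be verified against the Bellman equation under the returned policy:

out <- mdp_LP(P, R, 0.9)
# Each out$V[s] should equal R[s, a] + 0.9 * sum(P[s, , a] * out$V),
# where a = out$policy[s]
sapply(1:2, function(s) {
  a <- out$policy[s]
  R[s, a] + 0.9 * sum(P[s, , a] * out$V)
})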
