mdp_bellman_operator: Applies the Bellman operator


Description

Applies the Bellman operator to a value function Vprev and returns a new value function and a Vprev-improving policy.

Usage

mdp_bellman_operator(P, PR, discount, Vprev)

Arguments

P

transition probability array. P can be a 3-dimensional array [S,S,A] or a list [[A]] of A elements, each containing a sparse matrix [S,S].

PR

reward array. PR can be a 2-dimensional array [S,A], possibly sparse.

discount

discount factor. discount is a real number belonging to ]0; 1].

Vprev

value function. Vprev is a vector of length S.

Details

mdp_bellman_operator applies the Bellman operator to the value function Vprev: for each state it takes the maximum over actions of PR + discount*P*Vprev. It returns the resulting value function V and a Vprev-improving (greedy) policy.
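
As an illustration only (not the package source), the update can be sketched in plain R for the dense [S,S,A] case; the variables below mirror the dense example in the Examples section.

# Illustrative sketch of the Bellman update, dense [S,S,A] case
P <- array(0, c(2,2,2))
P[,,1] <- matrix(c(0.5, 0.5, 0.8, 0.2), 2, 2, byrow=TRUE)
P[,,2] <- matrix(c(0, 1, 0.1, 0.9), 2, 2, byrow=TRUE)
PR <- matrix(c(5, 10, -1, 2), 2, 2, byrow=TRUE)
discount <- 0.9
Vprev <- c(0, 0)
# Q[s,a] = PR[s,a] + discount * sum_s' P[s,s',a] * Vprev[s']
Q <- sapply(1:dim(P)[3], function(a) PR[,a] + discount * P[,,a] %*% Vprev)
V <- apply(Q, 1, max)             # new value function
policy <- apply(Q, 1, which.max)  # Vprev-improving (greedy) policy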

Value

V

new value function. V is a vector of length S.

policy

policy is a vector of length S. Each element is an integer corresponding to an action.

Examples

# With a non-sparse matrix
P <- array(0, c(2,2,2))
P[,,1] <- matrix(c(0.5, 0.5, 0.8, 0.2), 2, 2, byrow=TRUE)
P[,,2] <- matrix(c(0, 1, 0.1, 0.9), 2, 2, byrow=TRUE)
R <- matrix(c(5, 10, -1, 2), 2, 2, byrow=TRUE)
mdp_bellman_operator(P, R, 0.9, c(0,0))

# With a sparse matrix
P <- list()
P[[1]] <- Matrix(c(0.5, 0.5, 0.8, 0.2), 2, 2, byrow=TRUE, sparse=TRUE)
P[[2]] <- Matrix(c(0, 1, 0.1, 0.9), 2, 2, byrow=TRUE, sparse=TRUE)
mdp_bellman_operator(P, R, 0.9, c(0,0))
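
# The call returns a list; assuming the components V and policy documented
# above, the result can be captured and inspected as follows (the variable
# name res is illustrative):
res <- mdp_bellman_operator(P, R, 0.9, c(0,0))
res$V       # updated value function, length S
res$policy  # greedy action index for each state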
