calcSteadyStatePr: Calculate the steady state transition probabilities for the...


View source: R/loadMDP.R

Description

Calculates the steady state probabilities, assuming that the policy in the MDP specifies an ergodic/irreducible time-homogeneous Markov chain.

Usage

calcSteadyStatePr(mdp)

Arguments

mdp

The MDP loaded using loadMDP.

Value

A vector of steady state probabilities for all the states.
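For intuition, the steady state distribution of an ergodic chain is the left eigenvector of the transition matrix associated with eigenvalue 1, normalised to sum to one. The sketch below illustrates this in base R for a hypothetical 2-state transition matrix; it is not the package's implementation, which operates on an MDP object loaded with loadMDP.

```r
# Minimal sketch (assumption: this mirrors the idea, not the MDP package code).
# Steady state probabilities via the left eigenvector of P for eigenvalue 1.
P <- matrix(c(0.9, 0.1,
              0.4, 0.6), nrow = 2, byrow = TRUE)  # hypothetical transition matrix
e  <- eigen(t(P))                                 # left eigenvectors of P
v  <- Re(e$vectors[, which.max(Re(e$values))])    # eigenvector for eigenvalue 1
pr <- v / sum(v)                                  # normalise to a probability vector
pr                                                # steady state probabilities
```

For this matrix the result is c(0.8, 0.2), which satisfies pr %*% P == pr up to rounding, the defining property of a steady state distribution.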

Author(s)

Lars Relund lars@relund.dk


MDP documentation built on May 2, 2019, 6:48 p.m.
