DTMC: Simulation of Discrete-Time/State Markov Chain


View source: R/DTMC.R

Description

This function simulates iterations through a discrete-time Markov chain. A Markov chain is a discrete-time Markov process with a state space that usually consists of positive integers. The advantage of a Markov process in a stochastic modeling context is that conditional dependencies over time are manageable because the probabilistic future of the process depends only on the present state, not on the past. Therefore, given an initial distribution and a transition matrix, we can simulate many periods into the future without any further information. Future transition probabilities can be computed by raising the transition matrix to higher and higher powers, but this method is not numerically tractable for large matrices. My method instead uses a uniform random variable to carry out a user-specified number of iterations of a Markov chain based on the transition probabilities and the initial distribution. A graphical output is also available in the form of a trace plot.
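The uniform-draw iteration described above can be sketched as follows. This is a hypothetical helper (simulate_dtmc is not part of the package), assuming tmat is square with rows summing to 1 and io is a probability vector over the states; it is a minimal illustration, not the package's internal code.

```r
# Simulate N steps of a discrete-time Markov chain via inverse-CDF sampling.
simulate_dtmc <- function(tmat, io, N) {
  n_states <- nrow(tmat)
  path <- integer(N)
  # Draw the starting state from the initial distribution io
  state <- sample.int(n_states, 1, prob = io)
  for (i in seq_len(N)) {
    # A single uniform draw selects the next state: find where u falls
    # among the cumulative probabilities of the current row of tmat
    u <- runif(1)
    state <- findInterval(u, cumsum(tmat[state, ])) + 1
    path[i] <- state
  }
  path
}
```

For example, simulate_dtmc(matrix(c(0.9, 0.1, 0.2, 0.8), 2, 2, byrow = TRUE), c(1, 0), 10) returns a length-10 vector of visited states.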

Usage

DTMC(tmat, io, N, trace)

Arguments

tmat

Transition matrix: each row must sum to 1, and the number of rows and columns must be equal (a square matrix).

io

Initial distribution: a single-column vector that sums to 1, with length equal to the number of rows of the transition matrix.

N

Number of iterations to simulate.

trace

Optional trace plot, specify as TRUE or FALSE.
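For illustration, inputs satisfying these requirements can be built by hand. The following is a hypothetical 5-state gambler's ruin chain (the gr and id datasets shipped with the package are analogous, but this sketch does not depend on them):

```r
# Gambler's ruin on states 1..5: states 1 and 5 are absorbing, and each
# interior state moves down with probability 1 - p and up with probability p.
p <- 0.5
tmat <- matrix(0, 5, 5)
tmat[1, 1] <- 1
tmat[5, 5] <- 1
for (i in 2:4) {
  tmat[i, i - 1] <- 1 - p
  tmat[i, i + 1] <- p
}
# Initial distribution: start in the middle state with certainty
io <- c(0, 0, 1, 0, 0)
# Check the argument requirements: rows of tmat sum to 1, io sums to 1,
# and io has one entry per state
stopifnot(all(rowSums(tmat) == 1), sum(io) == 1, length(io) == nrow(tmat))
```

With DTMCPack loaded, DTMC(tmat, io, 10, trace = FALSE) would then run 10 iterations of this chain.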

Value

Trace

Trace plot of the iterations through the states (if trace = TRUE)

State

An N x nrow(tmat) matrix recording, for each iteration, the state occupied by the Markov chain

Author(s)

Will Nicholson

References

"Adventures in Stochastic Processes" by Sidney Resnick

See Also

MultDTMC

Examples

data(gr)
data(id)
DTMC(gr,id,10,trace=TRUE) 
# 10 iterations through "Gambler's ruin"

DTMCPack documentation built on May 2, 2019, 2:06 a.m.