Description Usage Arguments Value Author(s) References See Also Examples

This function simulates iterations through a discrete-time Markov chain. A Markov chain is a discrete-time Markov process with a state space that usually consists of positive integers. The advantage of a Markov process in a stochastic modeling context is that conditional dependencies over time are manageable: the probabilistic future of the process depends only on the present state, not on the past. Therefore, if we specify an initial distribution as well as a transition matrix, we can simulate arbitrarily many periods into the future without any further information. Future transition probabilities can be computed by raising the transition matrix to higher and higher powers, but this method is not numerically tractable for large matrices. My method instead uses a uniform random variable to iterate a user-specified number of steps of the Markov chain, based on the transition probabilities and the initial distribution. A graphical output is also available in the form of a trace plot.
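The uniform-draw iteration described above can be sketched as follows. This is a minimal illustration of the technique in Python (the package itself is written in R); the function name `simulate_mc` and the example matrix are hypothetical, not part of this package:

```python
import numpy as np

def simulate_mc(tmat, io, N, seed=None):
    """Simulate N steps of a discrete-time Markov chain.

    tmat: square transition matrix whose rows each sum to 1.
    io:   initial distribution, sums to 1, length == nrow(tmat).
    N:    number of steps to simulate.
    Returns the list of visited states (length N + 1).
    """
    rng = np.random.default_rng(seed)
    tmat = np.asarray(tmat, dtype=float)
    io = np.asarray(io, dtype=float)
    n_states = tmat.shape[0]

    # Inverse-CDF step: a single uniform draw is compared against the
    # cumulative sums of a probability vector to pick a state index.
    def draw(probs):
        u = rng.uniform()
        # min() guards against floating-point round-off in the cumsum.
        return min(int(np.searchsorted(np.cumsum(probs), u)), n_states - 1)

    state = draw(io)          # sample the initial state from io
    path = [state]
    for _ in range(N):
        state = draw(tmat[state])  # next state from the current row
        path.append(state)
    return path

tmat = [[0.9, 0.1],
        [0.5, 0.5]]
path = simulate_mc(tmat, [1.0, 0.0], N=20, seed=42)
```

Because each transition consumes only one uniform draw and one row of the matrix, the cost per step is linear in the number of states, avoiding the repeated matrix powers mentioned above.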


`tmat`
Transition matrix; rows must sum to 1, and the number of rows and columns must be equal.

`io`
Initial distribution: a single column that must sum to 1, with length equal to the number of rows of the transition matrix.

`N`
Number of simulations.

`trace`
Optional trace plot; specify as TRUE or FALSE.

`Trace`
Trace plot of the iterations through states (if selected).

`State`
An N x nrow(tmat) matrix detailing the iterations through each state of the Markov chain.

Will Nicholson

Resnick, Sidney. "Adventures in Stochastic Processes."

