Simulate a First-Order Markov Chain


Description

Simulates a first-order Markov chain.

Usage

simulateMarkovChain(n, trans.mat, init.dist=NULL, states=colnames(trans.mat))

Arguments

n

The length of the sample path to simulate.

trans.mat

The transition matrix of the Markov chain to simulate.

init.dist

The initial distribution to use for starting the simulation. If it is not specified, the stationary distribution of the Markov chain will be computed from trans.mat and used to start the simulation in the steady state.

states

This argument can be used to override the labels on the transition matrix (if any) and to name the states in the simulated sample path.

Details

trans.mat must be a stochastic matrix. It must either have both row and column names, in which case they must agree, or no row and column names at all. The row/column names will be used to label the states visited by the Markov chain in the simulated sample path. If states is specified, it will be used to label the states of the Markov chain instead of the row/column names of trans.mat, in which case the length of states must agree with the dimension of trans.mat. If trans.mat has no row/column names and states is not specified, then the states of the Markov chain will be labelled 1,…,n, where n is the dimension of trans.mat.
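
For illustration, the stationary distribution used when init.dist is omitted can be obtained as the normalised left eigenvector of trans.mat associated with the eigenvalue 1. The following is a minimal base-R sketch of this calculation (assuming an irreducible chain), not necessarily the computation performed internally by the package:

P <- matrix(c(.8, .2, .3, .7), ncol=2, byrow=TRUE)  # a 2x2 stochastic matrix
ev <- eigen(t(P))                 # right eigenvectors of t(P) = left eigenvectors of P
statdist <- Re(ev$vectors[, 1])   # eigenvector associated with the eigenvalue 1
statdist <- statdist / sum(statdist)  # normalise so the entries sum to 1
statdist                          # stationary distribution, here c(0.6, 0.4)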

Value

A vector of length n containing a realisation of the specified Markov chain.

Author(s)

Andrew Hart and Servet Martínez

See Also

estimateMarkovChain, rstochmat, rcspr2mat

Examples

simulateMarkovChain(50, matrix(c(.8, .2, .2, .8), ncol=2))
simulateMarkovChain(50, rstochmat(3), states=c("yes", "no", "maybe"))
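
## An additional illustrative sketch: simulate a labelled chain from an
## explicit initial distribution rather than the stationary one (this assumes
## init.dist accepts any probability vector over the states).
P <- rstochmat(3)
rownames(P) <- colnames(P) <- c("yes", "no", "maybe")
simulateMarkovChain(50, P, init.dist=c(1, 0, 0))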
