MCMCregressChange (R Documentation)
This function generates a sample from the posterior distribution of a linear Gaussian model with multiple changepoints. The function uses the Markov chain Monte Carlo method of Chib (1998). The user supplies data and priors, and a sample from the posterior distribution is returned as an mcmc object, which can be subsequently analyzed with functions provided in the coda package.
MCMCregressChange(
formula,
data = parent.frame(),
m = 1,
b0 = 0,
B0 = 0,
c0 = 0.001,
d0 = 0.001,
sigma.mu = NA,
sigma.var = NA,
a = NULL,
b = NULL,
mcmc = 1000,
burnin = 1000,
thin = 1,
verbose = 0,
seed = NA,
beta.start = NA,
P.start = NA,
random.perturb = FALSE,
WAIC = FALSE,
marginal.likelihood = c("none", "Chib95"),
...
)
formula: Model formula.

data: Data frame.

m: The number of changepoints.
b0: The prior mean of beta. This can either be a scalar or a column vector with dimension equal to the number of regression coefficients. If a scalar is given, that value serves as the prior mean for all of the coefficients.

B0: The prior precision of beta. This can either be a scalar or a square matrix with dimensions equal to the number of regression coefficients. If a scalar is given, that value times an identity matrix serves as the prior precision.
c0: c0/2 is the shape parameter for the inverse Gamma prior on sigma^2_i (the error variance in regime i).

d0: d0/2 is the scale parameter for the inverse Gamma prior on sigma^2_i.
sigma.mu: The mean of the inverse Gamma prior on sigma^2_i. sigma.mu and sigma.var allow the prior to be specified by its mean and variance instead of c0 and d0.

sigma.var: The variance of the inverse Gamma prior on sigma^2_i.
a: Shape1 parameter of the Beta prior on the transition probabilities. By default, a and b are chosen so that the prior expected duration of each regime is the sample period divided by the number of states.

b: Shape2 parameter of the Beta prior on the transition probabilities (see a).
mcmc: The number of MCMC iterations after burn-in.

burnin: The number of burn-in iterations for the sampler.

thin: The thinning interval used in the simulation. The number of MCMC iterations must be divisible by this value.
verbose: A switch which determines whether or not the progress of the sampler is printed to the screen. If verbose is greater than 0, output is printed to the screen every verbose-th iteration.
seed: The seed for the random number generator. If NA, the Mersenne Twister generator is used with default seed 12345; if an integer is passed it is used to seed the Mersenne Twister. The user can also pass a list of length two to use the L'Ecuyer random number generator, which is suitable for parallel computation. The first element of the list is the L'Ecuyer seed, which is a vector of length six or NA (if NA, a default seed of rep(12345, 6) is used). The second element of the list is a positive substream number.
beta.start: The starting values for the beta vector. This can either be a scalar or a column vector with dimension equal to the number of regression coefficients.
P.start: The starting values for the transition matrix. A user should provide a square matrix with dimension equal to the number of states. By default, draws from the Beta(0.9, 0.1) distribution are used to construct a proper transition matrix for each row except the last row.
random.perturb: If TRUE, randomly sample hidden states whenever regularly sampled hidden states have at least one single observation state (SOS). SOS is a sign of overfitting in non-ergodic hidden Markov models.

WAIC: Compute the Widely Applicable Information Criterion (Watanabe 2010).
marginal.likelihood: How should the marginal likelihood be calculated? Options are: "none", in which case the marginal likelihood will not be calculated, and "Chib95", in which case the method of Chib (1995) is used.
...: Further arguments to be passed.
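As noted for the seed argument, the sampler can be seeded either with a single integer (Mersenne Twister) or with a two-element list (L'Ecuyer generator for parallel runs). A minimal sketch of both forms, assuming y and x1 have already been created as in the Examples below; the specific seed values are illustrative only:

## Mersenne Twister with an integer seed
out.mt <- MCMCregressChange(y ~ x1, m = 1, seed = 12345)

## L'Ecuyer generator: list(seed vector of length six, substream number),
## useful when several chains are run in parallel
out.lec <- MCMCregressChange(y ~ x1, m = 1, seed = list(rep(12345, 6), 1))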
MCMCregressChange simulates from the posterior distribution of the linear regression model with multiple changepoints. The model takes the following form:

y_t = x_t' \beta_i + I(s_t = i)\varepsilon_t, \;\; i = 1, \ldots, k

where k is the number of states and I(s_t = i) is an indicator function that equals 1 when the state at time t is i and 0 otherwise.

The errors are assumed to be Gaussian in each regime:

I(s_t = i)\varepsilon_t \sim \mathcal{N}(0, \sigma^2_i)

We assume standard, semi-conjugate priors:

\beta_i \sim \mathcal{N}(b_0, B_0^{-1}), \;\; i = 1, \ldots, k

and:

\sigma^{-2}_i \sim \mathcal{G}amma(c_0/2, d_0/2), \;\; i = 1, \ldots, k

where \beta_i and \sigma^{-2}_i are assumed a priori independent.
The simulation proper is done in compiled C++ code to maximize efficiency.
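Because the error-variance prior can be given either as (c0, d0) or as a mean and variance via sigma.mu and sigma.var, the following sketch shows one way to moment-match an inverse Gamma prior, reading the Gamma(c0/2, d0/2) prior on sigma^{-2}_i above in shape/rate form. The helper function invgamma_hyper is ours; it illustrates the algebra implied by that prior rather than the package's exact internal computation.

## sigma^2 ~ InvGamma(shape = c0/2, scale = d0/2) has
##   mean = (d0/2)/(c0/2 - 1)      (requires c0 > 2)
##   var  = mean^2/(c0/2 - 2)      (requires c0 > 4)
## Solving for (c0, d0) from a target mean and variance:
invgamma_hyper <- function(sigma.mu, sigma.var) {
  shape <- sigma.mu^2 / sigma.var + 2   # alpha = c0/2
  scale <- sigma.mu * (shape - 1)       # beta  = d0/2
  c(c0 = 2 * shape, d0 = 2 * scale)
}
invgamma_hyper(sigma.mu = 1, sigma.var = 2)   # returns c0 = 5, d0 = 3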
An mcmc object that contains the posterior sample. This object can be summarized by functions provided by the coda package. The object also carries an attribute prob.state, a storage matrix containing the probability of state i for each period, as well as the log-likelihood of the model (loglike) and the log-marginal likelihood of the model (logmarglike).
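A brief sketch of pulling these quantities out of the returned object; the call mirrors the Examples below (y and x1 are assumed to exist) and the mcmc/burnin settings are illustrative:

out <- MCMCregressChange(y ~ x1, m = 1, b0 = 0, B0 = 0.1,
                         sigma.mu = sd(y), sigma.var = var(y),
                         mcmc = 1000, burnin = 1000,
                         marginal.likelihood = "Chib95")
summary(out)                   # coda summary of the posterior draws
head(attr(out, "prob.state"))  # per-period state probabilities
attr(out, "loglike")           # log-likelihood of the model
attr(out, "logmarglike")       # log-marginal likelihood of the model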
Jong Hee Park. 2012. "Unified Method for Dynamic and Cross-Sectional Heterogeneity: Introducing Hidden Markov Panel Models." American Journal of Political Science 56: 1040-1054. <doi: 10.1111/j.1540-5907.2012.00590.x>
Sumio Watanabe. 2010. "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory." Journal of Machine Learning Research 11: 3571-3594.
Siddhartha Chib. 1995. "Marginal Likelihood from the Gibbs Output." Journal of the American Statistical Association 90: 1313-1321. <doi: 10.1080/01621459.1995.10476635>
Siddhartha Chib. 1998. "Estimation and Comparison of Multiple Change-Point Models." Journal of Econometrics 86: 221-241. <doi: 10.1016/S0304-4076(97)00115-2>
Andrew D. Martin, Kevin M. Quinn, and Jong Hee Park. 2011. "MCMCpack: Markov Chain Monte Carlo in R." Journal of Statistical Software 42(9): 1-21. <doi: 10.18637/jss.v042.i09>
plotState, plotChangepoint
## Not run:
set.seed(1119)
n <- 100
x1 <- runif(n)
true.beta1 <- c(2, -2)
true.beta2 <- c(0, 2)
true.Sigma <- c(1, 2)
true.s <- rep(1:2, each=n/2)
mu1 <- cbind(1, x1[true.s==1])%*%true.beta1
mu2 <- cbind(1, x1[true.s==2])%*%true.beta2
y <- as.ts(c(rnorm(n/2, mu1, sd=sqrt(true.Sigma[1])), rnorm(n/2, mu2, sd=sqrt(true.Sigma[2]))))
formula <- y ~ x1
ols1 <- lm(y[true.s==1] ~ x1[true.s==1])
ols2 <- lm(y[true.s==2] ~ x1[true.s==2])
## prior
b0 <- 0
B0 <- 0.1
sigma.mu <- sd(y)
sigma.var <- var(y)
## models
model0 <- MCMCregressChange(formula, m=0, b0=b0, B0=B0, mcmc=100, burnin=100,
sigma.mu=sigma.mu, sigma.var=sigma.var, marginal.likelihood="Chib95")
model1 <- MCMCregressChange(formula, m=1, b0=b0, B0=B0, mcmc=100, burnin=100,
sigma.mu=sigma.mu, sigma.var=sigma.var, marginal.likelihood="Chib95")
model2 <- MCMCregressChange(formula, m=2, b0=b0, B0=B0, mcmc=100, burnin=100,
sigma.mu=sigma.mu, sigma.var=sigma.var, marginal.likelihood="Chib95")
model3 <- MCMCregressChange(formula, m=3, b0=b0, B0=B0, mcmc=100, burnin=100,
sigma.mu=sigma.mu, sigma.var=sigma.var, marginal.likelihood="Chib95")
model4 <- MCMCregressChange(formula, m=4, b0=b0, B0=B0, mcmc=100, burnin=100,
sigma.mu=sigma.mu, sigma.var=sigma.var, marginal.likelihood="Chib95")
model5 <- MCMCregressChange(formula, m=5, b0=b0, B0=B0, mcmc=100, burnin=100,
sigma.mu=sigma.mu, sigma.var=sigma.var, marginal.likelihood="Chib95")
print(BayesFactor(model0, model1, model2, model3, model4, model5))
plotState(model1)
plotChangepoint(model1)
## End(Not run)
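As an alternative to plotState, the regime probabilities stored in the prob.state attribute can be plotted directly; a minimal sketch with base graphics, assuming model1 from the (not run) example above has been fit:

## columns of prob.state correspond to states 1, ..., m + 1
probs <- attr(model1, "prob.state")
matplot(probs, type = "l", lty = 1,
        xlab = "time", ylab = "Pr(state)",
        main = "Posterior regime probabilities")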