post_correct: Run Post-correction for Approximate MCMC using psi-APF

View source: R/post_correction.R

post_correct    R Documentation

Run Post-correction for Approximate MCMC using psi-APF

Description

Function post_correct updates previously obtained approximate MCMC output with post-correction weights, leading to an asymptotically exact weighted posterior. It returns updated MCMC output where the components weights, posterior, alpha, alphahat, and Vt are updated (depending on the original output type).

Usage

post_correct(
  model,
  mcmc_output,
  particles,
  threads = 1L,
  is_type = "is2",
  seed = sample(.Machine$integer.max, size = 1)
)

Arguments

model

Model of class nongaussian or ssm_nlg.

mcmc_output

An output from run_mcmc, used to compute the MAP estimate of theta. While the intended use assumes this is output from approximate MCMC, this is not checked, i.e., it is also possible to input previous (asymptotically) exact output.

particles

Number of particles for psi-APF (positive integer). Suitable values depend on the model and the data, but often a relatively small value, say less than 50, is enough. See also suggest_N; a sketch of one possible workflow is given after this argument list.

threads

Number of parallel threads (positive integer, default is 1).

is_type

Type of IS-correction. Possible choices are "is3" for simple importance sampling (weight is computed for each MCMC iteration independently), "is2" for jump chain importance sampling type weighting (default), or "is1" for importance sampling type weighting where the number of particles used for weight computations is proportional to the length of the jump chain block.

seed

Seed for the C++ RNG (positive integer).
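
One possible workflow combines suggest_N and post_correct. The sketch below is illustrative only: it assumes that suggest_N accepts the approximate MCMC output for its theta argument (to compute the MAP estimate) and returns the suggested particle count as component N; see the suggest_N documentation for the exact interface.

## Not run: sketch; out_approx is an existing approximate run_mcmc output
est <- suggest_N(model, theta = out_approx)
out <- post_correct(model, out_approx, particles = est$N, is_type = "is1")
## End(Not run)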

Value

The original object of class mcmc_output with updated weights, log-posterior values and state samples or summaries (depending on the mcmc_output$mcmc_type).
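
Expectations with respect to the corrected posterior can also be computed by hand from the updated output. The sketch below is an assumption-laden illustration: it presumes that theta is stored as a matrix of jump chain values with matching counts and weights components (out_is2 as created in the Examples below); the summary and as.data.frame methods handle this weighting automatically.

## Not run: manual weighted posterior means of theta (sketch only)
w <- out_is2$counts * out_is2$weights
colSums(w * out_is2$theta) / sum(w)
## End(Not run)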

References

Doucet A, Pitt MK, Deligiannidis G, Kohn R (2015). Efficient implementation of Markov chain Monte Carlo when using an unbiased likelihood estimator. Biometrika, 102(2), 295-313. https://doi.org/10.1093/biomet/asu075

Vihola M, Helske J, Franks J (2020). Importance sampling type estimators based on approximate marginal Markov chain Monte Carlo. Scandinavian Journal of Statistics, 1-38. https://doi.org/10.1111/sjos.12492

Examples


# Simulate data from a binomial model with a latent AR(1) process
set.seed(1)
n <- 300
x1 <- sin((2 * pi / 12) * 1:n)
x2 <- cos((2 * pi / 12) * 1:n)
alpha <- numeric(n)
alpha[1] <- 0
rho <- 0.7
sigma <- 2
mu <- 1
for(i in 2:n) {
  alpha[i] <- rnorm(1, mu * (1 - rho) + rho * alpha[i-1], sigma)
}
u <- rpois(n, 50)
y <- rbinom(n, size = u, plogis(0.5 * x1 + x2 + alpha))

ts.plot(y / u)

model <- ar1_ng(y, distribution = "binomial", 
  rho = uniform(0.5, -1, 1), sigma = gamma_prior(1, 2, 0.001),
  mu = normal(0, 0, 10),
  xreg = cbind(x1,x2), beta = normal(c(0, 0), 0, 5),
  u = u)

# Approximate MCMC based on the (global) Gaussian approximation of the model
out_approx <- run_mcmc(model, mcmc_type = "approx", 
  local_approx = FALSE, iter = 50000)

# IS-correct the approximate output using the psi-APF with 30 particles
out_is2 <- post_correct(model, out_approx, particles = 30,
  threads = 2)
out_is2$time

summary(out_approx, return_se = TRUE)
summary(out_is2, return_se = TRUE)
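
# A quick diagnostic of the post-correction (a sketch; assumes the counts and
# weights components described in the Description/Value sections):
# effective sample size of the importance sampling weights
w <- out_is2$counts * out_is2$weights
sum(w)^2 / sum(w^2)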

# latent state
library("dplyr")
library("ggplot2")
state_approx <- as.data.frame(out_approx, variable = "states") |> 
  group_by(time) |>
  summarise(mean = mean(value))
  
state_exact <- as.data.frame(out_is2, variable = "states") |> 
  group_by(time) |>
  summarise(mean = weighted.mean(value, weight))

dplyr::bind_rows(approx = state_approx, 
  exact = state_exact, .id = "method") |>
  filter(time > 200) |>
ggplot(aes(time, mean, colour = method)) + 
  geom_line() + 
  theme_bw()

# posterior means
p_approx <- predict(out_approx, model, type = "mean", 
  nsim = 1000, future = FALSE) |> 
  group_by(time) |>
  summarise(mean = mean(value))
p_exact <- predict(out_is2, model, type = "mean", 
  nsim = 1000, future = FALSE) |> 
  group_by(time) |>
  summarise(mean = mean(value))

dplyr::bind_rows(approx = p_approx, 
  exact = p_exact, .id = "method") |>
  filter(time > 200) |>
ggplot(aes(time, mean, colour = method)) + 
  geom_line() + 
  theme_bw() 

