View source: R/mcmc_samplers.R
btf_reg    R Documentation
Run the MCMC for Bayesian trend filtering regression with a penalty on first (D=1) or second (D=2) differences of each dynamic regression coefficient. The penalty is determined by the prior on the evolution errors, which include:
the dynamic horseshoe prior ('DHS');
the static horseshoe prior ('HS');
the Bayesian lasso ('BL');
the normal stochastic volatility model ('SV');
the normal-inverse-gamma prior ('NIG').
In each case, the evolution error is a scale mixture of Gaussians. Sampling is accomplished with a (parameter-expanded) Gibbs sampler, mostly relying on a dynamic linear model representation.
btf_reg(
y,
X = NULL,
evol_error = "DHS",
D = 1,
useObsSV = FALSE,
nsave = 1000,
nburn = 1000,
nskip = 4,
mcmc_params = list("mu", "yhat", "beta", "evol_sigma_t2", "obs_sigma_t2", "dhs_phi",
"dhs_mean"),
use_backfitting = FALSE,
computeDIC = TRUE,
verbose = TRUE
)
y | the vector of time series observations
X | the matrix of predictors (one column per dynamic regression coefficient)
evol_error | the evolution error distribution; must be one of 'DHS' (dynamic horseshoe prior), 'HS' (horseshoe prior), 'BL' (Bayesian lasso), 'SV' (normal stochastic volatility), or 'NIG' (normal-inverse-gamma prior)
D | degree of differencing (D = 1 or D = 2)
useObsSV | logical; if TRUE, include a (normal) stochastic volatility model for the observation error variance
nsave | number of MCMC iterations to record
nburn | number of MCMC iterations to discard (burn-in)
nskip | number of MCMC iterations to skip between saving iterations, i.e., save every (nskip + 1)th draw
mcmc_params | named list of parameters for which we store the MCMC output; must be one or more of 'mu', 'yhat', 'beta', 'evol_sigma_t2', 'obs_sigma_t2', 'dhs_phi', 'dhs_mean' (the default saves all of these)
use_backfitting | logical; if TRUE, use backfitting to sample the predictors j = 1,...,p (faster, but usually less MCMC-efficient)
computeDIC | logical; if TRUE, compute the deviance information criterion
verbose | logical; should R report extra information on progress?
A named list of the nsave MCMC samples for the parameters named in mcmc_params.

The data y may contain NAs, which will be treated with a simple imputation scheme via an additional Gibbs sampling step. In general, rescaling y to have unit standard deviation is recommended to avoid numerical issues.
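The rescaling advice above can be applied before fitting; a minimal sketch (variable names are illustrative, not part of the package):

```r
# Rescale y to unit standard deviation before fitting, as recommended above.
# Keep the scale factor so results can be mapped back to the original units.
sdy <- sd(y, na.rm = TRUE)     # na.rm = TRUE since y may contain NAs
y_scaled <- y / sdy
# out <- btf_reg(y_scaled, X)             # fit on the rescaled data
# mu_original <- colMeans(out$mu) * sdy   # undo the scaling for reporting
```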
# Example 1: all signals
simdata = simRegression(T = 200, p = 5, p_0 = 0)
y = simdata$y; X = simdata$X
out = btf_reg(y, X)
for(j in 1:ncol(X))
plot_fitted(rep(0, length(y)),
mu = colMeans(out$beta[,,j]),
postY = out$beta[,,j],
y_true = simdata$beta_true[,j])
## Not run:
# Example 2: some noise, longer series
simdata = simRegression(T = 500, p = 10, p_0 = 5)
y = simdata$y; X = simdata$X
out = btf_reg(y, X, nsave = 1000, nskip = 0) # Short MCMC run for a quick example
for(j in 1:ncol(X))
plot_fitted(rep(0, length(y)),
mu = colMeans(out$beta[,,j]),
postY = out$beta[,,j],
y_true = simdata$beta_true[,j])
## End(Not run)