This function draws samples from an Empirical Likelihood Bayesian posterior distribution of the parameters using Hamiltonian Monte Carlo.
Usage:

    ELHMC(initial, data, fun, dfun, prior, dprior, n.samples, lf.steps,
          epsilon, p.variance, tol, detailed, FUN, DFUN)
Arguments:

initial
    a vector containing the initial values of the parameters

data
    a matrix containing the data

fun
    the estimating function g. It takes in a parameter vector and a
    single data point, and returns the value of g at that point

dfun
    a function that calculates the gradient of the estimating function
    g. It takes in a parameter vector and a single data point

prior
    a function with one argument, the parameter vector, that returns
    the prior density of the parameters

dprior
    a function with one argument, the parameter vector, that returns
    the gradient of the logarithm of the prior density

n.samples
    number of samples to draw

lf.steps
    number of leap frog steps in each Hamiltonian Monte Carlo update

epsilon
    the leap frog step size(s). This has to be a single numeric value
    or a vector of the same length as initial

p.variance
    the covariance matrix of a multivariate normal distribution used
    to generate the initial values of the momentum p in Hamiltonian
    Monte Carlo. This can also be a single numeric value or a vector.
    See Details.

tol
    EL tolerance

detailed
    If this is set to TRUE, the returned list contains extra elements.
    See Value.

FUN
    the same as fun, except that it takes in the entire data matrix
    instead of a single data point

DFUN
    the same as dfun, except that it takes in the entire data matrix
    instead of a single data point
Details:

Suppose there are data x = (x_1, x_2, ..., x_n) where x_i takes values in R^p and follows a probability distribution F. Also, F comes from a family of distributions that depends on a parameter θ = (θ_1, ..., θ_d), and there is a smooth function g(x_i, θ) = (g_1(x_i, θ), ..., g_q(x_i, θ))^T that satisfies E_F[g(x_i, θ)] = 0 for i = 1, ..., n.

ELHMC draws samples from an Empirical Likelihood Bayesian posterior distribution of the parameter θ, given the data x as data, the estimating function g as fun and the gradient of g as dfun, or G(X) = (g(x_1), g(x_2), ..., g(x_n))^T as FUN and the gradient of G as DFUN.
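For intuition, the moment condition E_F[g(x_i, θ)] = 0 can be checked empirically with the simplest estimating function, the one for the mean (the same g used in the Examples section): its sample average is zero exactly when θ equals the sample mean. This is a minimal sketch, not part of the package API:

```r
## For the mean, g(x_i, theta) = x_i - theta, so the empirical average
## of g over the data is zero exactly when theta is the sample mean.
g <- function(theta, x) x - theta

x <- rbind(c(1, 1), c(1, -1), c(-1, -1), c(-1, 1))
theta.hat <- colMeans(x)  # sample mean, (0, 0) for this data set

## Sample average of g(x_i, theta.hat); each column of apply() holds
## g evaluated at one data point, so transpose before averaging.
g.bar <- colMeans(t(apply(x, 1, function(xi) g(theta.hat, xi))))
g.bar  # numerically zero at theta = theta.hat
```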
Value:

The function returns a list with the following elements:

    a matrix containing the parameter samples
    the acceptance rate
    the matched call

If detailed = TRUE, the list contains these extra elements:

    a matrix containing the proposed values at each Hamiltonian Monte
    Carlo update
    a logical vector indicating whether each proposed value is accepted
    a list with 2 elements
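The parameter samples come back as a matrix (accessed as $samples in the Examples section, rows are draws and columns are parameters), so standard posterior summaries follow directly. A minimal sketch, using a stand-in matrix in place of an actual ELHMC run:

```r
## `draws` stands in for the $samples element of an ELHMC result;
## rows are posterior draws, columns are parameters.
set.seed(1)
draws <- matrix(rnorm(100, mean = c(0.5, -0.5)), ncol = 2, byrow = TRUE)

post.mean <- colMeans(draws)  # posterior means, one per parameter
post.ci <- apply(draws, 2, quantile, probs = c(0.025, 0.975))  # 95% intervals
post.mean
post.ci
```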
References:

Chaudhuri, S., Mondal, D. and Yin, T. (2015) Hamiltonian Monte Carlo sampling in Bayesian empirical likelihood computation. Journal of the Royal Statistical Society: Series B.

Neal, R. (2011) MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo (eds S. Brooks, A. Gelman, G. L. Jones and X.-L. Meng), pp. 113-162. New York: Taylor and Francis.
Examples:

## Not run:
## Suppose there are four data points (1, 1), (1, -1), (-1, -1), (-1, 1)
x = rbind(c(1, 1), c(1, -1), c(-1, -1), c(-1, 1))
## If the parameter of interest is the mean, the smoothing function and
## its gradient would be
f <- function(params, x) {
x - params
}
df <- function(params, x) {
rbind(c(-1, 0), c(0, -1))
}
## Draw 50 samples from the Empirical Likelihood Bayesian posterior distribution
## of the mean, using initial values (0.96, 0.97) and standard normal distributions
## as priors:
normal_prior <- function(x) {
exp(-0.5 * x[1] ^ 2) / sqrt(2 * pi) * exp(-0.5 * x[2] ^ 2) / sqrt(2 * pi)
}
normal_prior_log_gradient <- function(x) {
-x
}
set.seed(1234)
mean.samples <- ELHMC(initial = c(0.96, 0.97), data = x, fun = f, dfun = df,
n.samples = 50, prior = normal_prior,
dprior = normal_prior_log_gradient)
plot(mean.samples$samples, type = "l", xlab = "", ylab = "")
## End(Not run)
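The same example can also be written in the matrix form mentioned in Details, building G(X) = (g(x_1), ..., g(x_n))^T over the whole data set at once. The exact return shapes ELHMC expects from FUN and DFUN are not spelled out on this page, so the hypothetical G.mat below only illustrates the construction of G(X), not a definitive FUN implementation:

```r
## Hypothetical matrix-form estimating function for the mean example:
## row i of the result is g(x_i, params) = x_i - params.
G.mat <- function(params, x) {
  sweep(x, 2, params)  # subtract params from every row of x
}

x <- rbind(c(1, 1), c(1, -1), c(-1, -1), c(-1, 1))
G <- G.mat(c(0, 0), x)
## G has one row per observation and one column per component of g
dim(G)
```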