This function approximates the logarithm of the marginal likelihood (LML), where the marginal likelihood is also called the integrated likelihood or the prior predictive distribution of y in Bayesian inference. The marginal likelihood is
p(y) = integral p(y | Theta)p(Theta) d Theta
The prior predictive distribution indicates what y
should look like, given the model, before y has been
observed. The presence of the marginal likelihood of
y normalizes the joint posterior distribution,
p(Theta|y), ensuring it is a proper
distribution and integrates to one. The
marginal likelihood is the denominator of Bayes' theorem, and is often
omitted, serving as a constant of proportionality. Several methods of
approximation are available.
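As an illustrative sketch (not part of this package), the integral above can be approximated by simple Monte Carlo: draw parameter values from the prior and average the likelihood. The model below, with one observation y ~ N(theta, 1) and prior theta ~ N(0, 1), is chosen because the exact answer is known, p(y) = N(y | 0, 2):

```r
# Monte Carlo approximation of p(y) = integral p(y|theta) p(theta) dtheta
# for y ~ N(theta, 1), theta ~ N(0, 1). Exact answer: p(y) = N(y | 0, 2).
set.seed(1)
y <- 1.5
theta <- rnorm(1e5, 0, 1)                   # draws from the prior
py <- mean(dnorm(y, mean = theta, sd = 1))  # MC estimate of p(y)
exact <- dnorm(y, 0, sqrt(2))               # exact marginal likelihood
```

With 1e5 prior draws, the Monte Carlo estimate agrees with the exact value to about two decimal places; this brute-force approach is only feasible in very low dimensions, which is why the methods below exist.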
Model: This is the model specification for the model that was
updated with one of the package's model-fitting functions.
Data: This is the list of data passed to the model
specification. This argument is used only with the LME method.
Modes: This is a vector of the posterior modes (or medians, in
the case of MCMC). This argument is used only with the LME method.
theta: This is a matrix of posterior samples (parameters only),
and is specified only with the NSIS method.
LL: This is a vector of MCMC samples of the log-likelihood, and
is specified only with the HME and NSIS methods.
Covar: This argument accepts the covariance matrix of the
posterior modes, and is used only with the LME method.
method: The method may be "GD", "HME", "LME", or "NSIS".
Generally, a user of a model-fitting function such as
VariationalBayes does not need to call the
LML function, because these methods already include it. However,
LML may be called by the user, should the user
desire to estimate the logarithm of the marginal likelihood with a
different method, or with non-stationary chains. The model-fitting
functions call
LML only when all parameters are stationary, and
only with non-adaptive algorithms.
The GD method, where GD stands for Gelfand-Dey (1994), is a
modification of the harmonic mean estimator (HME) that results in a
more stable estimator of the logarithm of the marginal
likelihood. This method is unbiased, simulation-consistent, and
usually satisfies the Gaussian central limit theorem.
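A hedged sketch of the Gelfand-Dey idea (not the package's implementation): replace the prior in the harmonic mean identity with a thin-tailed tuning density f fitted to the posterior samples, so that 1/p(y) ≈ (1/N) Σ f(θ_i)/[p(y|θ_i) p(θ_i)]. The conjugate normal model below is an assumption chosen so the exact answer is known:

```r
# Gelfand-Dey estimator for y ~ N(theta, 1), theta ~ N(0, 1), one observation.
# Exact log marginal likelihood: log N(y | 0, 2).
set.seed(1)
y <- 1.5
theta <- rnorm(1e5, y / 2, sqrt(1 / 2))    # exact posterior draws, N(y/2, 1/2)
log_f <- dnorm(theta, mean(theta), sd(theta), log = TRUE)  # tuning density
log_post_kernel <- dnorm(y, theta, 1, log = TRUE) +
  dnorm(theta, 0, 1, log = TRUE)           # log[p(y|theta) p(theta)]
r <- log_f - log_post_kernel
m <- max(r)
gd_lml <- -(m + log(mean(exp(r - m))))     # stabilized log p(y) estimate
```

Because the normal tuning density closely matches this Gaussian posterior, the ratio is nearly constant and the estimator has very low variance, which is exactly the stability property that the plain HME lacks.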
The HME method, where HME stands for harmonic mean estimator,
of Newton and Raftery (1994) is the easiest, and therefore fastest,
estimation of the logarithm of the marginal likelihood. However, it is
an unreliable estimator and should be avoided, because small
likelihood values can overly influence the estimator, variance is
often infinite, and the Gaussian central limit theorem is usually not
satisfied. It is included here for completeness. There is not a
function in this package that uses this method by default. Given
N samples, the estimator is
1 / [(1/N) sum_{i=1}^N exp(-LL_i)].
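The estimator above can be computed stably on the log scale with the log-sum-exp trick. The helper below is an illustrative sketch, not the package's internal code:

```r
# Harmonic mean estimator of the log marginal likelihood from a vector
# LL of MCMC log-likelihood samples:
#   log LML = -log[(1/N) sum exp(-LL_i)], stabilized via log-sum-exp.
hme_lml <- function(LL) {
  N <- length(LL)
  m <- max(-LL)
  -(m + log(sum(exp(-LL - m))) - log(N))
}
hme_lml(rep(-10, 100))  # constant log-likelihood returns that constant: -10
```

Even with stable arithmetic, the statistical flaws noted above remain: a few very small likelihood values dominate the sum, so the estimate can be far from the truth.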
The LME method uses the Laplace-Metropolis Estimator (LME), in
which the estimation of the Hessian matrix is approximated
numerically. It is the slowest method here, though it returns an
estimate in more cases than the other methods. The supplied
Model specification must be executed a number of times equal
to k^2 x 2, where k is the number of
parameters. In large dimensions, this is very slow. The
Laplace-Metropolis Estimator is inappropriate with hierarchical
models. The model-fitting functions use
LME when the model has converged,
in which case it uses the posterior means or modes, and is itself
a form of Laplace approximation.
The Laplace-Metropolis Estimator (LME) is the logarithmic form of equation 4 in Lewis and Raftery (1997). In a non-hierarchical model, the marginal likelihood may easily be approximated with the Laplace-Metropolis Estimator for model m as
p(y|m) = (2*pi)^(d_m/2) |Sigma_m|^(1/2) p(y|Theta_m, m)p(Theta_m|m)
where d_m is the number of parameters and Sigma_m is the inverse of the negative of the approximated Hessian matrix of second derivatives, evaluated at the posterior mode Theta_m.
As a rough guideline from Kass and Raftery (1995), LME is worrisome when the sample size of the data is less than five times the number of parameters, and LME should be adequate in most problems when the sample size of the data exceeds twenty times the number of parameters (p. 778).
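In logarithmic form, the estimator is log p(y|m) = (d_m/2) log(2*pi) + (1/2) log|Sigma_m| + log p(y|Theta_m, m) + log p(Theta_m|m). The sketch below works it through for an assumed conjugate normal model (one observation y ~ N(theta, 1), prior theta ~ N(0, 1)), where the Gaussian posterior makes the Laplace approximation exact:

```r
# Logarithmic Laplace-Metropolis Estimator for y ~ N(theta, 1),
# theta ~ N(0, 1). Posterior is N(y/2, 1/2), so:
y <- 1.5
theta_star <- y / 2          # posterior mode
Sigma <- matrix(1 / 2)       # inverse of negative Hessian of log posterior
d <- 1                       # number of parameters
lme <- (d / 2) * log(2 * pi) + 0.5 * log(det(Sigma)) +
  dnorm(y, theta_star, 1, log = TRUE) +      # log-likelihood at the mode
  dnorm(theta_star, 0, 1, log = TRUE)        # log-prior at the mode
lme                          # equals log dnorm(y, 0, sqrt(2)) exactly here
```

In non-Gaussian posteriors the result is only an approximation, and in practice the Hessian (and hence Sigma) must itself be approximated numerically, which is what makes the method slow.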
The NSIS method is essentially the marginal likelihood estimation
function in the
MargLikArrogance package. After
HME, this is the fastest method available here. The
PMC function uses
NSIS. At least 301 stationary
samples are required, and the number of parameters cannot exceed half
the number of stationary samples.
LML returns a list with two components:
LML: This is an approximation of the logarithm of the marginal
likelihood (LML), which is notoriously difficult to estimate. For this
reason, several methods are provided. The marginal likelihood is
useful when comparing models, such as with Bayes factors.
VarCov: This is a variance-covariance matrix, the negative inverse of
the Hessian matrix, if estimated. It is estimated only with the LME
method, and is otherwise NA.
Statisticat, LLC. [email protected]
Gelfand, A.E. and Dey, D.K. (1994). "Bayesian Model Choice: Asymptotics and Exact Calculations". Journal of the Royal Statistical Society, Series B 56, p. 501–514.
Kass, R.E. and Raftery, A.E. (1995). "Bayes Factors". Journal of the American Statistical Association, 90(430), p. 773–795.
Lewis, S.M. and Raftery, A.E. (1997). "Estimating Bayes Factors via Posterior Simulation with the Laplace-Metropolis Estimator". Journal of the American Statistical Association, 92, p. 648–655.
Newton, M.A. and Raftery, A.E. (1994). "Approximate Bayesian Inference by the Weighted Likelihood Bootstrap". Journal of the Royal Statistical Society, Series B 56, p. 3–48.
### If a model object were created and called Fit, then:
#
### Applying HME to an object of class demonoid or pmc:
#LML(LL=Fit$Deviance*(-1/2), method="HME")
#
### Applying LME to an object of class demonoid:
#LML(Model, MyData, Modes=apply(Fit$Posterior1, 2, median), method="LME")
#
### Applying NSIS to an object of class demonoid:
#LML(theta=Fit$Posterior1, LL=Fit$Deviance*-(1/2), method="NSIS")
#
### Applying LME to an object of class iterquad:
#LML(Model, MyData, Modes=Fit$Summary1[,1], method="LME")
#
### Applying LME to an object of class laplace:
#LML(Model, MyData, Modes=Fit$Summary1[,1], method="LME")
#
### Applying LME to an object of class vb:
#LML(Model, MyData, Modes=Fit$Summary1[,1], method="LME")