Description

This function attempts to find a scale vector that will produce a desired acceptance rate for Metropolis sampling of the given probability density function. A secondary effect of this function is that it runs the sampler as it does its work, so the parameter set should be in a high-probability region of the parameter space by the time it is finished.
Arguments

lpost: Log-posterior function.

p0: Starting parameter values for the sampler.

scl_guess: An initial guess for the scale vector. If omitted, one will be generated randomly.

target_accept: Target acceptance rate. Must be between 0 and 1.

nsamp: Number of samples to run each time we evaluate a new scale vector. Larger values make the estimate of the acceptance probability more robust, but also make the calculation take longer.
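The arguments above fit together roughly as in the following sketch. The function name find_scale is a hypothetical stand-in, since the Usage block is not shown here; the argument names are the ones documented above.

```r
## Toy log-posterior: a standard bivariate normal
lpost <- function(p) {
  -0.5 * sum(p^2)
}
p0 <- c(0, 0)                           # starting parameter values

## find_scale is a hypothetical name for the function documented here
ms <- find_scale(lpost, p0,
                 scl_guess = c(1, 1),   # initial guess for the scale vector
                 target_accept = 0.3,   # aim for ~30% acceptance
                 nsamp = 1000)          # samples per scale evaluation
## ms should be a metrosamp structure, hopefully tuned for
## efficient sampling, with the sampler already warmed up.
```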
Details

This function is completely experimental, and in its current incarnation represents the most naive way possible of accomplishing its goal. Feedback on its performance in real-world problems is welcome.
Value

A metrosamp structure that (hopefully) is tuned for efficient sampling.
More details

What we are doing here is using the standard function-minimization algorithms of the optim function to try to find a scale vector that drives the difference between the target acceptance rate and the actual acceptance rate to zero. The wrinkle in this strategy is that the actual acceptance rate is a stochastic function of the scale vector, and the minimization algorithms aren't really set up to deal with that.
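The optim-based search can be sketched roughly as follows. Here accept_rate is a hypothetical helper standing in for running the sampler nsamp times with a candidate scale vector and measuring the fraction of accepted proposals; it is not part of the package's documented interface.

```r
## Sketch: minimize the gap between actual and target acceptance.
## accept_rate(scl) is a hypothetical stand-in for running nsamp
## Metropolis steps with scale vector scl and returning the observed
## acceptance fraction, so objfun is a stochastic objective.
objfun <- function(logscl) {
  scl <- exp(logscl)                 # work on the log scale to keep scl > 0
  abs(accept_rate(scl) - target_accept)
}
opt <- optim(log(scl_guess), objfun)
scl_final <- exp(opt$par)
```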
In practice, the algorithms seem to do all right at finding their way to something that is reasonably close to the target value (it probably helps that there are likely many such values; probably they form something like an elliptical surface in the scale factor parameter space). However, the stochastic behavior seems to mess up the algorithms' convergence criteria pretty badly. Often the algorithms fly right by a point that achieves the target acceptance exactly and settle on one that is rather far off.
To combat this tendency, we keep track of the best scale vector we've seen so far (in terms of getting close to the target), and we always report that as the result, even if the optimization algorithm actually stopped on something else. (This is similar to, and inspired by, the way that stochastic gradient descent is used in some machine learning applications). Ideally we would do this in the optimization algorithm code, but we don't have easy access to that, so instead we keep track of this in the objective function, and then fish that information out of the objective function's environment once the optimizer is finished. It's a little ugly, but it gets the job done.
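The best-so-far bookkeeping described above can be sketched as a closure: the objective function records the best scale vector in its enclosing environment, and we fish it back out after optim returns. The names here are illustrative, not the package's actual internals, and accept_rate is again a hypothetical helper that measures the acceptance fraction for a candidate scale vector.

```r
make_objfun <- function(target_accept) {
  best_scl <- NULL
  best_dist <- Inf
  objfun <- function(logscl) {
    scl <- exp(logscl)
    dist <- abs(accept_rate(scl) - target_accept)  # stochastic objective
    if (dist < best_dist) {
      ## Record the best scale vector seen so far in the
      ## closure's enclosing environment
      best_dist <<- dist
      best_scl <<- scl
    }
    dist
  }
  objfun
}

objfun <- make_objfun(0.3)
opt <- optim(log(scl_guess), objfun)
## Fish the best result out of the objective function's environment,
## even if optim itself settled on something else.
best <- environment(objfun)$best_scl
```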