mcmc_replica_exchange_mc | R Documentation
Replica Exchange Monte Carlo

Description

Replica Exchange Monte Carlo is a Markov chain Monte Carlo (MCMC) algorithm that is also known as Parallel Tempering. The algorithm runs multiple sampling chains in parallel, each at a different temperature, and exchanges states between chains according to the Metropolis-Hastings criterion. The K replicas are parameterized in terms of inverse_temperatures, (beta[0], beta[1], ..., beta[K-1]). If the target distribution has probability density p(x), the kth replica has density p(x)**beta_k.
Usage

mcmc_replica_exchange_mc(
  target_log_prob_fn,
  inverse_temperatures,
  make_kernel_fn,
  swap_proposal_fn = tfp$mcmc$replica_exchange_mc$default_swap_proposal_fn(1),
  state_includes_replicas = FALSE,
  seed = NULL,
  name = NULL
)
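As an illustration only (not taken from the package documentation), here is a minimal sketch of constructing the kernel for a standard normal target, using Hamiltonian Monte Carlo as the per-replica sampler; the step_size, num_leapfrog_steps, and temperature ladder below are arbitrary assumptions.

library(tfprobability)

# Unnormalized log-density of the target distribution: a standard normal.
target_log_prob_fn <- function(x) {
  tfd_normal(loc = 0, scale = 1) %>% tfd_log_prob(x)
}

# Each replica runs its own HMC chain on its tempered density p(x)**beta_k.
make_kernel_fn <- function(target_log_prob_fn, seed) {
  mcmc_hamiltonian_monte_carlo(
    target_log_prob_fn = target_log_prob_fn,
    seed = seed,
    step_size = 0.5,
    num_leapfrog_steps = 3
  )
}

remc <- mcmc_replica_exchange_mc(
  target_log_prob_fn = target_log_prob_fn,
  inverse_temperatures = c(1, 0.5, 0.25, 0.125),  # beta[0] = 1 samples the target itself
  make_kernel_fn = make_kernel_fn
)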
Arguments

target_log_prob_fn
Function which takes an argument like current_state and returns its (possibly unnormalized) log-density under the target distribution.

inverse_temperatures
1D Tensor of inverse temperatures to perform samplings with each replica. Must have statically known shape. inverse_temperatures[0] produces the states returned by samplers, and is typically == 1.

make_kernel_fn
Function which takes target_log_prob_fn and seed args and returns a TransitionKernel instance.

swap_proposal_fn
Function which takes a number of replicas and returns combinations of replicas for exchange.

state_includes_replicas
Boolean indicating whether the leftmost dimension of each state sample should index replicas. If TRUE, the leftmost dimension of the current_state passed to mcmc_sample_chain() will be interpreted as indexing replicas. Default value: FALSE.

seed
Integer to seed the random number generator.

name
String prefixed to Ops created by this function. Default value: NULL (i.e., "remc_kernel").
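For example, a geometric ladder of inverse temperatures (a common choice, assumed here purely for illustration) can be built as follows:

# Four replicas with geometrically decaying inverse temperatures: 1, 0.5, 0.25, 0.125.
num_replica <- 4
inverse_temperatures <- 0.5 ^ (0:(num_replica - 1))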
Details

Typically beta[0] = 1.0, and 1.0 > beta[1] > beta[2] > ... > 0.0.

beta[0] == 1 ==> the first replica samples from the target density, p.

beta[k] < 1, for k = 1, ..., K-1 ==> the other replicas sample from "flattened" versions of p (the peaks are less high, the valleys less low). These distributions are somewhat closer to a uniform on the support of p.

Samples from adjacent replicas i, i + 1 are used as proposals for each other in a Metropolis step. This allows the lower-beta samples, which explore less dense areas of p, to occasionally be used to help the beta == 1 chain explore new regions of the support. Samples from replica 0 are returned, and the others are discarded.
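Assuming a kernel remc built as in the earlier sketch, samples from the beta[0] == 1 replica can then be drawn with mcmc_sample_chain(); the chain length and initial state below are arbitrary illustrative choices.

# Draw 1000 post-burn-in samples; only the beta == 1 replica's states are returned.
samples <- remc %>% mcmc_sample_chain(
  num_results = 1000,
  num_burnin_steps = 500,
  current_state = 0
)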
Value

A list of next_state (Tensor or Python list of Tensors representing the state(s) of the Markov chain(s) at each result step; has the same shape as the input current_state) and kernel_results (a collections$namedtuple of internal calculations used to advance the chain).
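The next_state / kernel_results pair described above is what the kernel's one_step() method returns. A low-level sketch follows, under the same assumptions as the earlier example (most users call mcmc_sample_chain() instead, and the scalar state here is only illustrative):

library(tensorflow)

# One replica-exchange step, starting from a freshly bootstrapped kernel state.
current_state <- tf$constant(0)
previous_results <- remc$bootstrap_results(current_state)
step <- remc$one_step(current_state, previous_results)

next_state     <- step[[1]]  # same shape as current_state
kernel_results <- step[[2]]  # namedtuple of internal calculations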
See Also

Other mcmc_kernels: mcmc_dual_averaging_step_size_adaptation(), mcmc_hamiltonian_monte_carlo(), mcmc_metropolis_adjusted_langevin_algorithm(), mcmc_metropolis_hastings(), mcmc_no_u_turn_sampler(), mcmc_random_walk_metropolis(), mcmc_simple_step_size_adaptation(), mcmc_slice_sampler(), mcmc_transformed_transition_kernel(), mcmc_uncalibrated_hamiltonian_monte_carlo(), mcmc_uncalibrated_langevin(), mcmc_uncalibrated_random_walk()