Markov chain Monte Carlo for an equality- and inequality-constrained Dirichlet distribution using a hit-and-run algorithm.
alpha
parameter vector for Dirichlet distribution. Alternatively, an object of class "hitrun" from a previous invocation of this function, in which case the run is continued from where that one left off.

nbatch
the number of batches.

blen
the length of batches.

nspac
the spacing of iterations that contribute to batches.

a1
a numeric or character matrix or NULL. See details.

b1
a numeric or character vector or NULL. See details.

a2
a numeric or character matrix or NULL. See details.

b2
a numeric or character vector or NULL. See details.

outmat
a numeric matrix, which controls the output. If non-NULL, the batch means computed are means of outmat %*% x, where x is the state of the Markov chain; if NULL, batch means of the state itself are computed.

debug
if TRUE, extra output useful for debugging is included in the returned object.

stop.if.implied.equalities
if TRUE, the function stops with an error if any implied equality constraints are found (see details below).

...
ignored arguments. Allows the two methods to have different arguments. You cannot change the Dirichlet parameter or the constraints (hence cannot change the target distribution) when using the method for class "hitrun".
Runs a hit-and-run algorithm (for which see the references) producing a Markov chain with equilibrium distribution having a Dirichlet distribution with parameter vector alpha constrained to lie in the subset of the unit simplex consisting of x satisfying

    a1 %*% x <= b1
    a2 %*% x == b2
Hence if a1 is NULL then so must be b1, and vice versa, and similarly for a2 and b2.

If any of a1, b1, a2, b2 are of type "character", then they must be valid GMP (GNU multiple precision) rationals, that is, if run through q2q, they do not give an error. This allows constraints to be represented exactly (using infinite-precision rational arithmetic) if so desired. See also the section on this subject below.
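The character form can be illustrated with a small sketch (the specific constraint here, x[1] <= 1/3 for a three-dimensional Dirichlet, is invented for illustration; only the argument names a1 and b1 come from this page):

```r
library(hitrun)

# one inequality constraint, x[1] <= 1/3, written as exact GMP rationals
a1 <- rbind(c("1", "0", "0"))
b1 <- "1/3"

out <- hitrun(rep(1, 3), nbatch = 10, blen = 100, a1 = a1, b1 = b1)
colMeans(out$batch)  # mean of the first coordinate should fall below 1/3
```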
an object of class "hitrun", which is a list containing at least the following components:

batch
the matrix of batch means, having nbatch rows.

initial
initial state of Markov chain.

final
final state of Markov chain.

initial.seed
value of .Random.seed before the run.

final.seed
value of .Random.seed after the run.

time
running time from system.time().

alpha
the Dirichlet parameter vector.

nbatch
the argument nbatch.

blen
the argument blen.

nspac
the argument nspac.

outmat
the argument outmat.
The arguments a1, b1, a2, and b2 can and should be given as GMP (GNU multiple precision) rational values. This allows the computational geometry calculations for the constraint set to be done exactly, without error. For example, if a1 has elements that have been rounded to two decimal places, one should do
    a1 <- z2q(round(a1 * 100), rep(100, length(a1)))
and similarly for b1, a2, and b2 to make them exact.
For all the conversion functions between ordinary computer numbers and
GMP rational numbers see ConvertGMP. For all the functions that
do arithmetic on GMP rational numbers, see ArithmeticGMP.
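The conversion functions named above come from the rcdd package, on which hitrun depends. A minimal sketch of what exact conversion looks like (assuming rcdd is installed; printed values are indicative):

```r
library(rcdd)

# d2q converts doubles to GMP rationals exactly -- exactly as binary
# doubles, which is why 0.01 does not come out as "1/100"
d2q(0.5)       # "1/2", since 0.5 is exact in binary
d2q(0.01)      # a large fraction: the binary approximation of 0.01

# z2q(numerator, denominator) builds the rational you actually intend
z2q(1, 100)    # "1/100"

# q2q checks a character rational and puts it in canonical form
q2q("2/4")
```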
If any constraints supplied as inequality constraints (specified by rows of a1 and the corresponding components of b1) actually hold with equality for all points in the constraint set, they are called implied equality constraints. The program must establish that none of these exist (a fast operation) or, otherwise, find out which constraints supplied as inequality constraints are actually implied equality constraints, and this operation is very slow when the state is high-dimensional. One example with 1000 variables took 3 days of computing time when there were implied equality constraints in the specification. The same example takes 9 minutes when the same constraint set is specified in a different way so that there are no implied equality constraints.
This issue is not a big deal when there are only a few hundred variables, because the algorithm to find implied equality constraints is not that slow. The same example that takes 3 days of computing time with 1000 variables takes only 15 seconds with 100 variables, 3 and 1/2 minutes with 200 variables, and 23 minutes with 300 variables. As one can see, this issue does become a big deal as the number of variables increases. Thus users should avoid implied equality constraints, if possible, when there are many variables. Admittedly, there is no sure way users can identify and eliminate implied equality constraints. (The sure way to do that is precisely the time-consuming step we are trying to avoid.) The argument stop.if.implied.equalities can be used to quickly test for the presence of implied equalities.
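A hedged sketch of that quick test (the behavior on detection, stopping with an error, is inferred from the argument name; the constraint below is invented for illustration):

```r
library(hitrun)

# sum(x) <= 1 holds with equality everywhere on the unit simplex,
# so this inequality constraint is an implied equality constraint
a1 <- rbind(rep(1, 5))
b1 <- 1

result <- tryCatch(
    hitrun(rep(1, 5), nbatch = 2, blen = 2, a1 = a1, b1 = b1,
           stop.if.implied.equalities = TRUE),
    error = function(e) conditionMessage(e))
result  # an error message if implied equalities were detected
```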
This function follows the philosophy of MCMC used in the CRAN package mcmc and the introductory chapter of the Handbook of Markov Chain Monte Carlo (Geyer, 2011).

The hitrun function automatically does batch means in order to reduce the size of output and to enable easy calculation of Monte Carlo standard errors (MCSE), which measure error due to the Monte Carlo sampling, not error due to statistical sampling: MCSE gets smaller when you run the computer longer, but statistical sampling variability only gets smaller when you get a larger data set. All of this is explained in the package vignette for the mcmc package (vignette("demo", "mcmc")) and in Section 1.10 of Geyer (2011).

The hitrun function does not do “burnin” because this concept does not actually help with MCMC (Geyer, 2011, Section 1.11.4), but the reentrant property of the hitrun function does allow one to do “burnin” if one wants.
Assuming alpha, a1, b1, a2, and b2 have already been defined,

    out <- hitrun(alpha, a1 = a1, b1 = b1, a2 = a2, b2 = b2, nbatch = 100, blen = 1000)
    out <- hitrun(out)

throws away a run of 100 thousand iterations before doing another run of 100 thousand iterations that is actually useful for analysis. For example,

    colMeans(out$batch)
    apply(out$batch, 2, sd) / sqrt(out$nbatch)

gives estimates of posterior means and their MCSE, assuming the batch length (here 1000) was long enough to contain almost all of the significant autocorrelation (see Geyer, 2011, Section 1.10, for more on MCSE). The reentrant property of the hitrun function (the second run starts where the first one stops) assures that this is really “burnin”.

The reentrant property allows one to do very long runs without having to do them in one invocation of the hitrun function.

    out2 <- hitrun(out)
    out3 <- hitrun(out2)
    batch <- rbind(out$batch, out2$batch, out3$batch)

produces a result as if the first run had been three times as long.
Belisle, C. J. P., Romeijn, H. E. and Smith, R. L. (1993) Hit-and-run algorithms for generating multivariate distributions. Mathematics of Operations Research, 18, 255–266.
Chen, M. H. and Schmeiser, B. (1993) Performance of the Gibbs, hit-and-run, and Metropolis samplers. Journal of Computational and Graphical Statistics, 2, 251–272.
Geyer, C. J. (2011) Introduction to MCMC. In Handbook of Markov Chain Monte Carlo. Edited by S. P. Brooks, A. E. Gelman, G. L. Jones, and X. L. Meng. Chapman & Hall/CRC, Boca Raton, FL, pp. 3–48.
    # Bayesian inference for discrete probability distribution on {1, ..., d}
    # state is probability vector p of length d
    d <- 10
    x <- 1:d
    # equality constraints
    #     mean equal to (d + 1) / 2, that is, sum(x * p) = (d + 1) / 2
    # inequality constraints
    #     median less than or equal to (d + 1) / 2, that is,
    #     sum(p[x <= (d + 1) / 2]) <= 1 / 2
    a2 <- rbind(x)
    b2 <- (d + 1) / 2
    a1 <- rbind(as.numeric(x <= (d + 1) / 2))
    b1 <- 1 / 2
    # simulate prior, which is Dirichlet(alpha)
    # posterior would be another Dirichlet with n + alpha - 1,
    # where n is count of IID data for each value
    alpha <- rep(2.3, d)
    out <- hitrun(alpha, nbatch = 30, blen = 250,
        a1 = a1, b1 = b1, a2 = a2, b2 = b2)
    # prior means
    round(colMeans(out$batch), 3)
    # Monte Carlo standard errors
    round(apply(out$batch, 2, sd) / sqrt(out$nbatch), 3)
