Description

The LaplacesDemon function is the main function of Laplace's Demon. Given data, a model specification, and initial values, LaplacesDemon maximizes the logarithm of the unnormalized joint posterior density with MCMC and provides samples of the marginal posterior distributions, deviance, and other monitored variables. The LaplacesDemon.hpc function extends LaplacesDemon to parallel chains for multicore or cluster high performance computing.
Usage

LaplacesDemon(Model, Data, Initial.Values, Covar=NULL, Iterations=100000,
     Status=1000, Thinning=100, Algorithm="RWM", Specs=NULL, LogFile="",
     ...)
LaplacesDemon.hpc(Model, Data, Initial.Values, Covar=NULL,
     Iterations=100000, Status=1000, Thinning=100, Algorithm="RWM",
     Specs=NULL, LogFile="", Chains=2, CPUs=2, Type="PSOCK", Packages=NULL,
     Dyn.libs=NULL)
Arguments

Model: This required argument receives the model from a user-defined function that must be named Model. The user-defined function is where the model is specified.

Data: This required argument accepts a list of data. The list of data must contain mon.names (the names of the monitored variables) and parm.names (the names of the parameters).

Initial.Values: This argument requires a vector of initial values, one for each parameter. When suitable initial values are unknown, they may be generated with the GIV function, as in the Examples below. For LaplacesDemon.hpc, a matrix of initial values may be supplied, in which each row is a chain.

Covar: This argument defaults to NULL, in which case a default proposal variance or covariance is used. Alternatively, a proposal covariance matrix, variance vector, or list of covariance matrices from a previous update may be supplied, so that the next update starts where the last one left off.

Iterations: This required argument accepts integers larger than 10, and determines the number of iterations that Laplace's Demon will update the parameters while searching for target distributions. The required amount of computer memory increases with Iterations.

Status: This argument accepts integers between 1 and the number of iterations, and indicates how often the user would like the status of the number of iterations and proposal type (for example, multivariate, componentwise, mixture, or subset) printed to the screen. For example, if a model is updated for 1,000 iterations and Status=100, then a status message is printed at every 100th iteration.

Thinning: This argument accepts integers between 1 and the number of iterations, and indicates that every nth iteration will be retained, while the other iterations are discarded. If Thinning=5, then every 5th iteration is retained. (A short numeric sketch of how Iterations, Status, and Thinning interact appears after this list of arguments.)

Algorithm: This argument accepts the abbreviated name of the MCMC algorithm, which must appear in quotes. A list of MCMC algorithms appears below in the Details section, and the abbreviated name is in parentheses.

Specs: This argument defaults to NULL, and accepts a list of specifications for the MCMC algorithm declared in the Algorithm argument. The specifications for each algorithm are listed in the Details section.

LogFile: This argument is used to specify a log file name in quotes in the working directory as a destination, rather than the console, for the output messages of LaplacesDemon.

Chains: This argument is required only for LaplacesDemon.hpc, and indicates the number of parallel chains.

CPUs: This argument is required for parallel independent or interacting chains in LaplacesDemon.hpc, and indicates the number of central processing units (CPUs) to be used.

Type: This argument defaults to "PSOCK" for a parallel socket cluster; alternatively, "MPI" may be specified for an MPI cluster.

Packages: This optional argument is for use with parallel independent or interacting chains, and defaults to NULL. It accepts a vector of package names, in quotes, to be loaded on each CPU.

Dyn.libs: This optional argument is for use with parallel independent or interacting chains, and defaults to NULL. It accepts a vector of the names of dynamic link libraries (shared objects) to be loaded on each CPU.

...: Additional arguments are unused.
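As referenced in the Thinning entry above, the following short sketch illustrates how the Iterations, Status, and Thinning arguments interact. The numbers are illustrative only, and the arithmetic is an approximation for orientation, not code from the package:

# Illustrative arithmetic only: with these settings, roughly
# Iterations/Thinning thinned samples are retained, and a status
# message is printed about every Status iterations.
Iterations <- 10000; Status <- 1000; Thinning <- 10
floor(Iterations / Thinning)                 # approximately 1000 retained samples
seq(from=Status, to=Iterations, by=Status)   # iterations at which status is printed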
Details

LaplacesDemon offers numerous MCMC algorithms for numerical approximation in Bayesian inference. The algorithms are:
Adaptive Directional Metropolis-within-Gibbs (ADMG)
Adaptive Griddy-Gibbs (AGG)
Adaptive Hamiltonian Monte Carlo (AHMC)
Adaptive Metropolis (AM)
Adaptive Metropolis-within-Gibbs (AMWG)
Adaptive-Mixture Metropolis (AMM)
Affine-Invariant Ensemble Sampler (AIES)
Componentwise Hit-And-Run Metropolis (CHARM)
Delayed Rejection Adaptive Metropolis (DRAM)
Delayed Rejection Metropolis (DRM)
Differential Evolution Markov Chain (DEMC)
Elliptical Slice Sampling (ESS)
Griddy-Gibbs (GG)
Hamiltonian Monte Carlo (HMC)
Hamiltonian Monte Carlo with Dual-Averaging (HMCDA)
Hit-And-Run Metropolis (HARM)
Independence Metropolis (IM)
Interchain Adaptation (INCA)
Metropolis-Adjusted Langevin Algorithm (MALA)
Metropolis-within-Gibbs (MWG)
No-U-Turn Sampler (NUTS)
Random-Walk Metropolis (RWM)
Reversible-Jump (RJ)
Robust Adaptive Metropolis (RAM)
Sequential Adaptive Metropolis-within-Gibbs (SAMWG)
Sequential Metropolis-within-Gibbs (SMWG)
Slice Sampler (Slice)
Stochastic Gradient Langevin Dynamics (SGLD)
Tempered Hamiltonian Monte Carlo (THMC)
t-walk (twalk)
Updating Sequential Adaptive Metropolis-within-Gibbs (USAMWG)
Updating Sequential Metropolis-within-Gibbs (USMWG)
It is a goal for the documentation in the LaplacesDemon package to be extensive. However, details of MCMC algorithms are best explored online at http://www.bayesian-inference.com/mcmc, as well as in the "LaplacesDemon Tutorial" and "Bayesian Inference" vignettes. Algorithm specifications (Specs) are listed below:
A is the number of initial, adaptive iterations to be discarded as burn-in, and is used in HMCDA and NUTS.

Adaptive is the iteration in which adaptation begins, and is used in AM, AMM, DRAM, and INCA. These algorithms adapt according to an observed covariance matrix, and should sample before beginning to adapt.

alpha.star is the desired acceptance rate(s) in RAM, and is optional in CHARM and HARM. It is a scalar in CHARM or HARM, and in RAM it may be a scalar or a vector equal in length to the number of targets. The recommended value for multivariate proposals is alpha.star=0.234, and for componentwise proposals it is alpha.star=0.44.

at affects the traverse move in twalk, and at=6 is recommended. It helps when some parameters are highly correlated, and the correlation structure may change through the state-space. The traverse move is associated with an acceptance rate that decreases as the number of parameters increases, and is the reason that n1 is used to select a subset of parameters each iteration. If adjusted, it is recommended to stay in the interval [2,10].

aw affects the walk move in twalk, and aw=1.5 is recommended. If adjusted, it is recommended to stay in the interval [0.3,2].

beta is a scale parameter for AIES, and defaults to 2.

bin.n is the scalar size parameter for a binomial prior distribution of model size in the RJ algorithm.

bin.p is the scalar probability parameter for a binomial prior distribution of model size in the RJ algorithm.

B is a list of blocked parameters. Each component of the list represents a block of parameters, and contains a vector in which each element is the position of the associated parameter in parm.names. This specification is optional in the AMM, ESS, HARM, and RWM algorithms. For more information on blockwise sampling, see the Blocks function; a blocked-sampling sketch also appears after this list.
Begin indicates the time-period in which to begin updating (filtering or predicting) in the USAMWG and USMWG algorithms.

delta is the target acceptance rate in HMCDA and NUTS. The recommended value is 0.65 in HMCDA and 0.6 in NUTS.

Dist is the proposal distribution in RAM, and may either be Dist="t" for t-distributed or Dist="N" for normally-distributed proposals.

dparm accepts a vector of integers that indicate discrete parameters. This argument is for use with the AGG or GG algorithm.

Dyn is a T x K matrix of dynamic parameters, where T is the number of time-periods and K is the number of dynamic parameters. Dyn is used by SAMWG, SMWG, USAMWG, and USMWG. Non-dynamic parameters are updated first in each sampler iteration, then dynamic parameters are updated in a random order in each time-period, and sequentially by time-period.

epsilon is the step-size in AHMC, HMC, HMCDA, NUTS, SGLD, and THMC. It is a vector equal in length to the number of parameters in AHMC, HMC, and THMC. It is a scalar in HMCDA and NUTS. It is either a scalar or a vector equal in length to the number of iterations in SGLD. When epsilon=NULL in HMCDA or NUTS (only), a reasonable initial value is found.

file is the quoted name of a numeric matrix of data, without headers, for SGLD. The big data set must be a .csv file. This matrix has Nr rows and Nc columns. Each iteration, SGLD will randomly select a block of rows, where the number of rows is specified by the size argument.

Fit is an object of class demonoid in the USAMWG and USMWG algorithms. Posterior samples before the time-period specified in the Begin argument are not updated, and are used instead from Fit.

gamma controls the step size in DEMC or the decay of adaptation in RAM. In DEMC, it is positive and defaults to 2.38/sqrt(2J) when NULL, where J is the length of initial values. For RAM, it is in the interval (0.5,1], and 0.66 is recommended.
Grid accepts either a vector or a list of vectors of evenly-spaced points on a grid for the AGG or GG algorithm. When the argument is a vector, the same grid is applied to all parameters. When the argument is a list, each component in the list has a grid that is applied to the corresponding parameter. The algorithm will evaluate each continuous parameter at the latest value plus each point in the grid, or each discrete parameter (see dparm) at each grid point (which should be each discrete value).

L is a scalar number of leapfrog steps in AHMC, HMC, and THMC. When L=1, the algorithm reduces to Langevin Monte Carlo (LMC).

lambda is a scalar trajectory length in HMCDA.

Lmax is a scalar maximum for L (see above) in HMCDA.

m is a scalar (or vector equal in length to the number of initial values) integer in [1,Inf] in which each element indicates the maximum number of steps for creating the slice interval. It is used by the Slice algorithm, and defaults to infinity.

mu is a vector that is equal in length to the initial values. This vector will be used as the mean of the proposal distribution, and is usually the posterior mode of a previously-updated LaplaceApproximation.

Nc is either the number of (un-parallelized) parallel chains in DEMC (and must be at least 3) or the number of columns of big data in SGLD.

Nr is the number of rows of big data in SGLD.

n1 affects the size of the subset of each set of points to adjust, and is used in twalk. It relates to the number of parameters, and n1=4 is recommended. If adjusted, it is recommended to stay in the interval [2,20].
parm.p is a vector of probabilities for parameter selection in the RJ algorithm, and must be equal in length to the number of initial values.

Periodicity specifies how often in iterations the adaptive algorithm should adapt, and is used by AHMC, AM, AMM, AMWG, DRAM, INCA, RAM, SAMWG, and USAMWG. If Periodicity=10, then the algorithm adapts every 10th iteration. A higher Periodicity is associated with an algorithm that runs faster, because it does not have to calculate adaptation as often, though the algorithm adapts less often to the target distributions, so it is a trade-off. It is recommended to use the lowest value that runs fast enough to suit the user, or that provides sufficient adaptation.

selectable is a vector of indicators of whether or not a parameter is selectable for variable selection in the RJ algorithm. Non-selectable parameters are assigned a zero, and are always in the model. Selectable parameters are assigned a one. This vector must be equal in length to the number of initial values.

selected is a vector of indicators of whether or not each parameter is selected when the RJ algorithm begins, and must be equal in length to the number of initial values.

SIV stands for secondary initial values and is used by twalk. SIV must be the same length as Initial.Values, and each element of these two vectors must be unique from each other, both before and after being passed to the Model function. SIV defaults to NULL, in which case values are generated with GIV.

size is the number of rows of big data to be read into SGLD each iteration.

smax is the maximum allowable tuning parameter sigma, the standard deviation of the conditional distribution, in the AGG algorithm.
Temperature is used in the THMC algorithm to heat up the momentum in the first half of the leapfrog steps, and then cool down the momentum in the last half. Temperature must be positive. When greater than 1, THMC should explore more diffuse distributions, and may be helpful with multimodal distributions.

w is used in AMM, DEMC, and Slice. It is a mixture weight for both the AMM and DEMC algorithms, and in these algorithms it is in the interval (0,1]. For AMM, it is recommended to use w=0.05, as per Roberts and Rosenthal (2009). The two mixture components in AMM are adaptive multivariate and static/symmetric univariate proposals. The mixture is determined at each iteration with mixture weight w. In the AMM algorithm, a higher value of w is associated with more static/symmetric univariate proposals, and a lower w is associated with more adaptive multivariate proposals. AMM will be unable to include the multivariate mixture component until it has accumulated some history, and models with more parameters will take longer to be able to use adaptive multivariate proposals. In DEMC, w indicates the probability that each iteration uses a snooker update, rather than a projection update, and the recommended default is w=0.1. In the Slice algorithm, this argument may be a scalar or a vector equal in length to the number of initial values, and each element indicates the step size used for creating the slice interval.

Z accepts a T x J matrix or T x J x Nc array of thinned samples for T thinned iterations, J parameters, and Nc chains for DEMC. Z defaults to NULL. The matrix of thinned posterior samples from a previous run may be used, in which case the samples are copied across the chains.
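As referenced in the B entry above, here is a minimal, hypothetical sketch of blockwise sampling via the Specs argument. It reuses the Model, MyData, and Initial.Values objects constructed in the Examples section below, and assumes five parameters (four regression coefficients and one scale parameter); the block positions should be adapted to the model at hand.

#MyBlocks <- list(1:4, 5)   # hypothetical blocks: the four beta positions, then sigma
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
#     Covar=NULL, Iterations=1000, Status=100, Thinning=1,
#     Algorithm="HARM", Specs=list(alpha.star=0.234, B=MyBlocks))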
Value

LaplacesDemon returns an object of class demonoid, and LaplacesDemon.hpc returns an object of class demonoid.hpc that is a list of objects of class demonoid, where the number of components in the list is the number of parallel chains. Each object of class demonoid is a list with the following components:
Acceptance.Rate: This is the acceptance rate of the MCMC algorithm, indicating the percentage of iterations in which the proposals were accepted. For more information on acceptance rates, see the AcceptanceRate function.

Algorithm: This reports the specific algorithm used.

Call: This is the matched call of LaplacesDemon.

Covar: This stores the K x K proposal covariance matrix (where K is the dimension or number of parameters), variance vector, or list of covariance matrices. If variance or covariance is used for adaptation, then this covariance is returned. Otherwise, the variance of the samples of each parameter is returned. If the model is updated in the future, then this vector, matrix, or list can be used to start the next update where the last update left off. Only the diagonal of this matrix is reported in the associated print function.

CovarDHis: This N x K matrix stores the diagonal of the proposal covariance matrix of each adaptation in each of N rows for K dimensions, where the dimension is the number of parameters or length of the initial values vector. The proposal covariance matrix should change less over time. An exception is that the AHMC algorithm stores an algorithm specification here, which is not the diagonal of the proposal covariance matrix.

Deviance: This is a vector of the deviance of the model, with a length equal to the number of thinned samples that were retained. Deviance is useful for considering model fit, and is equal to the sum of the log-likelihood for all rows in the data set, which is then multiplied by negative two.

DIC1: This is a vector of three values: Dbar, pD, and DIC. Dbar is the mean deviance, pD is a measure of model complexity indicating the effective number of parameters, and DIC is the Deviance Information Criterion, which is a model fit statistic that is the sum of Dbar and pD. A sketch of one way to compute these quantities appears after this list.

DIC2: This is identical to DIC1 above, except that it is calculated over only the samples retained in Posterior2, rather than all thinned samples.

Initial.Values: This is the vector of initial values that was originally supplied to LaplacesDemon.

Iterations: This reports the number of Iterations for updating.

LML: This is an approximation of the logarithm of the marginal likelihood of the data (see the LML function for more information).

Minutes: This indicates the number of minutes that LaplacesDemon was running.

Model: This contains the model specification Model.

Monitor: This is a vector or matrix of one or more monitored variables, which are variables that were specified in the Model function (via mon.names in the list of data) to be observed.

Parameters: This reports the number of parameters.

Posterior1: This is a matrix of marginal posterior distributions composed of thinned samples, with a number of rows equal to the number of thinned samples and a number of columns equal to the number of parameters. This matrix includes all thinned samples.

Posterior2: This is a matrix equal to Posterior1, except that it includes only the thinned samples retained after the recommended burn-in (Rec.BurnIn.Thinned).

Rec.BurnIn.Thinned: This is the recommended burn-in for the thinned samples, where the value indicates the first row that was stationary across all parameters, and previous rows are discarded as burn-in. Samples considered as burn-in are discarded because they do not represent the target distribution and have not adequately forgotten the initial value of the chain (or Markov chain, if the algorithm is non-adaptive).

Rec.BurnIn.UnThinned: This is the recommended burn-in for all samples, in case thinning will not be necessary.

Rec.Thinning: This is the recommended value for the Thinning argument according to the autocorrelation in the thinned samples.

Specs: This is an optional list of algorithm specifications.

Status: This is the value that was supplied to the Status argument.

Summary1: This is a matrix that summarizes the marginal posterior distributions of the parameters, deviance, and monitored variables over all samples in Posterior1.

Summary2: This matrix is identical to the matrix in Summary1, except that it is calculated over only the samples in Posterior2.

Thinned.Samples: This is the number of thinned samples that were retained.

Thinning: This is the value of the Thinning argument.
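As referenced in the DIC1 entry above, the following sketch computes Dbar, pD, and DIC from the returned Deviance vector. It assumes the common approximation pD = var(Deviance)/2 for the effective number of parameters (as in Gelman et al., 2004); this is for illustration and may not match the internal calculation exactly.

# Assuming Fit is an object of class demonoid (see the Examples below)
Dbar <- mean(Fit$Deviance)      # mean deviance
pD   <- var(Fit$Deviance) / 2   # effective number of parameters (assumed approximation)
DIC  <- Dbar + pD               # Deviance Information Criterion
Fit$DIC1                        # compare with the values returned by LaplacesDemon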
Author(s)

Statisticat, LLC software@bayesian-inference.com
Silvere Vialet-Chabrand silvere@vialet-chabrand.com
References

Atchade, Y.F. (2006). "An Adaptive Version for the Metropolis Adjusted Langevin Algorithm with a Truncated Drift". Methodology and Computing in Applied Probability, 8, p. 235–254.
Bai, Y. (2009). "An Adaptive Directional Metropolis-within-Gibbs Algorithm". Technical Report in Department of Statistics at the University of Toronto.
Craiu, R.V., Rosenthal, J., and Yang, C. (2009). "Learn From Thy Neighbor: Parallel-Chain and Regional Adaptive MCMC". Journal of the American Statistical Association, 104(488), p. 1454–1466.
Christen, J.A. and Fox, C. (2010). "A General Purpose Sampling Algorithm for Continuous Distributions (the t-walk)". Bayesian Analysis, 5(2), p. 263–282.
Duane, S., Kennedy, A.D., Pendleton, B.J., and Roweth, D. (1987). "Hybrid Monte Carlo". Physics Letters, B, 195, p. 216–222.
Gelman, A., Carlin, J., Stern, H., and Rubin, D. (2004). "Bayesian Data Analysis, Texts in Statistical Science, 2nd ed.". Chapman and Hall, London.
Goodman, J. and Weare, J. (2010). "Ensemble Samplers with Affine Invariance". Communications in Applied Mathematics and Computational Science, 5(1), p. 65–80.
Green, P.J. (1995). "Reversible Jump Markov Chain Monte Carlo Computation and Bayesian Model Determination". Biometrika, 82, p. 711–732.
Haario, H., Laine, M., Mira, A., and Saksman, E. (2006). "DRAM: Efficient Adaptive MCMC". Statistical Computing, 16, p. 339–354.
Haario, H., Saksman, E., and Tamminen, J. (2001). "An Adaptive Metropolis Algorithm". Bernoulli, 7, p. 223–242.
Hoffman, M.D. and Gelman, A. (2012). "The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo". Journal of Machine Learning Research, p. 1–30.
Kass, R.E. and Raftery, A.E. (1995). "Bayes Factors". Journal of the American Statistical Association, 90(430), p. 773–795.
Lewis, S.M. and Raftery, A.E. (1997). "Estimating Bayes Factors via Posterior Simulation with the Laplace-Metropolis Estimator". Journal of the American Statistical Association, 92, p. 648–655.
Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., and Teller, E. (1953). "Equation of State Calculations by Fast Computing Machines". Journal of Chemical Physics, 21, p. 1087–1092.
Mira, A. (2001). "On Metropolis-Hastings Algorithms with Delayed Rejection". Metron, Vol. LIX, n. 3-4, p. 231–241.
Murray, I., Adams, R.P., and MacKay, D.J. (2010). "Elliptical Slice Sampling". Journal of Machine Learning Research, 9, p. 541–548.
Neal, R.M. (2003). "Slice Sampling" (with discussion). Annals of Statistics, 31(3), p. 705–767.
Ritter, C. and Tanner, M. (1992), "Facilitating the Gibbs Sampler: the Gibbs Stopper and the Griddy-Gibbs Sampler", Journal of the American Statistical Association, 87, p. 861–868.
Roberts, G.O. and Rosenthal, J.S. (2009). "Examples of Adaptive MCMC". Computational Statistics and Data Analysis, 18, p. 349–367.
Roberts, G.O. and Tweedie, R.L. (1996). "Exponential Convergence of Langevin Distributions and Their Discrete Approximations". Bernoulli, 2(4), p. 341–363.
Rosenthal, J.S. (2007). "AMCMC: An R interface for adaptive MCMC". Computational Statistics and Data Analysis, 51, p. 5467–5470.
Smith, R.L. (1984). "Efficient Monte Carlo Procedures for Generating Points Uniformly Distributed Over Bounded Regions". Operations Research, 32, p. 1296–1308.
Ter Braak, C.J.F. and Vrugt, J.A. (2008). "Differential Evolution Markov Chain with Snooker Updater and Fewer Chains", Statistics and Computing, 18(4), p. 435–446.
Vihola, M. (2011). "Robust Adaptive Metropolis Algorithm with Coerced Acceptance Rate". Statistics and Computing. Springer, Netherlands.
Welling, M. and Teh, Y.W. (2011). "Bayesian Learning via Stochastic Gradient Langevin Dynamics". Proceedings of the 28th International Conference on Machine Learning (ICML), p. 681–688.
See Also

AcceptanceRate, as.initial.values, as.parm.names, BayesFactor, Blocks, BMK.Diagnostic, Combine, Consort, ESS, GIV, is.data, is.model, IterativeQuadrature, LaplaceApproximation, LaplacesDemon.RAM, LML, and MCSE.
Examples

# The accompanying Examples vignette is a compendium of examples.
#################### Load the LaplacesDemon Library #####################
library(LaplacesDemon)
############################## Demon Data ###############################
data(demonsnacks)
y <- log(demonsnacks$Calories)
X <- cbind(1, as.matrix(log(demonsnacks[,c(1,4,10)]+1)))
J <- ncol(X)
for (j in 2:J) X[,j] <- CenterScale(X[,j])
mon.names <- "LP"
parm.names <- as.parm.names(list(beta=rep(0,J), sigma=0))
pos.beta <- grep("beta", parm.names)
pos.sigma <- grep("sigma", parm.names)
PGF <- function(Data) return(c(rnormv(Data$J,0,10), rhalfcauchy(1,5)))
MyData <- list(J=J, PGF=PGF, X=X, mon.names=mon.names,
parm.names=parm.names, pos.beta=pos.beta, pos.sigma=pos.sigma, y=y)
########################## Model Specification ##########################
Model <- function(parm, Data)
{
### Parameters
beta <- parm[Data$pos.beta]
sigma <- interval(parm[Data$pos.sigma], 1e-100, Inf)
parm[Data$pos.sigma] <- sigma
### Log of Prior Densities
beta.prior <- sum(dnormv(beta, 0, 1000, log=TRUE))
sigma.prior <- dhalfcauchy(sigma, 25, log=TRUE)
### Log-Likelihood
mu <- tcrossprod(Data$X, t(beta))
LL <- sum(dnorm(Data$y, mu, sigma, log=TRUE))
### Log-Posterior
LP <- LL + beta.prior + sigma.prior
Modelout <- list(LP=LP, Dev=-2*LL, Monitor=LP,
yhat=rnorm(length(mu), mu, sigma), parm=parm)
return(Modelout)
}
set.seed(666)
############################ Initial Values #############################
Initial.Values <- GIV(Model, MyData, PGF=TRUE)
###########################################################################
# Examples of MCMC Algorithms #
###########################################################################
######################## Hit-And-Run Metropolis #########################
Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
Covar=NULL, Iterations=1000, Status=100, Thinning=1,
Algorithm="HARM", Specs=NULL)
Fit
print(Fit)
#Consort(Fit)
#plot(BMK.Diagnostic(Fit))
#PosteriorChecks(Fit)
#caterpillar.plot(Fit, Parms="beta")
#BurnIn <- Fit$Rec.BurnIn.Thinned
#plot(Fit, BurnIn, MyData, PDF=FALSE)
#Pred <- predict(Fit, Model, MyData, CPUs=1)
#summary(Pred, Discrep="Chi-Square")
#plot(Pred, Style="Covariates", Data=MyData)
#plot(Pred, Style="Density", Rows=1:9)
#plot(Pred, Style="ECDF")
#plot(Pred, Style="Fitted")
#plot(Pred, Style="Jarque-Bera")
#plot(Pred, Style="Predictive Quantiles")
#plot(Pred, Style="Residual Density")
#plot(Pred, Style="Residuals")
#Levene.Test(Pred)
#Importance(Fit, Model, MyData, Discrep="Chi-Square")
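############### Continuing an Update from a Previous Run ################
# A hypothetical sketch (not from the package documentation): the last
# parameter values and the returned proposal covariance from Fit are
# reused so that a further update starts where the previous one left off.
# Covar=Fit$Covar matters only for algorithms that use a proposal covariance.
#Initial.Values2 <- as.initial.values(Fit)
#Fit2 <- LaplacesDemon(Model, Data=MyData, Initial.Values2,
#     Covar=Fit$Covar, Iterations=1000, Status=100, Thinning=1,
#     Algorithm="HARM", Specs=NULL)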
############# Adaptive Directional Metropolis-within-Gibbs ##############
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="ADMG", Specs=list(Periodicity=1))
######################## Adaptive Griddy-Gibbs ##########################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="AGG", Specs=list(Grid=GaussHermiteQuadRule(3)$nodes,
# dparm=NULL, smax=Inf, CPUs=1, Packages=NULL, Dyn.libs=NULL))
################## Adaptive Hamiltonian Monte Carlo #####################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="AHMC", Specs=list(epsilon=rep(0.02, length(Initial.Values)),
# L=2, Periodicity=10))
########################## Adaptive Metropolis ##########################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="AM", Specs=list(Adaptive=500, Periodicity=10))
################### Adaptive Metropolis-within-Gibbs ####################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="AMWG", Specs=list(Periodicity=50))
###################### Adaptive-Mixture Metropolis ######################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="AMM", Specs=list(Adaptive=500, B=NULL, Periodicity=10,
# w=0.05))
################### Affine-Invariant Ensemble Sampler ###################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="AIES", Specs=list(Nc=2*length(Initial.Values), Z=NULL,
# beta=2, CPUs=1, Packages=NULL, Dyn.libs=NULL))
################# Componentwise Hit-And-Run Metropolis ##################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="CHARM", Specs=NULL)
########### Componentwise Hit-And-Run (Adaptive) Metropolis #############
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="CHARM", Specs=list(alpha.star=0.44))
################# Delayed Rejection Adaptive Metropolis #################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="DRAM", Specs=list(Adaptive=500, Periodicity=10))
##################### Delayed Rejection Metropolis ######################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="DRM", Specs=NULL)
################## Differential Evolution Markov Chain ##################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="DEMC", Specs=list(Nc=3, Z=NULL, gamma=NULL, w=0.1))
####################### Elliptical Slice Sampling #######################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="ESS", Specs=list(B=NULL))
############################# Griddy-Gibbs ##############################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="GG", Specs=list(Grid=seq(from=-0.1, to=0.1, len=5),
# dparm=NULL, CPUs=1, Packages=NULL, Dyn.libs=NULL))
####################### Hamiltonian Monte Carlo #########################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="HMC", Specs=list(epsilon=rep(0.02, length(Initial.Values)),
# L=2))
############# Hamiltonian Monte Carlo with Dual-Averaging ###############
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="HMCDA", Specs=list(A=500, delta=0.65, epsilon=NULL,
# Lmax=1000, lambda=0.1))
################## Hit-And-Run (Adaptive) Metropolis ####################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="HARM", Specs=list(alpha.star=0.234, B=NULL))
######################## Independence Metropolis ########################
### Note: the mu and Covar arguments are populated from a previous Laplace
### Approximation.
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=Fit$Covar, Iterations=1000, Status=100, Thinning=1,
# Algorithm="IM",
# Specs=list(mu=Fit$Summary1[1:length(Initial.Values),1]))
######################### Interchain Adaptation #########################
#Initial.Values <- rbind(Initial.Values, GIV(Model, MyData, PGF=TRUE))
#Fit <- LaplacesDemon.hpc(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="INCA", Specs=list(Adaptive=500, Periodicity=10),
# LogFile="MyLog", Chains=2, CPUs=2, Type="PSOCK", Packages=NULL,
# Dyn.libs=NULL)
################ Metropolis-Adjusted Langevin Algorithm #################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="MALA", Specs=list(Periodicity=1))
####################### Metropolis-within-Gibbs #########################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="MWG", Specs=NULL)
########################## No-U-Turn Sampler ############################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=100, Status=10, Thinning=1,
# Algorithm="NUTS", Specs=list(A=50, delta=0.6, epsilon=NULL))
###################### Robust Adaptive Metropolis #######################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="RAM", Specs=list(alpha.star=0.234, Dist="N", gamma=0.66,
# Periodicity=1))
########################### Reversible-Jump #############################
#bin.n <- J-1
#bin.p <- 0.2
#parm.p <- c(1, rep(1/(J-1),(J-1)), 1)
#selectable <- c(0, rep(1,J-1), 0)
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="RJ", Specs=list(bin.n=bin.n, bin.p=bin.p,
# parm.p=parm.p, selectable=selectable,
# selected=c(0,rep(1,J-1),0)))
######################## Random-Walk Metropolis #########################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="RWM", Specs=NULL)
############## Sequential Adaptive Metropolis-within-Gibbs ##############
#NOTE: The SAMWG algorithm is only for state-space models (SSMs)
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="SAMWG", Specs=list(Dyn=Dyn, Periodicity=50))
################## Sequential Metropolis-within-Gibbs ###################
#NOTE: The SMWG algorithm is only for state-space models (SSMs)
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="SMWG", Specs=list(Dyn=Dyn))
############################# Slice Sampler #############################
#m <- Inf; w <- 1
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="Slice", Specs=list(m=m, w=w))
################# Stochastic Gradient Langevin Dynamics #################
#NOTE: The Data and Model functions must be coded differently for SGLD.
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=10, Thinning=10,
# Algorithm="SGLD", Specs=list(epsilon=1e-4, file="X.csv", Nr=1e4,
# Nc=6, size=10))
################### Tempered Hamiltonian Monte Carlo ####################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="THMC", Specs=list(epsilon=rep(0.05,length(Initial.Values)),
# L=2, Temperature=2))
############################### t-walk #################################
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=1000, Status=100, Thinning=1,
# Algorithm="twalk", Specs=list(SIV=NULL, n1=4, at=6, aw=1.5))
########## Updating Sequential Adaptive Metropolis-within-Gibbs #########
#NOTE: The USAMWG algorithm is only for state-space model updating
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=100000, Status=100, Thinning=100,
# Algorithm="USAMWG", Specs=list(Dyn=Dyn, Periodicity=50, Fit=Fit,
# Begin=T.m))
############## Updating Sequential Metropolis-within-Gibbs ##############
#NOTE: The USMWG algorithm is only for state-space model updating
#Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
# Covar=NULL, Iterations=100000, Status=100, Thinning=100,
# Algorithm="USMWG", Specs=list(Dyn=Dyn, Fit=Fit, Begin=T.m))
#End