sample_tmb: Bayesian inference of a TMB model using the no-U-turn...


View source: R/sample_tmb.R

Description

Draw Bayesian posterior samples from a Template Model Builder (TMB) model using an MCMC algorithm. This function generates posterior samples from which inference can be made. Adaptation schemes are used so that manual specification of tuning parameters is unnecessary, and parallel execution reduces overall run time.

Usage

sample_tmb(obj, iter = 2000, init, chains = 3, seeds = NULL,
  warmup = floor(iter/2), lower = NULL, upper = NULL, thin = 1,
  parallel = FALSE, cores = NULL, path = NULL, algorithm = "NUTS",
  laplace = FALSE, control = NULL, ...)

Arguments

obj

A TMB model object.

iter

The number of samples to draw.

init

A list of lists containing the initial parameter vectors, one for each chain, or a function returning such a vector. It is strongly recommended to initialize multiple chains from dispersed points. A value of NULL signifies to use the starting values present in the model (i.e., obj$par) for all chains.
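A minimal sketch of the two forms of init, assuming a hypothetical two-parameter model with three chains (the Examples section below uses the function form):

```r
## (1) An explicit list with one initial parameter vector per chain:
init <- list(rnorm(2), rnorm(2), rnorm(2))
## (2) A function returning a fresh vector each time it is called,
##     which is a convenient way to get dispersed starting points:
init <- function() rnorm(2)
```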

chains

The number of chains to run.

seeds

A vector of seeds, one for each chain.

warmup

The number of warmup iterations.

lower

A vector of lower bounds for parameters. Allowed values are -Inf and numeric.

upper

A vector of upper bounds for parameters. Allowed values are Inf and numeric.

thin

The thinning rate to apply to samples. Typically not used with NUTS.

parallel

Logical; whether to run the chains in parallel. The snowfall package is used if TRUE.

cores

The number of cores to use for parallel execution.

path

The path to the TMB DLL. This is only required if using parallel, since each core needs to link to the DLL again.

algorithm

The algorithm to use. "NUTS" is the default and recommended choice, but "RWM" (random walk Metropolis) and "HMC" (static Hamiltonian Monte Carlo) are also available. The latter two are deprecated but may be of use in some situations. These algorithms require different arguments; see their help files for more information.

laplace

Whether to use the Laplace approximation if some parameters are declared as random. Default is to turn off this functionality and integrate across all parameters with MCMC.

control

A list to control the sampler. See details for further use.

...

Further arguments to be passed to the algorithm. See help files for the samplers for further arguments.

Details

This function implements algorithm 6 of Hoffman and Gelman (2014), and loosely follows package rstan. The step size can be adapted or specified manually. The metric (i.e., mass matrix) can be unit diagonal, adapted diagonal (default and recommended), or a dense matrix specified by the user. Further control of algorithms can be specified with the control argument. Elements are:

adapt_delta

The target acceptance rate.

metric

The mass metric to use. Options are: "unit" for a unit diagonal matrix; "diag" to estimate a diagonal matrix during warmup; a matrix to be used directly (in untransformed space).

adapt_engaged

Whether adaptation of step size and metric is turned on.

max_treedepth

Maximum treedepth for the NUTS algorithm.

stepsize

The stepsize for the NUTS algorithm. If NULL it will be adapted during warmup.
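For instance, a call overriding two of these control elements might look like the following sketch, assuming obj and init have been defined as in the Examples section below (the specific values 0.95 and 12 are illustrative, not recommendations):

```r
## Raise the target acceptance rate to reduce divergent transitions,
## and allow deeper NUTS trajectories than the default:
fit <- sample_tmb(obj, iter=2000, init=init, chains=3,
                  control=list(adapt_delta=0.95, max_treedepth=12))
```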

Value

A list containing the samples, and properties of the sampler useful for diagnosing behavior and efficiency.

Warning

The user is responsible for specifying the model properly (priors, starting values, desired parameters fixed, etc.), as well as assessing the convergence and validity of the resulting samples before making inference, e.g., through the coda package or with the function launch_shinytmb. In particular, priors must be specified in the template file for each parameter; unspecified priors are implicitly uniform.

Author(s)

Cole Monnahan

See Also

extract_samples to extract samples, and launch_shinytmb (a wrapper for the launch_shinystan function) to explore the results graphically.
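A typical post-processing sketch, assuming fit is the result of a sample_tmb call as in the Examples below:

```r
post <- extract_samples(fit)  # posterior draws as a data frame
str(post)                     # inspect parameter columns
launch_shinytmb(fit)          # interactive diagnostics in the browser
```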

Examples

## Build a fake TMB object with objective & gradient functions and some
## other flags
f <- function(x, order=0){
  if(order != 1) # negative log density
    -sum(dnorm(x=x, mean=0, sd=1, log=TRUE))
  else x # gradient of negative log density
}
init <- function() rnorm(2)
obj <- list(env=list(DLL='demo', last.par.best=c(x=init()), f=f,
  beSilent=function() NULL))
## Run NUTS for this object
fit <- sample_tmb(obj, iter=1000, chains=3, init=init)
## Check basic diagnostics
mon <- rstan::monitor(fit$samples, print=FALSE)
Rhat <- mon[,"Rhat"]
max(Rhat)
ess <- mon[, 'n_eff']
min(ess)
## Or do it interactively with ShinyStan
## Not run: 
  launch_shinytmb(fit)
  
## End(Not run)

colemonnahan/adnuts documentation built on Feb. 13, 2018, 4 p.m.