
Fundamentally, using a computer to create a realisation from a stochastic Monte Carlo model is extremely simple. Consider a random walk in one dimension: we might write that in base R by creating a function that takes the current state and a list of parameters:

update_walk <- function(state, pars) {
  rnorm(1, state, pars$sd)
}

and then iterating it for 20 time steps with:

y <- 0
pars <- list(sd = 2)
for (i in 1:20) {
  y <- update_walk(y, pars)
}

At the end of this process, the variable y contains a new value, corresponding to 20 time steps with our stochastic update function.

So why does dust apparently require thousands of lines of code to do this?

Running multiple realisations

It's very rare that one wants to run just a single stochastic simulation; normally we want to run a group together. There are several ways that we might want to do that: as replicates of a single parameter set, with several different parameter sets, or with several parameter sets each replicated many times.

The book-keeping for this can get tedious and error-prone if done by hand. In dust, we try to confine this concern to a few places; for the simulation itself -- the part we expect to take the longest in any interesting model -- we just run one big loop over time and all particles, regardless of which of the structures above they represent.

See vignette("multi") for details of the different ways that you might want to structure your simulations.

Parallelisation

Once we're running multiple simulations at once, even a simple model can start taking a long time, and because the realisations are independent we naturally look to parallelism to try to speed up the simulations.

However, one cannot just draw multiple numbers from a single random number generator at once. That is, given a generator like those built into R, there is no parallel equivalent to

runif(10)

that would draw the 10 numbers in parallel rather than in series. Drawing a random number has a "side effect": it updates the random number state. This is because the random number stream is itself a Markov chain!

As such, it makes sense (to us at least) to store each stream's random number state separately: if we have n particles within a dust object, we have n separate streams. We can then think of the full model state as the user-declared state (a vector of floating point numbers) alongside the random number state; during each model step, both are updated.

This might seem wasteful, and if we used the popular Mersenne Twister it would be, to some degree, as each particle would require 2560 bytes of additional state. In contrast, the newer xoshiro generators that we use require only 32 or 16 bytes of state: the same as 4 double- or single-precision floating point numbers respectively. So for any nontrivial simulation it is not a large overhead.

Setting the seed for these runs is not trivial, particularly as the number of simultaneous particles increases. If you've used random numbers with the future package you may have seen it warn when you do not configure it to use "L'Ecuyer-CMRG", a generator whose streams can be used safely in parallel, unlike R's default.

The reason for this is that if different streams start from seeds set via poor heuristics (e.g., system time and thread id) they might be exactly the same. If seeds are chosen randomly, they might collide (see John Cook's description of the birthday paradox), and if they are picked sequentially there is no guarantee that the resulting streams are uncorrelated.

Ideally we want a similar set of properties to R's set.seed method: the user provides an arbitrary integer and we seed all the random number streams from it in a way that is reproducible and also statistically robust. We also want the streams to be reproducible even when the number of particles changes, for particle indices that are shared. The random number generators we use (the xoshiro family, a.k.a. Blackman-Vigna generators) support these properties and are described more fully in vignette("rng").

To initialise our system with a potentially very large number of particles we take two steps: first, we seed one generator from the user-provided integer; then each subsequent stream is created by taking a "jump" in the random number sequence from the previous stream, guaranteeing well-separated, non-overlapping streams.

With this setup we are free to parallelise the system, as each realisation is completely independent of the others; the problem has become "embarrassingly parallel". In practice we do this using OpenMP where available, as this is well supported from R and gracefully falls back on serial operation where it is not. See dust::dust_openmp_support for information on your system's OpenMP configuration as seen by R:

dust::dust_openmp_support()

As the number of threads changes, the results will not change: the same calculations are carried out and the same random numbers drawn. The number of threads can even be changed for a running model, using the $set_n_threads() method, if the computational resources available change during a run.

Sometimes we might parallelise beyond one computer (e.g., when using a cluster), in which case we cannot use OpenMP. We call this case "distributed parallelism" and cope by having each process take a "long jump" (an even larger jump in the random number space), then within the process proceed as above. This is the approach taken in our mcstate package for organising running MCMC chains in parallel, each of which works with a dust model.

The properties of the random number generator are discussed further in vignette("rng").

Efficient running

A general rule-of-thumb is to avoid unneeded memory allocations in tight loops, and with this sort of stochastic iteration everything is a tight loop! However, we've reduced the problem scope to just providing an update method; as long as that method performs no memory allocations, the whole simulation runs in fixed space without our having to worry.

Efficient state handling

For nontrivial systems, we often want to record a subset of states - potentially a very small fraction of the total states computed. For example, in our sircovid model we track several thousand states (representing populations in various stages of disease transmission, in different age groups, with different vaccination statuses, etc.), but most of the time we only need to report on a few tens of these in order to fit to data or to examine key outputs.

Reducing the number of state variables returned at different points in the process has several advantages, chiefly reducing the volume of data copied from C++ back into R and the memory needed to hold it.

To enable this, you can restrict the state returned by most methods; some do this by default and others when you call them with an index.

If the index used was named, the returned state carries those names as its rownames.

The ordering of the state is important; we always have dimensions that will contain:

  1. the model states within a single particle
  2. the particles within a time-step (may be several dimensions; see vignette("multi"))
  3. the time dimension if using simulate

This ordering minimises data movement during writing and helps with concatenation: data for multiple particles is stored consecutively and read and written in order, each time step is written at once, and states from different times can easily be appended. The base-R aperm() function is useful for reshaping this output to a different dimension order if you require one, but it can be very slow.

In order to pull all of this off, we allocate all our memory up front in C++ and pass back to R a "pointer" to this memory, which lives for as long as your model object. This means that even if your model requires gigabytes of memory to run, that memory is never copied back and forth into R (where it would be subject to R's copy-on-write semantics) but instead accessed only when needed, and written in place following C++ reference semantics.

Useful verbs

We try to provide verbs that are useful, given that the model presents a largely opaque pointer to its state. These are driven by our needs when running a particle filter.

Normally we have several things in the object: the parameters (shared between particles), the model state itself, the per-stream random number state, and some internal state or scratch space.

The internal state is the hardest to understand in this set. Suppose we have a model where, at each time step, we want to do something like take the median of a set of random number draws. We might want to write the update function like

  struct internal_type {
    std::vector<real_type> samples;
  };
  // ...
  void update(size_t time, const real_type * state, rng_state_type& rng_state,
              real_type * state_next) {
    for (size_t i = 0; i < shared->n; ++i) {
      internal.samples[i] = dust::random::uniform<real_type>(rng_state, 0, 1);
    }
    state_next[0] = median(internal.samples);
  }

with median defined as something like

template <typename T>
typename T::value_type median(T& v) {
  const size_t m = v.size() / 2;
  std::nth_element(v.begin(), v.begin() + m, v.end());
  return v[m];
}

This takes advantage of pre-allocated scratch space of the correct size held in the internal state. That space might be configured with

template <>
dust::pars_type<model> dust_pars<model>(cpp11::list pars) {
  using real_type = typename model::real_type;
  auto shared = std::make_shared<model::shared_type>();
  shared->n = 10;
  model::internal_type internal{std::vector<real_type>(shared->n)};
  return dust::pars_type<model>(shared, internal);
}

There is one additional subtlety about internal state: we assume that the model state entirely specifies a particle in a Markov process, so we do not guarantee that mutable internal state is preserved between iterations. Above, samples is configured (and allocated) in the dust_pars method and used in update, but it must not be read within update before it is written, because it might contain another particle's scratch data.

This matters because when we reorder particles, what we really reorder is the state vector, not the internal state. This prevents implementing things like models with "delays" in the current design. We may relax this constraint if it is needed.

Given this, we need a set of verbs for getting and setting model state, reordering particles between steps, and running the system forward in time.

In addition, we have more specific methods oriented towards particle filtering, such as resampling particles and comparing model state against observed data.

A compilation target

The most esoteric design goal of dust is to make it convenient to use as a compilation target for other programs. We use the package primarily as a target for models written in odin via odin.dust. This allows the user to write models at a very high level, describing the updates between steps. The random walk example at the beginning of this document might be implemented as

sd <- user()              # user-provided standard deviation
initial(y) <- 0           # starting point of the simulation
update(y) <- runif(y, sd) # take random step each time step

which will compile a dust model:

// [[dust::class(odin)]]
// [[dust::param(sd, has_default = FALSE, default_value = NULL, rank = 0, min = -Inf, max = Inf, integer = FALSE)]]
class odin {
public:
  using real_type = double;
  using rng_state_type = dust::random::generator<real_type>;
  using data_type = dust::no_data;
  struct shared_type {
    real_type initial_y;
    real_type sd;
  };
  struct internal_type {
  };
  odin(const dust::pars_type<odin>& pars) :
    shared(pars.shared), internal(pars.internal) {
  }
  size_t size() {
    return 1;
  }
  std::vector<real_type> initial(size_t time, rng_state_type& rng_state) {
    std::vector<real_type> state(1);
    state[0] = shared->initial_y;
    return state;
  }
  void update(size_t time, const real_type * state, rng_state_type& rng_state, real_type * state_next) {
    const real_type y = state[0];
    state_next[0] = dust::random::uniform<real_type>(rng_state, y, shared->sd);
  }
private:
  std::shared_ptr<const shared_type> shared;
  internal_type internal;
};

// ...[some utility code excluded]
template <>
dust::pars_type<odin> dust_pars<odin>(cpp11::list user) {
  using real_type = typename odin::real_type;
  auto shared = std::make_shared<odin::shared_type>();
  odin::internal_type internal;
  shared->initial_y = 0;
  shared->sd = NA_REAL;
  shared->sd = user_get_scalar<real_type>(user, "sd", shared->sd, NA_REAL, NA_REAL);
  return dust::pars_type<odin>(shared, internal);
}

We have designed these two systems to play well together, so the user can write models at a very high level and generate code that works well within this framework and runs efficiently in parallel. In sircovid this is used in a model with hundreds of logical compartments, each of which may be structured, but the interface at the R level remains the same as for the toy models used in the documentation here.



mrc-ide/dust documentation built on May 11, 2024, 1:08 p.m.