oneStepPredict: Calculate one-step-ahead (OSA) residuals for a latent variable model

View source: R/validation.R

oneStepPredict    R Documentation

Calculate one-step-ahead (OSA) residuals for a latent variable model.


Calculate one-step-ahead (OSA) residuals for a latent variable model. (Beta version; may change without notice)


Usage

oneStepPredict(
  obj,
  observation.name = NULL,
  data.term.indicator = NULL,
  method = c("oneStepGaussianOffMode", "fullGaussian", "oneStepGeneric",
    "oneStepGaussian", "cdf"),
  subset = NULL,
  conditional = NULL,
  discrete = NULL,
  discreteSupport = NULL,
  range = c(-Inf, Inf),
  seed = 123,
  parallel = FALSE,
  trace = TRUE,
  reverse = (method == "oneStepGaussianOffMode"),
  splineApprox = TRUE,
  ...
)



Arguments

obj: Output from MakeADFun.

observation.name: Character naming the observation in the template.

data.term.indicator: Character naming an indicator data variable in the template (not required by all methods; see details).

method: Method used to calculate the OSA residuals (see details).

subset: Index vector of observations that will be added one by one during OSA. By default 1:length(observations) (with conditional subtracted).

conditional: Index vector of observations that are held fixed (conditioned on) during OSA. By default the empty set.

discrete: Logical; are the observations discrete? (assumed FALSE by default).

discreteSupport: Possible outcomes of the discrete part of the distribution (method="oneStepGeneric" and method="cdf" only).

range: Possible range of the continuous part of the distribution (method="oneStepGeneric" only).

seed: Randomization seed (discrete case only). If NULL, the RNG seed is left untouched by this routine (recommended for simulation studies).

parallel: Run in parallel using the parallel package?

trace: Logical; trace progress? More options are available for method="oneStepGeneric" - see details.

reverse: Do calculations in the opposite order to improve stability? (Currently enabled by default for the "oneStepGaussianOffMode" method only.)

splineApprox: Represent the one-step conditional distribution by a spline to reduce the number of density evaluations? (method="oneStepGeneric" only).

...: Control parameters for the OSA method.


Details

Given a TMB latent variable model, this function calculates OSA standardized residuals that can be used for goodness-of-fit assessment. The approach is based on a factorization of the joint distribution of the observations X_1,...,X_n into successive conditional distributions. Denote by

F_n(x_n) = P(X_n \leq x_n | X_1 = x_1,...,X_{n-1}=x_{n-1} )

the one-step-ahead CDF, and by

p_n(x_n) = P(X_n = x_n | X_1 = x_1,...,X_{n-1}=x_{n-1} )

the corresponding point probabilities (zero for continuous distributions). In the case of continuous observations the sequence

\Phi^{-1}(F_1(X_1))\:,...,\:\Phi^{-1}(F_n(X_n))

will be iid standard normal. These are referred to as the OSA residuals. In the case of discrete observations, draw (unit) uniform variables U_1,...,U_n and construct the randomized OSA residuals

\Phi^{-1}(F_1(X_1)-U_1 p_1(X_1))\:,...,\:\Phi^{-1}(F_n(X_n)-U_n p_n(X_n))

These are also iid standard normal.
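As an illustrative sketch in plain R (independent of TMB), the randomized construction can be checked on an iid Poisson sample, where the one-step CDF F_i and point probability p_i reduce to the marginal ppois and dpois:

```r
## Illustration only (not TMB code): randomized OSA/PIT residuals for an
## iid Poisson sample; F_i and p_i are the marginal CDF and pmf here.
set.seed(1)
lambda <- 4
x <- rpois(1000, lambda)
u <- runif(length(x))                            # U_1,...,U_n ~ Uniform(0,1)
res <- qnorm(ppois(x, lambda) - u * dpois(x, lambda))
## 'res' should be approximately iid standard normal,
## i.e. mean(res) near 0 and sd(res) near 1.
```

A qqnorm(res) plot should then follow the identity line, mirroring the residual checks shown in the examples.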


Value

A data.frame with OSA standardized residuals in the column residual. In addition, depending on the method, the output includes selected characteristics of the predictive distribution (current row) given past observations (past rows), notably the conditional


mean: Expectation of the current observation

sd: Standard deviation of the current observation

Fx: CDF at the current observation

px: Density at the current observation

nll: Negative log density at the current observation

nlcdf.lower: Negative log of the lower CDF at the current observation

nlcdf.upper: Negative log of the upper CDF at the current observation

given past observations. If column randomize is present, it indicates that randomization has been applied for the row.

Choosing the method

The user must specify the method used to calculate the residuals - see the detailed list of method descriptions below. Note that all the methods are based on approximations. While the default "oneStepGaussianOffMode" often represents a good compromise between accuracy and speed, it cannot be assumed to work well for all model classes. As a rule of thumb, if in doubt whether a method is accurate enough, compare it with "oneStepGeneric", which is considered the most accurate of the available methods.


Method "fullGaussian":

This method assumes that the joint distribution of data and random effects is Gaussian (or well approximated by a Gaussian). It does not require any changes to the user template. However, if used in conjunction with subset and/or conditional, a data.term.indicator is required - see the next method.


Method "oneStepGeneric":

This method calculates the one-step conditional probability density as a ratio of Laplace approximations. The approximation is integrated (and re-normalized for improved accuracy) using 1D numerical quadrature to obtain the one-step CDF evaluated at each data point. The method works in the continuous case as well as the discrete case (discrete=TRUE).

It requires a specification of a data.term.indicator explained in the following. Suppose the template for the observations given the random effects (u) looks like

    nll -= dnorm(x(i), u(i), sd(i), true);

Then this template can be augmented with a data.term.indicator = "keep" by changing the template to

    nll -= keep(i) * dnorm(x(i), u(i), sd(i), true);

The new data vector (keep) need not be passed from R. It automatically becomes a copy of x filled with ones.
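The effect of the indicator can be sketched in plain R (this illustrates the weighting idea only; it is not TMB code, and the values are made up): multiplying each data term by a 0/1 weight removes that observation's contribution from the joint negative log-likelihood.

```r
## Plain-R sketch of the 'keep' weighting idea (illustrative values).
x     <- c(1.2, -0.5, 0.3)
mu    <- 0
sigma <- 1
keep  <- c(1, 1, 0)            # a zero weight excludes the third data term
nll <- -sum(keep * dnorm(x, mu, sigma, log = TRUE))
## 'nll' equals the nll computed from the first two observations only.
```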

Some extra parameters are essential for the method. Pay special attention to the integration domain, which must be set either via range (continuous case) or discreteSupport (discrete case). Both can be set simultaneously to specify a mixed continuous/discrete distribution. For example, a non-negative distribution with a point mass at zero (e.g. the Tweedie distribution) should have range=c(0,Inf) and discreteSupport=0. Several parameters control accuracy, and appropriate settings are case specific. By default, a spline is fitted to the one-step density before integration (splineApprox=TRUE) to reduce the number of density evaluations; however, this may reduce accuracy. The spline approximation can either be disabled or improved by noting that ... arguments are passed on to tmbprofile: pass e.g. ystep=20, ytol=0.1. Finally, it may be useful to inspect the one-step predictive distributions on either the log scale (trace=2) or the natural scale (trace=3) to determine which alternative methods might be appropriate.
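As a hypothetical call (the fitted object obj and the names "y" and "keep" are assumptions for illustration, not part of this help page), a non-negative response with a point mass at zero might be handled along these lines:

```r
## Not run: 
osa <- oneStepPredict(obj,
                      observation.name = "y",
                      data.term.indicator = "keep",
                      method = "oneStepGeneric",
                      range = c(0, Inf),      # continuous part of the support
                      discreteSupport = 0,    # point mass at zero
                      splineApprox = FALSE,   # trade speed for accuracy
                      trace = 2)              # inspect densities on log scale

## End(Not run)
```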


Method "oneStepGaussian":

This is a special case of the generic method in which the one-step conditional distribution is approximated by a Gaussian (and can therefore be handled more efficiently).


Method "oneStepGaussianOffMode":

This is an approximation of the "oneStepGaussian" method that avoids locating the mode of the one-step conditional density.


Method "cdf":

The generic method can be slow due to the many function evaluations used during the 1D integration (or summation in the discrete case). The present method can speed up this process but requires more changes to the user template. The above template must be expanded with information about how to calculate the negative log of the lower and upper CDF:

    nll -= keep(i) * dnorm(x(i), u(i), sd(i), true);
    nll -= keep.cdf_lower(i) * log( pnorm(x(i), u(i), sd(i)) );
    nll -= keep.cdf_upper(i) * log( 1.0 - pnorm(x(i), u(i), sd(i)) );

The specialized members keep.cdf_lower and keep.cdf_upper automatically become copies of x filled with zeros.
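For a Gaussian one-step distribution, the two extra terms are the negative logs of the lower and upper CDF at the observation. A plain-R sketch (illustration only, with made-up values):

```r
## Plain-R sketch (Gaussian case): the quantities supplied through the
## keep.cdf_lower / keep.cdf_upper indicators are the negative logs of
## the lower and upper CDF at the observation.
x <- 0.7; mu <- 0; sigma <- 1
nl_lower <- -pnorm(x, mu, sigma, log.p = TRUE)                      # -log F(x)
nl_upper <- -pnorm(x, mu, sigma, lower.tail = FALSE, log.p = TRUE)  # -log(1 - F(x))
## Sanity check: exp(-nl_lower) + exp(-nl_upper) recovers F(x) + (1 - F(x)) = 1.
```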


Examples

######################## Gaussian case
osa.simple <- oneStepPredict(obj, observation.name = "x", method = "fullGaussian")
qqnorm(osa.simple$residual); abline(0,1)

## Not run: 
######################## Poisson case (First 100 observations)
osa.ar1xar1 <- oneStepPredict(obj, observation.name = "N", data.term.indicator = "keep",
                              method = "cdf", discrete = TRUE, subset = 1:100)
qqnorm(osa.ar1xar1$residual); abline(0,1)

## End(Not run)

TMB documentation built on Nov. 27, 2023, 5:12 p.m.
