knitr::opts_chunk$set(
  echo = TRUE,
  warning = FALSE,
  cache = FALSE,
  cache.path = "cache/",
  collapse = TRUE,
  comment = "#>",
  dev = "png",
  dev.args = list(type = "cairo-png"),
  fig.width = 7,
  fig.height = 5
)
In this vignette we illustrate how to fit some of the spacetime models in @lindgren2023 to the dataset analysed in @cameletti2013. We work in the Bayesian paradigm with the INLA package, using the features provided by the inlabru package to simplify the code.
We start by loading the required packages, including ggplot2 and patchwork for the visualizations.
library(ggplot2)
library(patchwork)
library(INLA)
library(INLAspacetime)
library(inlabru)
We will ask INLA to return the WAIC, DIC and CPO fitting statistics:
ctrc <- list( waic = TRUE, dic = TRUE, cpo = TRUE)
We will use the dataset analysed in @cameletti2013, which can be downloaded as follows. First we set the filenames
u0 <- paste0(
  "http://inla.r-inla-download.org/",
  "r-inla.org/case-studies/Cameletti2012/")
coofl <- "coordinates.csv"
datafl <- "Piemonte_data_byday.csv"
bordersfl <- "Piemonte_borders.csv"
Download and read the borders file
### get the domain borders
if (!file.exists(bordersfl))
  download.file(paste0(u0, bordersfl), bordersfl)
dim(pborders <- read.csv(bordersfl))
Download and read the coordinates file
### get the coordinates
if (!file.exists(coofl))
  download.file(paste0(u0, coofl), coofl)
dim(locs <- read.csv(coofl))
Download and read the dataset
### get the dataset
if (!file.exists(datafl))
  download.file(paste0(u0, datafl), datafl)
dim(pdata <- read.csv(datafl))
Inspect the dataset
head(pdata)
Prepare the time index, counting days from the first date
range(pdata$Date <- as.Date(pdata$Date, "%d/%m/%y"))
pdata$time <- as.integer(difftime(
  pdata$Date, min(pdata$Date), units = "days")) + 1
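As a quick sanity check (assuming here that the records are daily, with no gaps), the time index should run from one up to the number of days:

range(pdata$time) ## should be 1 up to the number of days
table(table(pdata$time)) ## distribution of the number of records per day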
Standardize the covariates that will be used in the data analysis and define a dataset with the needed information, where the outcome is the log of PM$_{10}$, as used in @cameletti2013.
### prepare the covariates
xnames <- c("A", "WS", "TEMP", "HMIX", "PREC", "EMI")
xmean <- colMeans(pdata[, xnames])
xsd <- sapply(pdata[xnames], sd)
### prepare the data (st loc, scaled covariates and log PM10)
dataf <- data.frame(
  pdata[c("UTMX", "UTMY", "time")],
  scale(pdata[xnames], xmean, xsd),
  y = log(pdata$PM10))
str(dataf)
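As a quick check of the standardization above, the scaled covariates should now have mean approximately zero and standard deviation one:

round(colMeans(dataf[xnames]), 4)
round(sapply(dataf[xnames], sd), 4)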
We consider the following linear mixed model for the outcome
$$ \mathbf{y} = \mathbf{W}\mathbf{\beta} + \mathbf{A}\mathbf{u} + \mathbf{e}, $$
where $\mathbf{\beta}$ is the vector of fixed effects, or regression coefficients including the intercept, for the matrix of covariates $\mathbf{W}$; $\mathbf{u}$ is the spatio-temporal random effect, with $\mathbf{A}$ the projector matrix from the discretized domain to the data; and $\mathbf{e}$ is the observation error. The spatio-temporal random effect $\mathbf{u}$ is defined on a continuous spacetime domain and discretized using meshes over time and space. The difference from @cameletti2013 is that we now use the models in @lindgren2023 for $\mathbf{u}$.
Define a temporal mesh with knots spaced by `h`, where `h = 1` means one knot per day.
nt <- max(pdata$time)
h <- 1
tmesh <- inla.mesh.1d(
  loc = seq(1, nt + h/2, h),
  degree = 1)
tmesh$n
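The knots can be inspected directly; with `h = 1` they sit at each day:

head(tmesh$loc)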
Define a spatial mesh, the same used in @cameletti2013.
smesh <- inla.mesh.2d(
  cbind(locs[, 2], locs[, 3]),
  loc.domain = pborders,
  max.edge = c(50, 300),
  offset = c(10, 140),
  cutoff = 5,
  min.angle = c(26, 21))
smesh$n
Visualize the spatial mesh, the border and the locations.
par(mfrow = c(1, 1), mar = c(0, 0, 1, 0))
plot(smesh, asp = 1)
lines(pborders, lwd = 2, col = "green4")
points(locs[, 2:3], pch = 19, col = "blue")
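Alternatively, since inlabru provides `gg()` methods for meshes, the same figure can be sketched with ggplot2 (assuming the coordinates are in the first two columns of `pborders` and in columns 2 and 3 of `locs`, as used above):

ggplot() +
  gg(smesh) +
  geom_path(aes(x = pborders[, 1], y = pborders[, 2]), color = "green4") +
  geom_point(aes(x = locs[, 2], y = locs[, 3]), color = "blue") +
  coord_fixed() + xlab("") + ylab("")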
We set the prior for the likelihood precision considering a PC-prior, @simpson2017pcprior, through the following probabilistic statements: P($\sigma_e > U_{\sigma_e}$) = $\alpha_{\sigma_e}$, using $U_{\sigma_e}$ = 1 and $\alpha_{\sigma_e} = 0.05$.
lkprec <- list(
  prec = list(prior = "pcprec", param = c(1, 0.05)))
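This PC prior corresponds to an exponential prior on $\sigma_e$ with rate $\lambda = -\log(\alpha_{\sigma_e})/U_{\sigma_e}$, see @simpson2017pcprior. A minimal sketch, just restating the parameters above, to visualize the implied prior density:

## PC prior on sigma_e: exponential with rate -log(alpha)/U
U <- 1
alpha <- 0.05
lambda <- -log(alpha) / U
curve(dexp(x, rate = lambda), from = 0, to = 2,
      xlab = expression(sigma[e]), ylab = "prior density")
abline(v = U, lty = 2) ## P(sigma_e > U) = alpha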
With inlabru we can define the likelihood model with the `like()` function and then use it for fitting models with different linear predictors later.
lhood <- like(
  formula = y ~ .,
  family = "gaussian",
  control.family = list(hyper = lkprec),
  data = dataf)
The linear predictor, the right-hand side of the formula, can be defined using the same expression for both models that we are going to fit:
M <- ~ -1 + Intercept(1) + A + WS + TEMP + HMIX + PREC + EMI +
  field(list(space = cbind(UTMX, UTMY), time = time),
        model = stmodel)
The implementation of the spacetime model uses the `cgeneric` interface in INLA; see its documentation for details. The model is therefore backed by C code, compiled as a static library, that builds the precision matrix and computes the parameter priors. This code is included in the INLAspacetime package, and it is also being copied to the INLA package and compiled with the same compilers in order to avoid possible mismatches. In order to use it, we have to define the matrices and vectors needed, including the prior parameter definitions.
The class of models in @lindgren2023 has the spatial range, the temporal range and the marginal standard deviation as parameters. We consider PC priors, as in @fuglstad2015pcmatern, for these parameters, defined from the probability statements P($r_s<U_{r_s}$) = $\alpha_{r_s}$, P($r_t<U_{r_t}$) = $\alpha_{r_t}$ and P($\sigma<U_{\sigma}$) = $\alpha_{\sigma}$. We take $U_{r_s}=150$, $U_{r_t}=10$, $U_{\sigma}=5$ and $\alpha_{r_s}=\alpha_{r_t}=\alpha_{\sigma}=0.05$, matching the `control.priors` values in the code below.
One of the models in @lindgren2023 is selected by choosing $\alpha_t$, $\alpha_s$ and $\alpha_e$ as integer numbers. We start with the model $\alpha_t=1$, $\alpha_s=0$ and $\alpha_e=2$, which has a separable spatio-temporal covariance, and we fit some of the other models later.
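The three integers are collapsed into a single string label; a trivial sketch of the encoding:

## e.g. alpha_t = 1, alpha_s = 0 and alpha_e = 2 give model "102"
paste(c(1, 0, 2), collapse = "")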
stmodel <- stModel.define(
  smesh, tmesh, model,
  control.priors = list(
    prs = c(150, 0.05),
    prt = c(10, 0.05),
    psigma = c(5, 0.05)),
  constr = TRUE)
We define the model object with the function `stModel.define()`, where the model is selected through the values of $\alpha_t$, $\alpha_s$ and $\alpha_e$ collapsed into a string. We also set an overall integrate-to-zero constraint, which is not needed but helps the identification of the model components. It uses weights based on the node volumes of both the temporal and spatial meshes, and is set automatically when defining the model by adding `constr = TRUE`.
model <- "102" <<stmodeldef>>
Initial values for the hyperparameters help to fit the models in less computing time. It is also important to remember that each dataset has its own parameter scale. For example, the spatial domain fits within a box of about $203.7$ by $266.5$ kilometers, which we already took into account when building the mesh and setting the prior for $r_s$.
We can set initial values for the log of the parameters so that fewer iterations are needed to converge:
## internal scale: log(1/sigma_e^2), log(r_s), log(r_t), log(sigma_u)
theta.ini <- c(4, 7, 7, 1)
bru(M, lhood,
    options = list(
      control.mode = list(theta = theta.ini, restart = TRUE),
      control.compute = ctrc))
The code to fit the model through inlabru is
fit102 <-
  <<fitcode>>
Summary of the posterior marginal distributions for the fixed effects
fit102$summary.fixed[, c(1, 2, 3, 5)]
For the hyperparameters, we transform the posterior marginal distributions from the internal scale, $\log(1/\sigma^2_e)$, $\log(r_s)$, $\log(r_t)$ and $\log(\sigma)$, to the user-scale parametrization, $\sigma_e$, $r_s$, $r_t$ and $\sigma$, respectively.
post.h <- list(
  sigma_e = inla.tmarginal(function(x) exp(-x/2),
    fit102$internal.marginals.hyperpar[[1]]),
  range_s = inla.tmarginal(function(x) exp(x),
    fit102$internal.marginals.hyperpar[[2]]),
  range_t = inla.tmarginal(function(x) exp(x),
    fit102$internal.marginals.hyperpar[[3]]),
  sigma_u = inla.tmarginal(function(x) exp(x),
    fit102$internal.marginals.hyperpar[[4]])
)
Then we compute and show a summary of these:
shyper <- t(sapply(post.h, function(m)
  unlist(inla.zmarginal(m, silent = TRUE))))
shyper[, c(1, 2, 3, 7)]
However, it is better to look at the posterior marginals themselves, and we will visualize them later.
The model fitted in @cameletti2013 includes two more covariates and sets up a model on a discrete temporal domain, where the temporal correlation is modeled as a first order autoregression with parameter $\rho$. The model fitted here is defined on a continuous temporal domain with the temporal range parameter $r_t$. However, the first order autocorrelation can be taken as $\rho = \exp(-h\sqrt{8\nu}/r_t)$, where $h$ is the temporal resolution used in the temporal mesh and $\nu$ is equal to $0.5$ for the fitted model. We can compare our results with Table 3 in @cameletti2013 with
c(shyper[c(1, 4, 2), 1], ## posterior means: sigma_e, sigma_u, range_s
  a = exp(-h * sqrt(8 * 0.5) / shyper[3, 1])) ## implied rho from range_t
We now fit model $121$ for $\mathbf{u}$ as well, using the same code to build the model matrices
model <- "121" <<stmodeldef>>
and use the same code for fitting as follows
fit121 <-
  <<fitcode>>
We join these fits into a list object to make them easier to work with
results <- list("u102" = fit102, "u121" = fit121)
The computing time for each model fit is
sapply(results, function(r) r$cpu.used)
and the numbers of function calls during the optimization are
sapply(results, function(r) r$misc$nfunc)
The posterior modes of the parameters for each model (in the internal scale) are
sapply(results, function(r) r$mode$theta)
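Since these modes are in the internal scale, a short sketch (reusing the transformations above) maps them to the user scale, $\sigma_e$, $r_s$, $r_t$ and $\sigma_u$:

modes <- sapply(results, function(r) r$mode$theta)
rbind(sigma_e = exp(-modes[1, ] / 2), ## log(1/sigma_e^2) -> sigma_e
      range_s = exp(modes[2, ]),
      range_t = exp(modes[3, ]),
      sigma_u = exp(modes[4, ]))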
We compute the posterior marginal distributions for the hyperparameters in the user-interpretable scale, as we did before for the first model, with
posts.h2 <- lapply(1:2, function(m) vector("list", 4L))
for (m in 1:2) {
  posts.h2[[m]]$sigma_e <- data.frame(
    parameter = "sigma_e",
    inla.tmarginal(
      function(x) exp(-x/2),
      results[[m]]$internal.marginals.hyperpar[[1]]))
  for (p in 2:4) {
    posts.h2[[m]][[p]] <- data.frame(
      parameter = c(NA, "range_s", "range_t", "sigma_u")[p],
      inla.tmarginal(
        function(x) exp(x),
        results[[m]]$internal.marginals.hyperpar[[p]]))
  }
}
Join these all to make visualization easier
posts.df <- rbind(
  data.frame(model = "102", do.call(rbind, posts.h2[[1]])),
  data.frame(model = "121", do.call(rbind, posts.h2[[2]]))
)
ggplot(posts.df) +
  geom_line(aes(x = x, y = y, group = model, color = model)) +
  ylab("Density") + xlab("") +
  facet_wrap(~parameter, scales = "free")
The comparison of the parameters of $\mathbf{u}$ across different models has to be done in light of the resulting covariance functions, as illustrated in @lindgren2023. The fitted $\sigma_e$ is comparable across models, and we can see that under model $121$ its posterior marginal is concentrated at lower values than under model $102$.
We can look at the posterior mean of $u$ from both models and see that under model '121' there is a wider spread.
par(mfrow = c(1, 1), mar = c(3, 3, 0, 0), mgp = c(2, 1, 0))
uu.hist <- lapply(results, function(r)
  hist(r$summary.random$field$mean, -60:60/20, plot = FALSE))
ylm <- range(uu.hist[[1]]$count, uu.hist[[2]]$count)
plot(uu.hist[[1]], ylim = ylm,
     col = rgb(1, 0.1, 0.1, 1.0), border = FALSE,
     xlab = "u", main = "")
plot(uu.hist[[2]], add = TRUE,
     col = rgb(0.1, 0.1, 1, 0.5), border = FALSE)
legend("topleft", c("separable", "non-separable"),
       fill = rgb(c(1, 0.1), 0.1, c(0.1, 1), c(1, 0.5)),
       border = "transparent", bty = "n")
We can also check fitting statistics such as DIC, WAIC, the negative of the log of the probability ordinates (LPO) and its cross-validated version (LCPO), summarized as the mean.
t(sapply(results, function(r) {
  c(DIC = mean(r$dic$local.dic, na.rm = TRUE),
    WAIC = mean(r$waic$local.waic, na.rm = TRUE),
    LPO = -mean(log(r$po$po), na.rm = TRUE),
    LCPO = -mean(log(r$cpo$cpo), na.rm = TRUE))
}))
One may be interested in evaluating the model predictions. The leave-one-out strategy has been available in INLA for several years; see @Heldetal2010cpo for details. Recently, an automatic group cross-validation strategy was implemented; see @zhedongH2023gcpo for details.
g5cv <- lapply(
  results, inla.group.cv,
  num.level.sets = 5,
  strategy = "posterior",
  size.max = 50)
This detects, for each observation, the set of observations whose posterior linear predictors are correlated with it, including the observation itself. For the 100th observation under model "102" we have
g5cv$u102$group[[100]]
and for the result under model "121" we have
g5cv$u121$group[[100]]
which intersects the previous set but is not identical to it, given the different model setups.
We can check which are these observations in the dataset
dataf[g5cv$u102$group[[100]]$idx, ]
and find that most of them are at the same location at nearby times.
We can compute the negative of the mean of the log score, so that lower values are better
sapply(g5cv, function(r) -mean(log(r$cv), na.rm = TRUE))
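For comparison, a plain leave-one-out version can be sketched with the same function, assuming `num.level.sets = -1` requests the leave-one-out case as described in the `inla.group.cv()` documentation:

g1cv <- lapply(results, inla.group.cv, num.level.sets = -1)
sapply(g1cv, function(r) -mean(log(r$cv), na.rm = TRUE))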