Description

Spatial point process model fitted with INLA. This function is essentially a specialized wrapper around inla.
Arguments
formula

A formula that only relates the response and the explanatory variables.

sPoints

A SpatialPoints* object with the sampled locations.

ppWeight

An object of class ppWeight.

explanaMesh

An object of class explanaMesh.

smooth

A single value ranging between 0 and 2 passed to inla.spde2.pcmatern to define the smoothness of the Matérn SPDE model.

prior.range

A vector of length 2, c(range0, Prange), specifying that P(ρ < range0) = Prange, where ρ is the spatial range of the random field. If Prange is NA, then range0 is used as a fixed range value. Default is c(0.05, 0.01).

prior.sigma

A vector of length 2, c(sigma0, Psigma), specifying that P(σ > sigma0) = Psigma, where σ is the marginal standard deviation of the field. If Psigma is NA, then sigma0 is used as a fixed standard deviation value. Default is c(1, 0.01).

many

Logical. Whether the data in sPoints should be treated as a very large sample; see Details. Default is FALSE.

fix

A vector with the names of variables in the model that should be fixed to a given value when making predictions. These values are used to map the intensities across the study area for a given value. Currently, the maximum of each variable is used as the fixed value, but this should be made more flexible in the future, for example to work more easily with climate change scenarios. Default is NULL.

sboffset

A character string with the name of the variable in the raster stack that should be used as an offset to scale down the integration weights according to the level of effort across the study region. See Details for further explanations. Default is NULL.

orthoCons

Set to TRUE to impose orthogonality constraints when estimating the model. Default is FALSE.

...

Further arguments passed to inla.
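As an illustration of how these arguments fit together, here is a hedged sketch of a call (not run). The objects spdf, weights, and xMesh and the covariate elevation are placeholders, assumed to have been created beforehand with the package's helper functions:

```r
## Hypothetical call; all object and covariate names are placeholders.
mdl <- ppSpace(y ~ elevation,                # model formula
               sPoints = spdf,               # sampled locations
               ppWeight = weights,           # dual-mesh integration weights
               explanaMesh = xMesh,          # covariates associated with the mesh
               prior.range = c(0.05, 0.01),  # P(range < 0.05) = 0.01
               prior.sigma = c(1, 0.01),     # P(sigma > 1) = 0.01
               many = TRUE)                  # aggregate samples to the mesh edges
```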
Details

If the argument many = TRUE, the estimation and the prediction will be carried out solely at the mesh edges, whereas when many = FALSE the estimation will be carried out at the mesh edges and at the sampled locations. When the number of samples is very large (e.g. tens of thousands of samples or more), using many = TRUE can be much more computationally efficient. However, there is a precision trade-off: when many = TRUE, each sample is associated with an edge, and the model is constructed using the number of samples associated with an edge as an importance value. In doing so, some spatial precision is lost in exchange for speed.
It is possible to build a model without any explanatory variables by defining the formula as y ~ -1 or y ~ 0. Using this formulation, it is not necessary to rerun the explanaMesh function. However, defining a model with the formula y ~ 1 will return an error, because the model implemented always includes an intercept.
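For instance, a covariate-free model can be sketched as follows (object names are placeholders, not outputs of this documentation):

```r
## Intercept-only point process model; spdf, weights, and xMesh are placeholders.
mdlNoX <- ppSpace(y ~ -1, sPoints = spdf, ppWeight = weights,
                  explanaMesh = xMesh)
## ppSpace(y ~ 1, ...) would return an error: the intercept is always added internally.
```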
The sampling bias offset argument sboffset is used to scale down the weights (w) obtained from the dual mesh using a variable representing effort. This variable has to be a layer in the raster stack given for the predictors. Specifically, values in the given raster layer are summed for each polygon in the dual mesh to summarize the effort per polygon; the extraction is made exact by using the exactextractr package. Once summed, the values for each polygon (e) are 1) scaled by the weights, 2) rescaled between 0 and 1, and 3) multiplied with the original weights, i.e. ((e/w) / max(e/w)) * w, to adjust the weights in the integration mesh. This is an adaptation of Simpson et al. (2016). Note that polygons from the dual mesh that partially overlap the region of interest get the weight associated with their area overlapping the study region, and the effort considered is the effort associated with this overlapping area.
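The three-step weight adjustment above amounts to the following base R arithmetic, shown here with hypothetical weights w and per-polygon effort values e:

```r
w <- c(2, 4, 1, 3)    # hypothetical dual-mesh integration weights
e <- c(10, 0, 5, 30)  # hypothetical effort summed over each dual-mesh polygon

scaled   <- e / w                  # 1) scale effort by the weights
rescaled <- scaled / max(scaled)   # 2) rescale between 0 and 1
wAdj     <- rescaled * w           # 3) multiply with the original weights

wAdj                               # 1.0 0.0 0.5 3.0
all.equal(wAdj, ((e / w) / max(e / w)) * w)  # TRUE
```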
Value

An object of class ppSpace that includes a model output, which is the model output of INLA. In addition, it includes a series of attributes:

- The formula used to construct the model.
- A …
- A matrix with all the explanatory variables used to construct the model. If there were factors in the original set of explanatory variables, they are expanded into dummy variables in this matrix.
- A matrix with all the explanatory variables used to construct the model. If there were factors in the original set of explanatory variables, they are expanded into dummy variables in this matrix.
- An object of class …
- An object of class …
References

Simpson, D., Illian, J. B., Lindgren, F., Sørbye, S. H. and Rue, H. 2016. Going off grid: computationally efficient inference for log-Gaussian Cox processes. Biometrika, 103(1): 49-70. https://doi.org/10.1093/biomet/asv064