Description
Finite Gibbs hardcore model has repulsion radius at s given by some latent field R(s). Given a point pattern x, we estimate R using Bayesian MCMC.
Usage

fitRFHC(x, iter, ncol, nu, range, mupars, taupars, Rpars, beta,
        predict.R, predict.x, RtoNND, type, seed, dbg,
        print.prefix, expand, cyclic)

Arguments
x
    Data point pattern, preferably in spatstat's ppp format.
iter
    Number of MCMC iterations.
ncol
    x-dimension of the latent field grid; the y-dimension is computed automatically.
nu
    Differentiability (smoothness) parameter of the Matérn covariance. Supported values: 1, 2, 3.
range
    The range parameter of the Matérn covariance.
mupars
    Vector of parameters for the mean parameter; see 'Details'.
taupars
    Vector of parameters for the precision parameter; see 'Details'.
Rpars
    Vector of parameters modifying the fitting of the latent field; see 'Details'.
beta
    The expected packing density parameter; see 'Details'.
predict.R
    Frequency for predicting R-values outside the data points using conditional simulation.
predict.x
    Frequency for predicting the point process; see 'Details'.
RtoNND
    Multiplier for the initial R(s) values, s in x: they are set to the nearest neighbour distances multiplied by 'RtoNND'.
type
    Type of repulsion. "HC" is hard-core; "HS" is hard-spheres (not yet supported).
seed
    Random seed. If not given, a seed based on the system time is used.
dbg
    Verbosity of output during MCMC, from 0 upwards.
print.prefix
    Prefix for run-time messages when dbg > 0 (useful for large estimation runs over several nu, beta, etc. values).
expand, cyclic
    Parameters for the Gaussian field simulation.
Details

The function fits a finite Gibbs hard-core model where the hard-core radius, say R, depends on the spatial location and varies smoothly in the window. We use an exponentially transformed Matérn field as the prior for the smooth variation.
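As background, the stationary Matérn covariance that controls the prior smoothness can be evaluated directly in R with the built-in Bessel function. This is a sketch in a common parameterisation; the package's exact scaling of 'range' may differ.

```r
# Matérn covariance in a common parameterisation (illustrative only;
# check the package source for the exact form it uses).
matern_cov <- function(h, nu, range, sigma2 = 1) {
  out <- numeric(length(h))
  z <- h / range
  pos <- z > 0
  out[pos] <- sigma2 * (2^(1 - nu) / gamma(nu)) * z[pos]^nu * besselK(z[pos], nu)
  out[!pos] <- sigma2   # limit at zero distance
  out
}

h <- c(0, 0.1, 0.5, 1)
matern_cov(h, nu = 1, range = 0.2)   # decreases with distance
```

Larger 'nu' gives a smoother (more differentiable) field, which is why only the small values 1, 2, 3 are supported.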
Give the data in ppp format (or as the result of simulateRFHC). The number of data points is denoted by 'n' in what follows.
We estimate the latent, smoothly varying radius field R using a log-Gaussian prior. The Gibbs likelihood is replaced with a 'modified pseudo-likelihood': instead of using the Papangelou intensity, which involves integration over the unknown R, we model the discrepancy between the nearest neighbour distances of the data and the R-values at the data locations. If 'Ri' denotes the value R(x_i) and 'di' the nearest neighbour distance of the point 'x_i', the connection between the R and d values is modelled by assuming the terms

(di - Ri)/di ~ Beta(1, beta)

are independent.
The ‘beta’ parameter controls the amount of discrepancy allowed between ‘Ri’ and ‘di’. If set to 1, the model only says that 'Ri is between 0 and di'; if set very high, we assume that 'Ri equals di'. In practice it should be between 5 and 25, corresponding to approximate packing densities, i.e. the number of points the field can support, of 50-100%. The appropriate value also depends on the smoothness of the field. See the referenced article for more details.
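For concreteness, the discrepancy model can be sketched in plain R as follows. The helper names `nnd` and `mod_pseudo_loglik` are illustrative, not part of the package.

```r
# Sketch of the 'modified pseudo-likelihood': nearest neighbour
# distances d_i are compared to candidate radii R_i via
# (d_i - R_i)/d_i ~ Beta(1, beta).

nnd <- function(xy) {
  # pairwise distances; diagonal set to Inf so a point is not its own neighbour
  D <- as.matrix(dist(xy))
  diag(D) <- Inf
  apply(D, 1, min)
}

mod_pseudo_loglik <- function(R, d, beta) {
  u <- (d - R) / d                          # relative discrepancy in (0, 1)
  if (any(u <= 0 | u >= 1)) return(-Inf)    # each R_i must lie below d_i
  sum(dbeta(u, shape1 = 1, shape2 = beta, log = TRUE))
}

# toy pattern of 4 points in the unit square
xy <- cbind(x = c(0.1, 0.9, 0.1, 0.9), y = c(0.1, 0.1, 0.9, 0.9))
d  <- nnd(xy)
mod_pseudo_loglik(R = 0.5 * d, d = d, beta = 10)
```

Increasing 'beta' concentrates the Beta(1, beta) mass near 0, i.e. pushes each 'Ri' toward 'di'.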
The hyper priors are given in vector form as follows:
* mupars=c(free=1, init=NA, m=0, s2=100): The first element (‘free’) flags whether the mean parameter is to be estimated. The initial value (‘init’) can be missing; if so, it is set to the mean of the initial log R-values (see below). The prior for the mean is Gaussian with parameters ‘m’ and ‘s2’. We use a Gibbs step, so no proposal distribution needs to be defined.
* taupars=c(free=0, init=5, F=1.05, alpha=0.25, beta=0.005): The first two elements are as in ‘mupars’. For the proposal we use tau'=f*tau, where ‘f’ is Lambert distributed with parameter ‘F’. The prior is Gamma(alpha, beta).
Note 1: Fixed values can also be given in the form ‘mupars=c(free=0, <init>)’ or simply ‘mupars=<init>’, where ‘<init>’ is your fixed value. The same applies to tau.
Note 2: The range is best set to a fixed value, as the data carry very little information about it. If you must, you can make it free using the syntax of ‘taupars’. Be aware that this increases the computational load enormously, since the n x n precision matrix must be factorised at each step.
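The vector forms above can be written directly in R; a small sketch using the default values quoted above:

```r
# Hyper-prior specifications as named vectors, mirroring the
# documented defaults.
mupars  <- c(free = 1, init = NA, m = 0, s2 = 100)
taupars <- c(free = 0, init = 5, F = 1.05, alpha = 0.25, beta = 0.005)

# Shorthand for a fixed value (Note 1): both forms fix mu at 0.2
mupars_fixed1 <- c(free = 0, 0.2)
mupars_fixed2 <- 0.2
```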
We primarily estimate the R-values at the data points, say ‘Robs=(R1, R2, ..., Rn)’. When a prediction outside the data points, say ‘Rs’ with union(Rs, Robs)=R, is requested (every min(predict.R, predict.x) steps), we conditionally simulate ‘Rs’ given ‘Robs’.
We also sample from the marginal posterior of the point process x every ‘predict.x’ steps. When this is requested, the algorithm first forces a prediction of ‘Rs’, and then calls simulateRFHC with the current ‘R=(Robs, Rs)’ as the parameter ‘pre.r.field’. The point count n and the window are taken from the data.
The initial ‘Robs’ values are set to the nearest neighbour distances multiplied by ‘RtoNND’. At each iteration we attempt to update ‘Rpars$choose’ randomly chosen ‘Robs’ values. If ‘Rpars$from’ is NULL, it is taken to be the vector (1, ..., n). Otherwise you can give a subset in order to keep some locations fixed at their initial values.
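A sketch of this initialisation and per-iteration subset selection (all names are illustrative):

```r
# Initialise Robs from nearest neighbour distances, then pick which
# R-values to update in one iteration.
nn_dist <- function(xy) {
  D <- as.matrix(dist(xy)); diag(D) <- Inf
  apply(D, 1, min)
}

xy     <- cbind(runif(20), runif(20))
RtoNND <- 0.5
Robs   <- RtoNND * nn_dist(xy)        # initial R-values at the data points

Rpars <- list(choose = 5, from = NULL)
n     <- nrow(xy)
from  <- if (is.null(Rpars$from)) seq_len(n) else Rpars$from
idx   <- sample(from, Rpars$choose)   # Robs entries updated this iteration
```

Passing a proper subset as ‘Rpars$from’ simply excludes the remaining indices from ‘idx’, so those R-values never move from their initial state.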
Value

Object of class rfhcFit. It contains the following elements:
$mu, $tau : The MCMC histories. If fixed, a single value.
$range : Fixed value. If free, the history.
$Robs : (iter x n) matrix. Each row is a state of Robs, so each column i is the history of R(x_i).
$Rpred : Matrix. Each row is a prediction of Rs union Robs. The number of rows depends on predict.R and predict.x.
$time : Time used.
$pphist : List with the x predictions.
$options : Options set for the algorithm, excluding the hyper parameters, R update and beta.
$dataholders : Indices in the R-grid (in column-wise order) of those R-values that hold a data point, i.e. R[dataholders] = Robs.
$acceptance : Number of accepted steps, separately for Robs and the hyper parameters.
$parameters : The hyper parameter options, R update options and beta.
$data : The given data object, ‘x’.
$Rlast : Last state of R as an im object. Note that, depending on the ‘predict.R’/‘predict.x’ parameters, the Rs values might not be up to date with Robs.
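For illustration, posterior summaries can be read off the returned object roughly as below. The list here is a hand-made mock with (part of) the documented structure, not real fitRFHC output.

```r
# Mock object imitating the documented rfhcFit structure, used only
# to illustrate element access.
iter <- 100; n <- 10
fit <- list(
  mu   = rnorm(iter, mean = -1, sd = 0.1),         # history of mu
  tau  = 5,                                        # tau was kept fixed
  Robs = matrix(abs(rnorm(iter * n, 0.05, 0.01)), iter, n)
)

mu_hat   <- mean(fit$mu)         # posterior mean of the field mean
Robs_hat <- colMeans(fit$Robs)   # posterior mean radius at each data point
```

In practice one would discard an initial burn-in portion of each history before averaging.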
References

Rajala, T. and Penttinen, A. (2012): Bayesian analysis of a Gibbs hard-core point pattern model with varying repulsion range. Computational Statistics & Data Analysis.
See Also

plot.rfhcFit, summary.rfhcFit, print.rfhcFit, plotRtraj.rfhcFit, plotRpost.rfhcFit.
Examples

x <- simulateRFHC()
y <- fitRFHC(x)
## Not run: plot(y)