Description
Implementation of the Sparse Gaussian Process model for 3D spatial interpolation. Extends the GP class.
Usage

SPGP(data, model, value, mean, trend, pseudo_inputs,
     force.interp, reg.v, tangents, reg.t,
     pseudo_tangents, variational)
Arguments

data
    A
model
    The covariance model. A
value
    The column name of the variable to be modeled. It is assumed the column does not contain missing values.
mean
    The global mean. Irrelevant if a trend is provided.
trend
    The model's trend component. A formula in character format.
pseudo_inputs
    The desired number of pseudo-inputs (whose coordinates will be sampled from the data), a matrix or data frame with their coordinates, or a 3D object.
force.interp
    Indices of points that must be interpolated exactly.
reg.v
    Regularization to improve stability. A single value or a vector with length matching the number of data points.
tangents
    A
reg.t
    Regularization for structural data. A single value or a vector with length matching the number of structural data.
pseudo_tangents
    The desired number of pseudo-structural data (whose coordinates will be sampled from the data) or a
variational
    Use the variational approach?
Details

This method builds an SPGP object with all the information needed to make predictions at new data points.

trend must be a character string containing a formula as a function of uppercase X, Y, and Z. The most common choice is the linear trend, "~ X + Y + Z". For ordinary kriging, use "~1". If neither trend nor mean is given, the global mean is assumed to be the mean of the data values.
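For illustration, a call fitting a model with a linear trend might look like the sketch below. This is a hypothetical example: the data object, covariance model, and column name are placeholders not defined in this page, and the actual defaults may differ.

```r
# Hypothetical sketch -- 'surveys' and 'cov_model' are placeholder objects.
gp <- SPGP(data = surveys,          # 3D spatial data
           model = cov_model,       # covariance model
           value = "grade",         # column to interpolate
           trend = "~ X + Y + Z",   # linear trend; use "~1" for ordinary kriging
           pseudo_inputs = 50,      # pseudo-inputs sampled from the data
           variational = TRUE)      # variational approximation
```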
The SPGP works by compressing the information from all data points into a small number of pseudo-inputs. This yields computational gains, but the resulting model may be harder to train. Given the sparse nature of the model, the points specified in force.interp may still not be interpolated exactly, and the effect of tangent data may also be diminished.
The variational model is reported in the literature as more stable and less prone to overfitting. variational = FALSE corresponds to the Fully Independent Conditional (FIC) approach.
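The relationship between the two approximations can be sketched independently of this package. The following minimal Python/NumPy sketch is illustrative only: the kernel choice, noise model, and function names are assumptions, not this package's implementation. Both variants share the same predictive-mean formula; the FIC variant (variational = FALSE) additionally folds the residual diag(Kff − Qff) into the effective noise, following the cited references.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential covariance between row-point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def sparse_gp_mean(X, y, Z, Xs, noise=0.1, variational=True):
    """Sparse-GP predictive mean at test points Xs using pseudo-inputs Z.

    variational=True  -> VFE (Titsias 2009): effective noise is sigma^2 only.
    variational=False -> FIC (Snelson & Ghahramani 2006): the residual
                         diag(Kff - Qff) is added to the effective noise.
    """
    Kuu = rbf(Z, Z) + 1e-8 * np.eye(len(Z))  # jitter for stability
    Kuf = rbf(Z, X)
    lam = np.full(len(X), noise ** 2)        # effective per-point noise
    if not variational:
        # FIC diagonal correction: diag(Kff) - diag(Kfu Kuu^{-1} Kuf)
        q_diag = np.einsum("uf,uf->f", Kuf, np.linalg.solve(Kuu, Kuf))
        lam = lam + (1.0 - q_diag)           # rbf(x, x) == 1 on the diagonal
    A = Kuu + (Kuf / lam) @ Kuf.T            # Kuu + Kuf Lam^{-1} Kfu
    return rbf(Xs, Z) @ np.linalg.solve(A, (Kuf / lam) @ y)
```

When the pseudo-inputs coincide with the full data, the predictive mean reduces to that of the exact GP; with fewer pseudo-inputs it becomes an approximation, which is the source of both the computational gains and the training difficulties noted above.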
References

Snelson, E., Ghahramani, Z., 2006. Sparse Gaussian Processes using Pseudo-inputs. Adv. Neural Inf. Process. Syst. 18, 1257–1264.

Titsias, M., 2009. Variational Learning of Inducing Variables in Sparse Gaussian Processes. AISTATS 5, 567–574.

Bauer, M.S., van der Wilk, M., Rasmussen, C.E., 2016. Understanding Probabilistic Sparse Gaussian Process Approximations. Adv. Neural Inf. Process. Syst. 29.