# predGP: GP Prediction/Kriging in laGP: Local Approximate Gaussian Process Regression

## Description

Perform Gaussian process prediction (under an isotropic or separable formulation) at new `XX` locations using a GP object stored on the C-side.

## Usage

```r
predGP(gpi, XX, lite = FALSE, nonug = FALSE)
predGPsep(gpsepi, XX, lite = FALSE, nonug = FALSE)
```

## Arguments

- `gpi`: a C-side GP object identifier (positive integer); e.g., as returned by `newGP`
- `gpsepi`: similar to `gpi`, but indicating a separable GP object, as returned by `newGPsep`
- `XX`: a `matrix` or `data.frame` containing a design of predictive locations
- `lite`: a scalar logical indicating whether (`lite = FALSE`, default) or not (`lite = TRUE`) a full predictive covariance matrix should be returned, as would be required for plotting random sample paths, but substantially increasing computation time if only point-prediction is required
- `nonug`: a scalar logical indicating if a (nonzero) nugget should be used in the predictive equations; this allows the user to toggle between visualizations of uncertainty due just to the mean, and a full quantification of predictive uncertainty. The latter (default `nonug = FALSE`) is the standard approach, but the former may work better in some sequential design contexts. See, e.g., `ieciGP`

## Details

Returns the parameters of Student-t predictive equations. By default, these include a full predictive covariance matrix between all `XX` locations. However, this can be slow when `nrow(XX)` is large, so a `lite` option is provided, which returns only the diagonal of that matrix.

GP prediction is sometimes called “kriging”, especially in the spatial statistics literature, so this function could also be described as returning evaluations of the “kriging equations”.
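As a minimal sketch of the `lite` trade-off described above (on a small 1-d design of my own devising, not from the package docs), the default call returns a full `Sigma` matrix while `lite = TRUE` returns only its diagonal as `s2`:

```r
library(laGP)

## tiny 1-d example: fit a GP to a noiseless sine curve
X <- matrix(seq(0, 2*pi, length=7), ncol=1)
Z <- sin(X)
gpi <- newGP(X, Z, d=2, g=1e-6)

## 100 predictive locations
XX <- matrix(seq(0, 2*pi, length=100), ncol=1)

## default lite=FALSE: full 100 x 100 predictive covariance
p <- predGP(gpi, XX)
dim(p$Sigma)

## lite=TRUE: only the length-100 diagonal, as a field named s2
plite <- predGP(gpi, XX, lite=TRUE)
length(plite$s2)

## the diagonal of Sigma and s2 should agree
all.equal(diag(p$Sigma), plite$s2)

deleteGP(gpi)
```

The full matrix is only needed for joint operations such as drawing sample paths; for pointwise means and error bars, `lite = TRUE` avoids the O(`nrow(XX)`^2) storage and computation.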

## Value

The output is a `list` with the following components.

- `mean`: a vector of predictive means of length `nrow(XX)`
- `Sigma`: the covariance matrix for a multivariate Student-t distribution; alternatively, if `lite = TRUE`, a field `s2` contains only the diagonal of this matrix
- `df`: a Student-t degrees of freedom scalar (applies to all `XX` locations)
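The components above fully parameterize the joint Student-t predictive, so sample paths can be drawn directly from them. A hedged sketch follows, using `mvtnorm::rmvt` as one (assumed, interchangeable) multivariate-t sampler; the design is illustrative, not from the package docs:

```r
library(laGP)
library(mvtnorm)  ## assumption: any multivariate Student-t sampler would do

## small 1-d fit, as an illustration
X <- matrix(seq(0, 2*pi, length=7), ncol=1)
Z <- sin(X)
gpi <- newGP(X, Z, d=2, g=1e-6)
XX <- matrix(seq(0, 2*pi, length=50), ncol=1)

## lite=FALSE (default) so that the full Sigma is returned
p <- predGP(gpi, XX)

## draw 5 joint sample paths: t deviates scaled by Sigma, shifted by mean
paths <- rmvt(5, sigma=p$Sigma, df=p$df)
paths <- paths + matrix(rep(p$mean, 5), nrow=5, byrow=TRUE)

## visualize the paths against the predictive mean
matplot(XX, t(paths), type="l", lty=1, col="gray", xlab="x", ylab="y")
lines(XX, p$mean, lwd=2)

deleteGP(gpi)
```

With `lite = TRUE` this is not possible, since `s2` carries no information about correlation between locations.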

## Author(s)

Robert B. Gramacy [email protected]

## References

For standard GP prediction, refer to any graduate text, e.g., Rasmussen & Williams, *Gaussian Processes for Machine Learning*.

## See Also

`vignette("laGP")`, `newGP`, `mleGP`, `jmleGP`
## Examples

```r
## a "computer experiment" -- a much smaller version than the one shown
## in the aGP docs

## Simple 2-d test function used in Gramacy & Apley (2015);
## thanks to Lee, Gramacy, Taddy, and others who have used it before
f2d <- function(x, y=NULL)
{
  if(is.null(y)) {
    if(!is.matrix(x) && !is.data.frame(x)) x <- matrix(x, ncol=2)
    y <- x[,2]; x <- x[,1]
  }
  g <- function(z)
    return(exp(-(z-1)^2) + exp(-0.8*(z+1)^2) - 0.05*sin(8*(z+0.1)))
  z <- -g(x)*g(y)
}

## design with N=121
x <- seq(-2, 2, length=11)
X <- expand.grid(x, x)
Z <- f2d(X)

## fit a GP
gpi <- newGP(X, Z, d=0.35, g=1/1000)

## predictive grid with NN=400
xx <- seq(-1.9, 1.9, length=20)
XX <- expand.grid(xx, xx)
ZZ <- f2d(XX)

## predict
p <- predGP(gpi, XX, lite=TRUE)

## RMSE: compare to similar experiment in aGP docs
sqrt(mean((p$mean - ZZ)^2))

## visualize the result
par(mfrow=c(1,2))
image(xx, xx, matrix(p$mean, nrow=length(xx)), col=heat.colors(128),
      xlab="x1", ylab="x2", main="predictive mean")
image(xx, xx, matrix(p$mean-ZZ, nrow=length(xx)), col=heat.colors(128),
      xlab="x1", ylab="x2", main="bias")

## clean up
deleteGP(gpi)

## see the newGP and mleGP docs for examples using lite = FALSE for
## sampling from the joint predictive distribution
```