# predict: Predict posterior mean and variance/covariance


## Predict posterior mean and variance/covariance

### Description

Acts on a `gp`, `dgp2`, or `dgp3` object. Calculates posterior mean and variance/covariance over specified input locations. Optionally calculates expected improvement (EI) over candidate inputs. Optionally utilizes SNOW parallelization.

### Usage

```
## S3 method for class 'gp'
predict(object, x_new, lite = TRUE, EI = FALSE, cores = 1, ...)

## S3 method for class 'dgp2'
predict(
object,
x_new,
lite = TRUE,
store_latent = FALSE,
mean_map = TRUE,
EI = FALSE,
cores = 1,
...
)

## S3 method for class 'dgp3'
predict(
object,
x_new,
lite = TRUE,
store_latent = FALSE,
mean_map = TRUE,
EI = FALSE,
cores = 1,
...
)

## S3 method for class 'gpvec'
predict(object, x_new, m = object$m, lite = TRUE, cores = 1, ...)

## S3 method for class 'dgp2vec'
predict(
object,
x_new,
m = object$m,
lite = TRUE,
store_latent = FALSE,
mean_map = TRUE,
cores = 1,
...
)

## S3 method for class 'dgp3vec'
predict(
object,
x_new,
m = object$m,
lite = TRUE,
store_latent = FALSE,
mean_map = TRUE,
cores = 1,
...
)
```

### Arguments

• `object`: object from `fit_one_layer`, `fit_two_layer`, or `fit_three_layer` with burn-in already removed

• `x_new`: matrix of predictive input locations

• `lite`: logical indicating whether to calculate only point-wise variances (`lite = TRUE`) or the full covariance (`lite = FALSE`)

• `EI`: logical indicating whether to calculate expected improvement (for minimizing the response)

• `cores`: number of cores to utilize in parallel, defaults to available cores minus one

• `...`: N/A

• `store_latent`: logical indicating whether to store and return mapped values of latent layers (two- or three-layer models only)

• `mean_map`: logical indicating whether to map hidden layers using the conditional mean (`mean_map = TRUE`) or a random sample from the full MVN distribution (two- or three-layer models only); `mean_map = FALSE` is not yet implemented for fits with `vecchia = TRUE`

• `m`: size of Vecchia conditioning sets (only for fits with `vecchia = TRUE`), defaults to the `m` used for MCMC

### Details

All iterations in the object are used for prediction, so the samples should already be burned in. Thinning the samples with `trim` will speed up computation. Posterior moments are calculated using conditional expectation and variance. By default, only point-wise variances are calculated; the full covariance may be requested with `lite = FALSE`.
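
As a minimal sketch (toy data and hypothetical settings chosen for illustration, not drawn from the package examples), prediction typically follows a fit and a call to `trim`:

```
library(deepgp)

# Toy one-dimensional data (illustration only)
f <- function(x) sin(2 * pi * x)
x <- matrix(seq(0, 1, length = 10), ncol = 1)
y <- f(x) + rnorm(10, sd = 0.05)

fit <- fit_one_layer(x, y, nmcmc = 2000)  # MCMC sampling of hyperparameters
fit <- trim(fit, 1000, 2)                 # remove burn-in and thin the chains

x_new <- matrix(seq(0, 1, length = 100), ncol = 1)
fit <- predict(fit, x_new, lite = FALSE)  # full posterior covariance in fit$Sigma
```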

Expected improvement is calculated with the goal of minimizing the response. See Chapter 7 of Gramacy (2020) for details.
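
Continuing the sketch above, EI values at the candidate inputs can be requested directly within the `predict` call (a hypothetical continuation):

```
# Request expected improvement at the candidate locations in x_new
fit_ei <- predict(fit, x_new, EI = TRUE)

# EI is largest at the most promising candidate for minimizing the response
x_star <- x_new[which.max(fit_ei$EI), , drop = FALSE]
```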

SNOW parallelization reduces computation time but requires more memory.
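
For instance, prediction could be spread over all but one of the detected cores (a sketch; the right core count depends on the machine and its memory):

```
# SNOW parallelization across most available cores
n_cores <- max(1, parallel::detectCores() - 1)
fit_par <- predict(fit, x_new, cores = n_cores)
```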

### Value

object of the same class with the following additional elements:

• `x_new`: copy of predictive input locations

• `mean`: predicted posterior mean, indices correspond to `x_new` locations

• `s2`: predicted point-wise variances, indices correspond to `x_new` locations (only returned when `lite = TRUE`)

• `s2_smooth`: predicted point-wise variances with `g` removed, indices correspond to `x_new` locations (only returned when `lite = TRUE`)

• `Sigma`: predicted posterior covariance, indices correspond to `x_new` locations (only returned when `lite = FALSE`)

• `Sigma_smooth`: predicted posterior covariance with `g` removed from the diagonal (only returned when `lite = FALSE`)

• `EI`: vector of expected improvement values, indices correspond to `x_new` locations (only returned when `EI = TRUE`)

• `w_new`: list of hidden layer mappings (only returned when `store_latent = TRUE`), list index corresponds to iteration and row index corresponds to `x_new` location (two or three layer models only)

• `z_new`: list of hidden layer mappings (only returned when `store_latent = TRUE`), list index corresponds to iteration and row index corresponds to `x_new` location (three layer models only)

Computation time is added to the computation time of the existing object.
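
As a sketch of how these elements might be accessed after the `lite = FALSE` call above (names follow the list in this section):

```
head(fit$mean)         # posterior predictive mean at x_new
head(diag(fit$Sigma))  # point-wise variances taken from the full covariance
plot(fit)              # the plot method displays predictions once they are stored
```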

### References

Sauer, A, RB Gramacy, and D Higdon. 2020. "Active Learning for Deep Gaussian Process Surrogates." Technometrics, to appear. arXiv:2012.08015.

Sauer, A, A Cooper, and RB Gramacy. 2022. "Vecchia-approximated Deep Gaussian Processes for Computer Experiments." Preprint, arXiv:2204.02904.

Gramacy, RB. 2020. Surrogates: Gaussian Process Modeling, Design, and Optimization for the Applied Sciences. Chapman Hall/CRC.

### Examples

```# See "fit_one_layer", "fit_two_layer", or "fit_three_layer"
# for an example

```
