predict: Predict posterior mean and variance/covariance

Description

Acts on a gp, gpvec, dgp2, dgp2vec, dgp3, or dgp3vec object. Calculates posterior mean and variance/covariance over specified input locations. Optionally calculates expected improvement (EI) or entropy over candidate inputs. Optionally utilizes SNOW parallelization.

Usage

## S3 method for class 'gp'
predict(
  object,
  x_new,
  lite = TRUE,
  return_all = FALSE,
  EI = FALSE,
  entropy_limit = NULL,
  cores = 1,
  ...
)

## S3 method for class 'dgp2'
predict(
  object,
  x_new,
  lite = TRUE,
  store_latent = FALSE,
  mean_map = TRUE,
  return_all = FALSE,
  EI = FALSE,
  entropy_limit = NULL,
  cores = 1,
  ...
)

## S3 method for class 'dgp3'
predict(
  object,
  x_new,
  lite = TRUE,
  store_latent = FALSE,
  mean_map = TRUE,
  return_all = FALSE,
  EI = FALSE,
  entropy_limit = NULL,
  cores = 1,
  ...
)

## S3 method for class 'gpvec'
predict(
  object,
  x_new,
  m = object$m,
  ordering_new = NULL,
  lite = TRUE,
  return_all = FALSE,
  EI = FALSE,
  entropy_limit = NULL,
  cores = 1,
  ...
)

## S3 method for class 'dgp2vec'
predict(
  object,
  x_new,
  m = object$m,
  ordering_new = NULL,
  lite = TRUE,
  store_latent = FALSE,
  mean_map = TRUE,
  return_all = FALSE,
  EI = FALSE,
  entropy_limit = NULL,
  cores = 1,
  ...
)

## S3 method for class 'dgp3vec'
predict(
  object,
  x_new,
  m = object$m,
  ordering_new = NULL,
  lite = TRUE,
  store_latent = FALSE,
  mean_map = TRUE,
  return_all = FALSE,
  EI = FALSE,
  entropy_limit = NULL,
  cores = 1,
  ...
)

Arguments

object

object from fit_one_layer, fit_two_layer, or fit_three_layer with burn-in already removed

x_new

matrix of predictive input locations

lite

logical indicating whether to calculate only point-wise variances (lite = TRUE) or full covariance (lite = FALSE)

return_all

logical indicating whether to return mean and point-wise variance prediction for ALL samples (only available for lite = TRUE)

EI

logical indicating whether to calculate expected improvement (for minimizing the response)

entropy_limit

optional limit state for entropy calculations (separating passes and failures), default value of NULL bypasses entropy calculations

cores

number of cores to utilize in parallel

...

N/A

store_latent

logical indicating whether to store and return mapped values of latent layers (two or three layer models only)

mean_map

logical indicating whether to map hidden layers using the conditional mean (mean_map = TRUE) or a random sample from the full MVN distribution (two or three layer models only); mean_map = FALSE is not yet implemented for fits with vecchia = TRUE

m

size of Vecchia conditioning sets (only for fits with vecchia = TRUE), defaults to the m used for MCMC

ordering_new

optional ordering for the Vecchia approximation; must correspond to rows of x_new, defaults to random, and is applied to all layers in deeper models

Details

All iterations in the object are used for prediction, so samples should be burned-in. Thinning the samples using trim will speed up computation. Posterior moments are calculated using conditional expectation and variance. As a default, only point-wise variance is calculated. Full covariance may be calculated using lite = FALSE.
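
For illustration, a minimal sketch of that workflow is given below. The toy function, design size, and MCMC settings are arbitrary choices for the sketch, not package defaults.

library(deepgp)

f <- function(x) sin(2 * pi * x)                   # toy response for the sketch
x <- matrix(seq(0, 1, length = 10), ncol = 1)      # training inputs
y <- f(x[, 1])                                     # training responses
x_new <- matrix(seq(0, 1, length = 100), ncol = 1) # predictive input locations

fit <- fit_one_layer(x, y, nmcmc = 2000)           # posterior sampling via MCMC
fit <- trim(fit, 1000, 2)                          # drop burn-in, thin remaining samples
fit <- predict(fit, x_new)                         # point-wise variances (lite = TRUE)
fit <- predict(fit, x_new, lite = FALSE)           # full posterior covariance instead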

Expected improvement is calculated with the goal of minimizing the response. See Chapter 7 of Gramacy (2020) for details. Entropy is calculated based on two classes separated by the specified limit. See Sauer (2023, Chapter 3) for details.
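
Continuing the sketch above, both criteria are requested through arguments to predict. The limit value of zero is purely illustrative; entropy_limit should be a threshold that is meaningful for the response.

fit <- predict(fit, x_new, EI = TRUE)               # expected improvement (minimization)
x_star <- x_new[which.max(fit$EI), , drop = FALSE]  # candidate with the largest EI

fit <- predict(fit, x_new, entropy_limit = 0)       # entropy of the pass/fail split at y = 0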

SNOW parallelization reduces computation time but requires more memory storage.
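
Parallelization is requested through the cores argument; the core count below is arbitrary and should match the available hardware.

fit <- predict(fit, x_new, cores = 4)   # SNOW cluster with 4 workers; faster, but results
                                        # for all MCMC samples are held in memory at once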

Value

object of the same class with the following additional elements:

  • x_new: copy of predictive input locations

  • mean: predicted posterior mean, indices correspond to x_new locations

  • s2: predicted point-wise variances, indices correspond to x_new locations (only returned when lite = TRUE)

  • mean_all: predicted posterior mean for each sample (column indices), only returned when return_all = TRUE

  • s2_all: predicted point-wise variances for each sample (column indices), only returned when return_all = TRUE

  • Sigma: predicted posterior covariance, indices correspond to x_new locations (only returned when lite = FALSE)

  • EI: vector of expected improvement values, indices correspond to x_new locations (only returned when EI = TRUE)

  • entropy: vector of entropy values, indices correspond to x_new locations (only returned when entropy_limit is numeric)

  • w_new: list of hidden layer mappings (only returned when store_latent = TRUE), list index corresponds to iteration and row index corresponds to x_new location (two or three layer models only)

  • z_new: list of hidden layer mappings (only returned when store_latent = TRUE), list index corresponds to iteration and row index corresponds to x_new location (three layer models only)

Computation time is added to the computation time of the existing object.
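
Continuing the earlier sketch, the added elements are accessed directly from the returned object (return_all = TRUE is used here only to illustrate the per-sample output):

fit <- predict(fit, x_new, return_all = TRUE)
head(fit$mean)     # posterior mean at x_new
head(fit$s2)       # point-wise variances (lite = TRUE)
dim(fit$s2_all)    # per-sample variances, columns index MCMC samples
plot(fit)          # the package's plot method displays the stored predictions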

References

Sauer, A. (2023). Deep Gaussian process surrogates for computer experiments. *Ph.D. Dissertation, Department of Statistics, Virginia Polytechnic Institute and State University.*

Sauer, A., Gramacy, R.B., & Higdon, D. (2023). Active learning for deep Gaussian process surrogates. *Technometrics, 65,* 4-18. arXiv:2012.08015

Sauer, A., Cooper, A., & Gramacy, R. B. (2023). Vecchia-approximated deep Gaussian processes for computer experiments. *Journal of Computational and Graphical Statistics, 32*(3), 824-837. arXiv:2204.02904

Barnett, S., Beesley, L. J., Booth, A. S., Gramacy, R. B., & Osthus, D. (2024). Monotonic warpings for additive and deep Gaussian processes. *In Review.* arXiv:2408.01540

Examples

# See ?fit_one_layer, ?fit_two_layer, or ?fit_three_layer
# for examples

