predict    R Documentation

Description

Acts on a gp, dgp2, dgp3, gpvec, dgp2vec, or dgp3vec object.
Calculates the posterior mean and variance/covariance over specified
predictive input locations. Optionally calculates expected improvement
(EI) or entropy over candidate inputs. Optionally utilizes SNOW
parallelization.

Usage
## S3 method for class 'gp'
predict(
object,
x_new,
lite = TRUE,
return_all = FALSE,
EI = FALSE,
entropy_limit = NULL,
cores = 1,
...
)
## S3 method for class 'dgp2'
predict(
object,
x_new,
lite = TRUE,
store_latent = FALSE,
mean_map = TRUE,
return_all = FALSE,
EI = FALSE,
entropy_limit = NULL,
cores = 1,
...
)
## S3 method for class 'dgp3'
predict(
object,
x_new,
lite = TRUE,
store_latent = FALSE,
mean_map = TRUE,
return_all = FALSE,
EI = FALSE,
entropy_limit = NULL,
cores = 1,
...
)
## S3 method for class 'gpvec'
predict(
object,
x_new,
m = object$m,
ordering_new = NULL,
lite = TRUE,
return_all = FALSE,
EI = FALSE,
entropy_limit = NULL,
cores = 1,
...
)
## S3 method for class 'dgp2vec'
predict(
object,
x_new,
m = object$m,
ordering_new = NULL,
lite = TRUE,
store_latent = FALSE,
mean_map = TRUE,
return_all = FALSE,
EI = FALSE,
entropy_limit = NULL,
cores = 1,
...
)
## S3 method for class 'dgp3vec'
predict(
object,
x_new,
m = object$m,
ordering_new = NULL,
lite = TRUE,
store_latent = FALSE,
mean_map = TRUE,
return_all = FALSE,
EI = FALSE,
entropy_limit = NULL,
cores = 1,
...
)
Arguments

object
a fitted object from fit_one_layer, fit_two_layer, or fit_three_layer

x_new
matrix of predictive input locations

lite
logical indicating whether to calculate only pointwise
variances (lite = TRUE) or the full covariance (lite = FALSE)

return_all
logical indicating whether to return mean and pointwise
variance predictions for ALL samples (only available for lite = TRUE)

EI
logical indicating whether to calculate expected improvement
(for minimizing the response)

entropy_limit
optional limit state for entropy calculations (separating
passes and failures); the default value of NULL skips entropy
calculations

cores
number of cores to utilize in parallel

...
N/A

store_latent
logical indicating whether to store and return mapped values of
latent layers (two or three layer models only)

mean_map
logical indicating whether to map hidden layers using the
conditional mean (mean_map = TRUE) or a sample from the full
conditional distribution (two or three layer models only)

m
size of Vecchia conditioning sets (only for fits with Vecchia
approximation); defaults to the m of the fitted object

ordering_new
optional ordering for the Vecchia approximation; must correspond
to the rows of x_new
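As a hedged sketch of how these flags are used (assuming a fitted object fit with burn-in already removed and a predictive matrix x_new; results here are illustrative, not definitive):

```r
# `fit` is assumed to come from fit_one_layer() or a deeper variant
p1 <- predict(fit, x_new)                     # pointwise variances in p1$s2
p2 <- predict(fit, x_new, lite = FALSE)       # full covariance in p2$Sigma
p3 <- predict(fit, x_new, EI = TRUE)          # expected improvement in p3$EI
p4 <- predict(fit, x_new, entropy_limit = 0)  # entropy in p4$entropy
```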
Details

All iterations in the object are used for prediction, so samples
should be burned-in. Thinning the samples using trim will speed
up computation. Posterior moments are calculated using conditional
expectation and variance. By default, only pointwise variances are
calculated. The full covariance may be calculated using lite = FALSE.

Expected improvement is calculated with the goal of minimizing the
response. See Chapter 7 of Gramacy (2020) for details. Entropy is
calculated based on two classes separated by the specified limit.
See Sauer (2023, Chapter 3) for details.

SNOW parallelization reduces computation time but requires more memory
storage.
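The burn-in and thinning workflow described above can be sketched as follows (a minimal sketch: fit_one_layer, trim, and predict are from this package, while the toy function and the nmcmc/burn/thin settings are illustrative assumptions):

```r
library(deepgp)
x <- matrix(seq(0, 1, length = 10), ncol = 1)
y <- sin(4 * pi * x[, 1])
fit <- fit_one_layer(x, y, nmcmc = 1000)   # posterior sampling
fit <- trim(fit, 500, 2)                   # drop burn-in, thin by 2
x_new <- matrix(seq(0, 1, length = 100), ncol = 1)
fit <- predict(fit, x_new, cores = 2)      # adds fit$mean and fit$s2
```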
Value

object of the same class with the following additional elements:

x_new: copy of predictive input locations

mean: predicted posterior mean, indices correspond to x_new locations

s2: predicted pointwise variances, indices correspond to x_new
locations (only returned when lite = TRUE)

mean_all: predicted posterior mean for each sample (column indices),
only returned when return_all = TRUE

s2_all: predicted pointwise variances for each sample (column
indices), only returned when return_all = TRUE

Sigma: predicted posterior covariance, indices correspond to x_new
locations (only returned when lite = FALSE)

EI: vector of expected improvement values, indices correspond to
x_new locations (only returned when EI = TRUE)

entropy: vector of entropy values, indices correspond to x_new
locations (only returned when entropy_limit is numeric)

w_new: list of hidden layer mappings (only returned when
store_latent = TRUE), list index corresponds to iteration and row
index corresponds to x_new location (two or three layer models only)

z_new: list of hidden layer mappings (only returned when
store_latent = TRUE), list index corresponds to iteration and row
index corresponds to x_new location (three layer models only)

Computation time is added to the computation time of the existing
object.
References

Gramacy, R. B. (2020). Surrogates: Gaussian Process Modeling, Design,
and Optimization for the Applied Sciences. *Chapman & Hall/CRC.*

Sauer, A. (2023). Deep Gaussian process surrogates for computer experiments.
*Ph.D. Dissertation, Department of Statistics, Virginia Polytechnic Institute and State University.*

Sauer, A., Gramacy, R. B., & Higdon, D. (2023). Active learning for deep
Gaussian process surrogates. *Technometrics, 65,* 4-18. arXiv:2012.08015

Sauer, A., Cooper, A., & Gramacy, R. B. (2023). Vecchia-approximated deep Gaussian
processes for computer experiments.
*Journal of Computational and Graphical Statistics,* 1-14. arXiv:2204.02904
Examples

# See "fit_one_layer", "fit_two_layer", or "fit_three_layer"
# for an example
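A hedged two-layer sketch (function names are from this package; the toy data and MCMC settings are illustrative assumptions, not a definitive workflow):

```r
library(deepgp)
x <- matrix(seq(0, 1, length = 12), ncol = 1)
y <- ifelse(x[, 1] < 0.5, sin(8 * x[, 1]), cos(8 * x[, 1]))
fit <- fit_two_layer(x, y, nmcmc = 1000)
fit <- trim(fit, 500, 2)
x_new <- matrix(seq(0, 1, length = 50), ncol = 1)
fit <- predict(fit, x_new, store_latent = TRUE)  # fit$w_new holds latent maps
```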