Maintainer: Annie S. Booth (annie_booth@ncsu.edu)
Performs Bayesian posterior inference for deep Gaussian processes following Sauer, Gramacy, and Higdon (2023). See Sauer (2023) for comprehensive methodological details and https://bitbucket.org/gramacylab/deepgp-ex/ for a variety of coding examples. Models are trained via MCMC, combining elliptical slice sampling of the latent Gaussian layers with Metropolis-Hastings sampling of the kernel hyperparameters. A Vecchia approximation is implemented for faster computation, following Sauer, Cooper, and Gramacy (2023). Optional monotonic warpings are implemented following Barnett et al. (2024). Downstream tasks include sequential design through active learning Cohn and integrated mean squared error criteria (ALC/IMSE; Sauer, Gramacy, and Higdon, 2023), optimization through expected improvement (EI; Gramacy, Sauer, and Wycoff, 2022), and contour location through entropy (Booth, Renganathan, and Gramacy, 2024). Models extend up to three layers deep; a one-layer model is equivalent to standard Gaussian process regression. The package incorporates OpenMP and SNOW parallelization and uses C/C++ under the hood.
Run help("deepgp-package")
or help(package = "deepgp")
for more information.
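
For orientation, here is a minimal sketch of a typical workflow; the toy data, MCMC counts, and burn-in settings below are illustrative assumptions, not package defaults.

    library(deepgp)

    # Toy one-dimensional data (hypothetical)
    x <- matrix(seq(0, 1, length = 30), ncol = 1)
    y <- sin(8 * x[, 1]) + rnorm(30, sd = 0.05)

    # Fit a two-layer deep GP via MCMC, then discard burn-in and thin
    fit <- fit_two_layer(x, y, nmcmc = 8000)
    fit <- trim(fit, 4000, 4)

    # Posterior prediction at new locations
    x_new <- matrix(seq(0, 1, length = 100), ncol = 1)
    fit <- predict(fit, x_new)
    plot(fit)  # plot posterior predictive summary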

Sauer, A. (2023). Deep Gaussian process surrogates for computer experiments. Ph.D. Dissertation, Department of Statistics, Virginia Polytechnic Institute and State University. http://hdl.handle.net/10919/114845
Sauer, A., Gramacy, R. B., & Higdon, D. (2023). Active learning for deep Gaussian process surrogates. Technometrics, 65, 4-18. arXiv:2012.08015
Sauer, A., Cooper, A., & Gramacy, R. B. (2023). Vecchia-approximated deep Gaussian processes for computer experiments. Journal of Computational and Graphical Statistics, 1-14. arXiv:2204.02904
Gramacy, R. B., Sauer, A., & Wycoff, N. (2022). Triangulation candidates for Bayesian optimization. Advances in Neural Information Processing Systems (NeurIPS), 35, 35933-35945. arXiv:2112.07457
Booth, A. S., Renganathan, S. A., & Gramacy, R. B. (2024). Contour location for reliability in airfoil simulation experiments using deep Gaussian processes. In Review. arXiv:2308.04420
Barnett, S., Beesley, L. J., Booth, A. S., Gramacy, R. B., & Osthus, D. (2024). Monotonic warpings for additive and deep Gaussian processes. In Review. arXiv:2408.01540

What's new in version 1.1.3?
* Added the option of monotonic warpings via monowarp = TRUE in fit_two_layer (see the sketch after this list). Monotonic warpings trigger separable lengthscales on the outer layer.
* Updated the default nugget estimation setting (true_g = NULL) in fit_one_layer.
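
A sketch of the new warping option, reusing the toy x and y from the workflow above (the MCMC count is an illustrative assumption):

    # Two-layer fit with monotonic warpings; separable lengthscales on the
    # outer layer are triggered automatically
    fit <- fit_two_layer(x, y, nmcmc = 5000, monowarp = TRUE)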

What's new in version 1.1.2?
* Added checks on user-specified orderings (ordering argument in fit functions); see the sketch after this list.
* lite = TRUE predictions have been sped up by avoiding the cov(t(mu_t)) computation altogether (this is only necessary for lite = FALSE).
* Sped up d_new calculations by using the diag_quad_mat Cpp function more often.
* Removed the clean_prediction function, as it was no longer needed.
* Fixed a bug in fit_one_layer with vecchia = TRUE and sep = TRUE caused by the arma::mat covmat initialization in the vecchia.cpp file.
* Fixed a bug in predict.dgp2 with return_all = TRUE (replaced out with object - thanks Steven Barnett!).
* Fixed a bug involving ll in continue functions (thanks Sebastien Coube!).
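
A sketch of supplying a custom ordering, reusing the toy data above and assuming the ordering argument accepts a permutation of the row indices of x:

    # User-specified (here, random) ordering for the Vecchia approximation;
    # the added checks will catch invalid orderings
    ord <- sample(nrow(x))
    fit <- fit_one_layer(x, y, nmcmc = 5000, vecchia = TRUE, ordering = ord)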

What's new in version 1.1.1?
* Entropy calculations may now be triggered by specifying an entropy_limit in any of the predict functions.
* EI calculations may now be combined with return_all = TRUE.
* predict functions no longer return s2_smooth or Sigma_smooth. If desired, these quantities may be calculated by subtracting tau2 * g from the diagonal.
* The vecchia = TRUE option may now utilize either the Matern (cov = "matern") or squared exponential (cov = "exp2") kernel.
* Added the option of cores = 1 in predict, ALC, and IMSE functions (helps to avoid a SNOW conflict when running multiple instances on the same machine).
* In fit_two_layer, the intermediate latent layer may now have either a prior mean of zero (default) or a prior mean equal to x (pmx = TRUE); see the sketch after this list. If pmx is set to a constant, this will be the scale parameter on the inner Gaussian layer.
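
A sketch combining two of these options, with the toy data and x_new from above (the zero entropy limit and single-core setting are illustrative assumptions):

    # Two-layer fit with the latent layer's prior mean set to x
    fit <- fit_two_layer(x, y, nmcmc = 5000, pmx = TRUE)
    fit <- trim(fit, 2500, 2)

    # Prediction with entropy tracked relative to a limiting response value,
    # on a single core to avoid SNOW conflicts
    fit <- predict(fit, x_new, entropy_limit = 0, cores = 1)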

What's new in version 1.1.0?
* Added sep = TRUE in fit_one_layer to fit a GP with separable/anisotropic lengthscales (see the sketch after this list).
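
A sketch of the separable option (the two-dimensional toy inputs are an illustrative assumption):

    # Anisotropic one-layer GP: one lengthscale per input dimension
    x2 <- matrix(runif(60), ncol = 2)
    y2 <- sin(4 * x2[, 1]) + x2[, 2]
    fit <- fit_one_layer(x2, y2, nmcmc = 5000, sep = TRUE)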

What's new in version 1.0.1?

What's new in version 1.0.0?
* Implements the Vecchia approximation (vecchia = TRUE in fit functions) for faster computation; see the sketch after this list. The speed of this implementation relies on OpenMP parallelization (make sure the -fopenmp flag is present with package installation).
* tau2 is now calculated at the time of MCMC, not at the time of prediction. This avoids some extra calculations.
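
A sketch of a Vecchia-approximated fit, reusing the toy data above (the conditioning-set size argument m shown here is an assumption about its name and a typical value):

    # Vecchia-approximated fit; speed relies on OpenMP, so make sure the
    # -fopenmp flag was present when the package was installed
    fit <- fit_one_layer(x, y, nmcmc = 5000, vecchia = TRUE, m = 25)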

What's new in version 0.3.0?
* The Matern kernel is now the default covariance, with the choice of smoothness v = 0.5, v = 1.5, or v = 2.5 (default). The squared exponential kernel is still required for use with ALC and IMSE (set cov = "exp2" in fit functions).
* Expected improvement may now be computed with EI = TRUE inside predict calls (see the sketch after this list). EI calculations are nugget-free and are for minimizing the response (negate y if maximization is desired).
* Latent layer mappings may now be stored with store_latent = TRUE inside predict.
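
Finally, a sketch of expected improvement in prediction; reuse of the earlier fit and grid is assumed, as is the EI element name on the returned object:

    # Predict with expected improvement (EI targets minimization; negate y
    # beforehand if maximization is desired)
    fit <- predict(fit, x_new, EI = TRUE)
    x_new[which.max(fit$EI), ]  # candidate with the highest EI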