GP_C: Gaussian process calibration

View source: R/GP_C.R

GP_C R Documentation

Gaussian process calibration

Description

Provides a Gaussian process calibrated on experiment design X with data Y and hyperparameters lambda. Different base regression functions are available.

Usage

GP_C(X, Y, lambda, regress = "linear")

Arguments

X

Matrix with n rows corresponding to experiment members and m columns corresponding to the different inputs. It may have a single column, but it must always be in matrix form (use as.matrix if needed).

Y

Vector of n elements corresponding to the outputs. The Gaussian process is currently univariate.

covar

Minus the covariance function. Defaults to exp.

lambda

List made of (1) a vector theta with m elements corresponding to the roughness lengths associated with the input variables, and (2) a scalar nugget (set to 0 in the example below for a nugget-free emulator). See the sketch after this argument list.

regress

One of "constant", "linear" or "quadratic"
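
As an illustration, the arguments might be assembled as follows. This is a minimal sketch: the data are made up, and the theta/nugget element names of lambda are taken from the example at the bottom of this page.

 # illustrative design and output (values are arbitrary)
 X <- matrix(seq(0, 1, length.out = 10), 10, 1)   # 10 design points, 1 input
 Y <- sin(2 * pi * X[, 1])                        # univariate output
 lambda <- list(theta = c(0.3), nugget = 0)       # one roughness length per input, no nugget
 fit <- GP_C(X, Y, lambda, regress = "linear")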

Value

List of

betahat

Linear regression coefficients (posterior mode)

sigma_hat_2

Simulator variance (posterior mode)

R

Design covariance matrix (nugget-free)

Rt

Design covariance matrix (with nugget accounted for)

muX

Input matrix (regression function applied)

X, Y, lambda, funcmu

Same as the inputs

R1X, R1tX

Output dummies used by GP_P

log_REML, loc_pen_REML, nbrr

(Penalised) log-likelihood
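
For instance, the components of the returned list can be inspected by name. This is a minimal sketch, with X and Y as in the Examples section below:

 fit <- GP_C(X, Y, lambda = list(theta = 0.5, nugget = 0), regress = "linear")
 fit$betahat      # regression coefficients (posterior mode)
 fit$sigma_hat_2  # simulator variance (posterior mode)
 fit$Rt           # design covariance matrix, nugget accounted for
 fit$log_REML     # log-likelihood used for calibrating lambda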

Author(s)

Michel Crucifix

References

Jeremy Oakley and Anthony O'Hagan, Bayesian Inference for the Uncertainty Distribution of Computer Model Outputs, Biometrika, 89, 769–784, 2002

Ioannis Andrianakis and Peter G. Challenor, The effect of the nugget on Gaussian process emulators of computer models, Computational Statistics & Data Analysis, 56, 4215–4228, 2012

See Also

L1O, GP_P

Examples

 # univariate example
 X <- matrix(c(1, 2, 3, 4, 5, 6, 7), 7, 1)
 Y <- c(1.1, 2.1, 4.7, 1.3, 7.2, 8, 9)

 # Optimise lambda for each regression model and report the resulting
 # log-likelihood. No nugget is assumed, and the optimisation is done
 # on log(theta) to guarantee positiveness.
 loglik <- function(log_theta, regress)
 {
   -GP_C(X, Y, lambda = list(theta = exp(log_theta), nugget = 0.0),
         regress = regress)$log_REML
 }

 for (r in c("constant", "linear", "quadratic"))
 {
   o <- optimize(loglik, c(-1, 1), regress = r)
   print(sprintf("Model %s, Lambda = %f, LogLik = %f",
                 r, exp(o$minimum), -o$objective))
 }
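
The calibrated emulator for a given model can then be rebuilt from the optimised roughness length. A sketch, reusing o and r from the last iteration of the loop above:

 # refit with the optimised roughness length (here from the last model tried)
 lambda_best <- list(theta = exp(o$minimum), nugget = 0.0)
 fit <- GP_C(X, Y, lambda = lambda_best, regress = r)
 fit$log_REML   # should match -o$objective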

