GP_C (R Documentation)

Description
Provides a Gaussian process calibrated on experiment design X with data Y and hyperparameters lambda. Different base regression functions are available.
Usage

GP_C(X, Y, lambda, regress = "linear")

Arguments
X: Matrix of design points (the experiment design), with one row per design point and one column per input variable.

Y: Currently the Gaussian process is univariate, so Y is a vector with one element per design point (row of X).

covar: Minus the covariance function (a default is used when not supplied).

lambda: List made of (1) a vector theta, with m elements corresponding to the roughness lengths associated with the input variables, and (2) a scalar nugget (see the example below).

regress: One of "constant", "linear" or "quadratic".
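The regress options correspond to polynomial mean bases of increasing order. A minimal sketch of the design matrices they imply, assuming a simple intercept / linear / squared-term basis (the helper make_basis is illustrative only and is not part of the package, whose internal basis may differ):

# Illustrative only: build a regression basis H for an n x m design X.
make_basis <- function(X, regress = c("constant", "linear", "quadratic")) {
  regress <- match.arg(regress)
  n <- nrow(X)
  switch(regress,
    constant  = matrix(1, n, 1),   # intercept only
    linear    = cbind(1, X),       # intercept + inputs
    quadratic = cbind(1, X, X^2)   # intercept + inputs + squared inputs
  )
}

X <- matrix(1:6, 6, 1)
dim(make_basis(X, "quadratic"))   # 6 rows, 3 columns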
Value

A list with components:

betahat: Linear regression coefficients (posterior mode).

sigma_hat_2: Simulator variance (posterior mode).

R: Design covariance matrix (nugget free).

Rt: Design covariance matrix (with nugget accounted for).

muX: Input matrix with the regression function applied.

X, Y, lambda, funcmu: Same as the inputs.

R1X, R1tX: Output dummies used by downstream routines.

log_REML, log_pen_REML, nbrr: (Penalised) restricted log-likelihood.
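The returned matrices R and Rt differ only by the nugget term on the diagonal. A hedged sketch, assuming a squared-exponential correlation with roughness lengths theta (the corr_matrix helper is illustrative; GP_C's actual covariance function may differ in detail):

# Illustrative: squared-exponential correlation with roughness lengths theta,
# plus an optional nugget on the diagonal.
corr_matrix <- function(X, theta, nugget = 0) {
  n <- nrow(X)
  R <- matrix(1, n, n)
  for (k in seq_len(ncol(X))) {
    d <- outer(X[, k], X[, k], "-")      # pairwise differences in input k
    R <- R * exp(-(d / theta[k])^2)      # product over input dimensions
  }
  R + nugget * diag(n)                   # nugget = 0 gives the nugget-free R
}

X  <- matrix(c(1, 2, 3), 3, 1)
R  <- corr_matrix(X, theta = 1)          # analogue of the R component
Rt <- corr_matrix(X, theta = 1, 0.01)    # analogue of the Rt component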
Author(s)

Michel Crucifix
References

Jeremy Oakley and Anthony O'Hagan (2002). Bayesian Inference for the Uncertainty Distribution of Computer Model Outputs. Biometrika, 89, 769–784.

Ioannis Andrianakis and Peter G. Challenor (2012). The effect of the nugget on Gaussian process emulators of computer models. Computational Statistics & Data Analysis, 56, 4215–4228.
See Also

L1O, GP_P
Examples

# Univariate example
X <- matrix(c(1, 2, 3, 4, 5, 6, 7), 7, 1)
Y <- c(1.1, 2.1, 4.7, 1.3, 7.2, 8, 9)

# Attempt to optimise lambda for different regression models and
# report the resulting log-likelihood. Assumes no nugget, and
# optimises over log(theta) to guarantee positivity.
loglik <- function(theta) {
  -GP_C(X, Y, lambda = list(theta = exp(theta), nugget = 0.0),
        regress = regress)$log_REML
}

for (regress in c("constant", "linear", "quadratic")) {
  o <- optimize(loglik, c(-1, 1))
  print(sprintf("Model %s, lambda = %f, log-likelihood = %f",
                regress, exp(o$minimum), -o$objective))
}