Three plots are currently available, based on the leaveOneOut.km results: a plot of fitted values against response values, a plot of standardized residuals, and a normal Q-Q plot of standardized residuals.
x: an object of class "km" without noisy observations.
y: not used.
kriging.type: an optional character string corresponding to the kriging family, to be chosen between simple kriging ("SK") and universal kriging ("UK").
trend.reestim: should the trend be re-estimated when removing an observation? Defaults to FALSE.
...: no other argument for this method.
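For concreteness, here is a minimal sketch of calls spelling out the optional arguments, assuming a fitted km object m1 such as the one built in the Examples below:

plot(m1, kriging.type = "SK")    # leave-one-out diagnostics with simple kriging formulas
plot(m1, trend.reestim = TRUE)   # re-estimate the trend coefficients at each deletion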
The diagnostic plot has not been implemented yet for noisy observations. The standardized residuals are defined by ( y(x_i) - yhat_{-i}(x_i) ) / sigmahat_{-i}(x_i), where y(x_i) is the response at the point x_i, yhat_{-i}(x_i) is the fitted value obtained when the observation at x_i is removed (see leaveOneOut.km), and sigmahat_{-i}(x_i) is the corresponding kriging standard deviation.
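As an illustration, these standardized residuals can be recomputed directly from the leaveOneOut.km results. The sketch below assumes the fitted model m1 from the Examples section, using universal kriging without trend re-estimation:

loo <- leaveOneOut.km(m1, type = "UK", trend.reestim = FALSE)
std.res <- (m1@y - loo$mean) / loo$sd   # standardized leave-one-out residuals
qqnorm(std.res)                         # same quantity as in the Q-Q plot drawn by plot(m1)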
A list composed of:

mean: a vector of length n whose ith coordinate is the kriging mean (including the trend) at the ith observation, computed with that observation removed from the learning set,
sd: a vector of length n whose ith coordinate is the kriging standard deviation at the ith observation, computed with that observation removed from the learning set,

where n is the total number of observations.
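Since the method returns this list, the leave-one-out quantities can be retrieved along with the plots; a minimal sketch, again assuming the model m1 from the Examples:

res <- plot(m1)   # draws the diagnostic plots and returns the leave-one-out results
head(res$mean)    # leave-one-out kriging means at the first observations
head(res$sd)      # corresponding leave-one-out kriging standard deviations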
Kriging parameters are not re-estimated when removing one observation: with few points, the re-estimated values can be far from those obtained with the entire learning set. One option is to re-estimate the trend coefficients only, by setting trend.reestim=TRUE.
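To check how much the trend re-estimation matters for a given model, the two leave-one-out computations can be compared directly; a sketch assuming the model m1 from the Examples:

loo.fixed <- leaveOneOut.km(m1, type = "UK", trend.reestim = FALSE)
loo.reest <- leaveOneOut.km(m1, type = "UK", trend.reestim = TRUE)
max(abs(loo.fixed$mean - loo.reest$mean))   # largest change in the leave-one-out means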
O. Roustant, D. Ginsbourger, Ecole des Mines de St-Etienne.
N.A.C. Cressie (1993), Statistics for spatial data, Wiley series in probability and mathematical statistics.
J.D. Martin and T.W. Simpson (2005), Use of kriging models to approximate deterministic computer models, AIAA Journal, 43 no. 4, 853-863.
M. Schonlau (1997), Computer experiments and global optimization, Ph.D. thesis, University of Waterloo.
See also: predict,km-method, leaveOneOut.km.
# A 2D example - Branin-Hoo function
# a 16-point factorial design, and the corresponding response
d <- 2; n <- 16
fact.design <- expand.grid(seq(0, 1, length = 4), seq(0, 1, length = 4))
fact.design <- data.frame(fact.design); names(fact.design) <- c("x1", "x2")
branin.resp <- data.frame(branin(fact.design)); names(branin.resp) <- "y"

# kriging model 1 : gaussian covariance structure, trend given by the
# formula ~.^2 (i.e. ~x1 + x2 + x1:x2), no nugget effect
m1 <- km(~.^2, design = fact.design, response = branin.resp, covtype = "gauss")
plot(m1)                       # LOO without parameter re-estimation
plot(m1, trend.reestim = TRUE) # LOO with trend parameter re-estimation
                               # (gives nearly the same result here)

Example output:
optimisation start
------------------
* estimation method : MLE
* optimisation method : BFGS
* analytical gradient : used
* trend model : ~x1 + x2 + x1:x2
* covariance model :
- type : gauss
- nugget : NO
- parameters lower bounds : 1e-10 1e-10
- parameters upper bounds : 2 2
- best initial criterion value(s) : -74.39968
N = 2, M = 5 machine precision = 2.22045e-16
At X0, 0 variables are exactly at the bounds
At iterate 0 f= 74.4 |proj g|= 0.68076
At iterate 1 f = 73.97 |proj g|= 0.6247
At iterate 2 f = 73.538 |proj g|= 1.4091
At iterate 3 f = 72.79 |proj g|= 1.3821
At iterate 4 f = 72.065 |proj g|= 1.2986
At iterate 5 f = 72.004 |proj g|= 0.75998
At iterate 6 f = 71.998 |proj g|= 0.081582
At iterate 7 f = 71.997 |proj g|= 0.0030562
At iterate 8 f = 71.997 |proj g|= 1.3045e-05
iterations 8
function evaluations 10
segments explored during Cauchy searches 9
BFGS updates skipped 0
active bounds at final generalized Cauchy point 1
norm of the final projected gradient 1.30452e-05
final function value 71.9975
F = 71.9975
final value 71.997485
converged