View source: R/AdditiveKriging.R
Description

Constrained ML optimization for kernels defined by cliques, using constrOptim.
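Following the reference below, the kernel can be read, roughly, as a weighted sum of kernels over the cliques in cl, k(x, x') = sigma^2 * sum_C alpha_C * k_C(x_C, x'_C), with separate range parameters theta for each clique and weights alpha constrained to lie strictly between 0 and 1 (see eps.Var); these constraints on alpha are the reason the likelihood is maximized with the constrained optimizer constrOptim. (This is a paraphrase of the parameterization described in the reference, not a verbatim statement of the implementation.)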
Usage

kmAdditive(x, y, n.initial.tries = 50, limits = NULL, eps.R = 1e-08, cl,
           covtype = "gauss", eps.Var = 1e-06, max.it = 1000, iso = FALSE)
Arguments

x
    a design matrix of input variables; the number of columns should equal the number of input variables

y
    a vector of output values, one per row of x

n.initial.tries
    number of random initial parameter vectors tried in the optimization, defaults to 50

limits
    a list with items lower and upper containing boundaries for the covariance parameter vector theta, defaults to NULL

eps.R
    small positive number giving the nugget effect added to the diagonal of the covariance matrix, defaults to 1e-08

cl
    list of cliques defining the kernel (one integer vector of variable indices per clique, as in the Examples below)

covtype
    an optional character string specifying the covariance structure to be used, to be chosen between "gauss", "matern5_2", "matern3_2", "exp" or "powexp"

eps.Var
    small positive number providing the limits for the alpha parameters in order to guarantee strict inequalities (0 + eps.Var <= alpha <= 1 - eps.Var), defaults to 1e-06

max.it
    maximum number of iterations of the optimization, defaults to 1000

iso
    boolean vector with one entry per clique, indicating whether that clique is isotropic (TRUE) or anisotropic (FALSE); defaults to FALSE (see the short sketch after this list)
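A minimal sketch (using only the interface documented above and the values from the Examples below) of how cl and iso fit together: each clique is an integer vector of column indices of x, and iso supplies one flag per clique.

cl  <- list(c(2), c(1, 3))   # two cliques: {2} and {1, 3}
iso <- c(FALSE, TRUE)        # one flag per clique: only {1, 3} is treated isotropically
# parameter <- kmAdditive(x, y, cl = cl, iso = iso)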
Value

A list of estimated parameters 'alpha' and 'theta', corresponding to the clique structure in 'cl'.
Author(s)

T. Muehlenstaedt, O. Roustant, J. Fruth
References

Muehlenstaedt, T., Roustant, O., Carraro, L. and Kuhnt, S. (2011) Data-driven Kriging models based on FANOVA-decomposition. Statistics and Computing.
Examples

### example for the Ishigami function with cliques {1,3} and {2}
d <- 3
x <- matrix(runif(100 * d, -pi, pi), ncol = d)
y <- ishigami.fun(x)
cl <- list(c(2), c(1,3))
# constrained ML optimization with the kernel defined by the cliques
parameter <- kmAdditive(x, y, cl = cl)
# prediction with the new model
xpred <- matrix(runif(500 * d, -pi, pi), ncol = d)
ypred <- predictAdditive(xpred, x, y, parameter, cl=cl)
yexact <- ishigami.fun(xpred)
# rmse
sqrt(mean((ypred[, 1] - yexact)^2))
# scatterplot
par(mfrow=c(1,1))
plot(yexact, ypred[,1], asp = 1)
abline(0, 1)
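# optional check (not part of the original example): inspect the fitted parameter
# list; it contains the estimated 'alpha' and 'theta' for each clique in cl
str(parameter)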
### compare to one single clique {1,2,3}
cl <- list(c(1,2,3))
# constrained ML optimization with the kernel defined by the single clique
parameter <- kmAdditive(x, y, cl = cl)
# prediction with the new model
ypred <- predictAdditive(xpred, x, y, parameter, cl=cl)
# rmse
sqrt(mean((ypred$mean - yexact)^2))
# scatterplot
par(mfrow=c(1,1))
plot(yexact, ypred$mean, asp = 1)
abline(0, 1)
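# optional sketch (not from the original example): refit with a Matern 5/2 kernel;
# this assumes predictAdditive accepts the same covtype argument as kmAdditive,
# so check its help page before relying on it
cl <- list(c(2), c(1, 3))
parameter <- kmAdditive(x, y, cl = cl, covtype = "matern5_2")
ypred <- predictAdditive(xpred, x, y, parameter, cl = cl, covtype = "matern5_2")
sqrt(mean((ypred$mean - yexact)^2))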
### isotropic cliques
cl <- list(c(2),c(1,3))
parameter <- kmAdditive(x, y, cl = cl, iso=c(FALSE,TRUE))
ypred <- predictAdditive(xpred, x, y, parameter, cl=cl, iso=c(FALSE,TRUE))
sqrt(mean((ypred$mean - yexact)^2))
# the same result, since the first clique has length 1
parameter <- kmAdditive(x, y, cl = cl, iso=c(TRUE,TRUE))
ypred <- predictAdditive(xpred, x, y, parameter, cl=cl, iso=c(TRUE,TRUE))
sqrt(mean((ypred$mean - yexact)^2))
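# optional sketch (not from the original example): the optimization settings documented
# above can be varied, e.g. fewer random restarts and a lower iteration cap for a quicker fit
cl <- list(c(2), c(1, 3))
parameter <- kmAdditive(x, y, cl = cl, n.initial.tries = 10, max.it = 200)
ypred <- predictAdditive(xpred, x, y, parameter, cl = cl)
sqrt(mean((ypred$mean - yexact)^2))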