validate.SAVE: Validation of a computer model


Description

Assessing the validity of a computer model at a given set of controllable inputs

Usage

## S4 method for signature 'SAVE'
validate(object, newdesign, calibration.value = "mean", prob = 0.90,
         n.burnin = 0, n.thin = 1, tol = 1E-10, ...)

## S4 method for signature 'validate.SAVE'
summary(object)

## S4 method for signature 'summary.validate.SAVE'
show(object)

## S4 method for signature 'validate.SAVE'
plot(x, ...)

Arguments

object

An object of corresponding signature.

x

An object of class validate.SAVE.

newdesign

A named matrix containing the points (controllable inputs) at which predictions are to be performed. Column names must include those in object@controllablenames. Set this argument to NULL when the controllable inputs are constant.

calibration.value

Either the summary of the posterior distribution of the calibration parameters which is taken as the true value for the computer model (possible values are "mean" and "median") or a named data frame with numerical values to be used for the calibration parameters.

prob

The probability used in computing the tolerance bounds and credible intervals.

n.burnin

The burn-in to be applied (see details below).

n.thin

The thinning to be applied (see details below).

tol

The tolerance used in the Cholesky decomposition.

...

Additional arguments to be passed; not yet implemented.

Details

Following the framework for the analysis of computer models by Bayarri et al. (2007), the validation of a computer model translates to the question: is the computer model producing results that are accurate enough for its intended use?

Answering this question implies comparing the responses of the computer model at the ‘true’ value of the calibration parameters (calibration.value) with reality. This comparison should be performed at the set of controllable inputs of interest to the researcher (which in turn represent the ‘intended use’ of the computer model) and which are passed to the function as the newdesign argument.

To carry out this comparison, validate returns a matrix (stored in the @validate slot) containing, for each input point in its rows, the prediction of reality (column "bias.corrected") together with the estimate of the model response, each accompanied by a tolerance bound (columns "tau.bc" and "tau.pm", respectively). These bounds should be interpreted as:

Prob(|estimate - real value| < tau) = prob

Also, the discrepancy between the computer model and reality can be assessed through the estimated bias (column "bias") and the associated 100·prob% credible interval (columns "bias.Lower" and "bias.Upper").

In the calculations, the simulated sample from the posterior distribution stored in object@mcmsample is used. Part of this sample can be discarded by dropping the first n.burnin draws and/or thinning by n.thin.
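As an illustration, the burn-in and thinning arguments could be applied as follows. This is a sketch only: gfsw and xnew denote a fitted SAVE object and a design matrix such as those built in the Examples section.

```r
## Sketch: discard the first 100 posterior draws and keep every 2nd one.
## "gfsw" and "xnew" are assumed to exist (see the Examples section).
valsw <- validate(object = gfsw, newdesign = xnew,
                  n.burnin = 100, n.thin = 2)
```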

The results can be conveniently visualized with the functions summary and plot.
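The @validate slot can also be inspected directly. A minimal sketch, assuming valsw is the result of a validate() call as in the Examples section:

```r
## Sketch: extract the validation matrix and flag input points where the
## 100*prob% credible interval for the bias excludes zero, which suggests
## a systematic discrepancy between the computer model and reality.
res <- valsw@validate
head(res[, c("bias.corrected", "tau.bc", "bias", "bias.Lower", "bias.Upper")])
discrepant <- res[, "bias.Lower"] > 0 | res[, "bias.Upper"] < 0
sum(discrepant)
```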

Value

Returns an S4 object of class validate.SAVE with the following slots:

bayesfitcall:

The call to bayesfit.

call:

The original call to the SAVE function.

newdesign:

A copy of the design passed as newdesign.

validate:

A matrix with the results of the validation analysis (see the Details section).

validatecall:

The call to the validate function.

Author(s)

Jesus Palomo, Rui Paulo and Gonzalo Garcia-Donato

References

Palomo J, Paulo R, Garcia-Donato G (2015). SAVE: An R Package for the Statistical Analysis of Computer Models. Journal of Statistical Software, 64(13), 1-23. Available from http://www.jstatsoft.org/v64/i13/

Bayarri MJ, Berger JO, Paulo R, Sacks J, Cafeo JA, Cavendish J, Lin CH, Tu J (2007). A Framework for Validation of Computer Models. Technometrics, 49, 138-154.

Examples

## Not run: 
library(SAVE)

#############
# load data
#############

data(spotweldfield,package='SAVE')
data(spotweldmodel,package='SAVE')

##############
# create the SAVE object which describes the problem and
# compute the corresponding mle estimates
##############

gfsw <- SAVE(response.name="diameter", controllable.names=c("current", "load", "thickness"), 
			 calibration.names="tuning", field.data=spotweldfield, 
			 model.data=spotweldmodel, mean.formula=~1, 
			 bestguess=list(tuning=4.0))

##############
# obtain the posterior distribution of the unknown parameters 
##############

gfsw <- bayesfit(object=gfsw, prior=c(uniform("tuning", upper=8, lower=0.8)), 
				 n.iter=20000, n.burnin=100, n.thin=2)

##############
# validate the computer model at chosen set of controllable
# inputs
###############

load <- c(4.0,5.3)
curr <- seq(from=20, to=30, length.out=20)
g <- c(1,2)

xnew <- expand.grid(current = curr, load = load, thickness=g)

valsw <- validate(object=gfsw,newdesign=xnew,n.burnin=100)

# summary of results
summary(valsw)
# plot results
plot(valsw)

## End(Not run)
