Uses optimization techniques (either Nelder-Mead or simulated annealing)
to find the optimal scales, or roughness lengths. Function
optimal.scale()
(ie singular) finds the optimal scale on the
assumption that the roughness is isotropic, so all scales are identical.
optimal.scales(val, scales.start, d, use.like = TRUE, give.answers =
FALSE, func=regressor.basis, ...)
optimal.scale(val, d, use.like = TRUE, give.answers =
FALSE, func=regressor.basis, ...)

val 
Matrix with rows corresponding to points at which the function is known 
scales.start 
Initial guess for the scales (plural). See details section for explanation 
d 
vector of observations, one for each row of val 
use.like 
Boolean, with default TRUE 
give.answers 
Boolean, with default FALSE 
func 
Function used to determine basis vectors, defaulting
to regressor.basis 
... 
Extra parameters passed to optim() or optimize()
Internally, this function works with the logarithms of the roughness
lengths, because they are inherently positive. However, note that the
lengths themselves must be supplied to argument scales.start
,
not their logarithms.
The reason that there are two separate functions is that
optim()
and optimize()
are very different.
If give.answers
takes the default value of FALSE
, a
vector of roughness lengths is returned. If TRUE
, output from
optim()
is returned directly (note that element par
is
the logarithm of the desired roughness length, as the optimization
routine operates with the logs of the lengths as detailed above).
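For instance, the lengths themselves can be recovered by exponentiating element par of the returned list. A sketch only, assuming the emulator package is loaded and that val and d.noisy are as set up in the Examples section:

```r
## Sketch: assumes library(emulator) and the val, d.noisy objects
## constructed in the Examples section.
out <- optimal.scales(val=val, scales.start=rep(1,6), d=d.noisy,
                      give.answers=TRUE)
## out$par holds the logarithms of the roughness lengths, so
## exponentiate to obtain the lengths themselves:
exp(out$par)
```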
This function is slow to evaluate because it needs to
calculate and invert A
each time it is called, the
scales changing from call to call.
In this package, “scales” means the diagonal elements of the
B matrix. See the help page for corr
for more
discussion of this topic.
Note the warning about partial matching under the “dotdotdot”
argument in both optim.Rd
[used in optimal.scales()
] and
optimize.Rd
[used in optimal.scale()
]: any unmatched
arguments will be passed to the objective function. Thus, passing
named but unmatched arguments to optimal.scale[s]()
will cause
an error, because those arguments will be passed, by optim()
or
optimize()
, to the (internal) objective.fun()
.
In particular, note that passing control=list(maxit=4)
to
optimal.scale()
will cause an error for this reason
[optimize()
does not take a control
argument].
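Concretely, optimizer options should therefore be supplied to optimal.scales() (which uses optim()), not to optimal.scale(). A sketch, again assuming the setup of the Examples section:

```r
## Fine: optim() accepts a control argument, so optimal.scales()
## forwards it to the optimizer:
optimal.scales(val=val, scales.start=rep(1,6), d=d.noisy,
               control=list(maxit=4))

## Error: optimize() takes no control argument, so it would be passed
## on to the internal objective function, which does not expect it:
## optimal.scale(val=df, d=d.noisy, control=list(maxit=4))
```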
Robin K. S. Hankin
J. Oakley 2004. Estimating percentiles of uncertain computer code outputs. Applied Statistics, 53(1), pp. 89-93.
J. Oakley 1999. Bayesian uncertainty analysis for complex computer codes, PhD thesis, University of Sheffield.
interpolant, corr
## First, define some scales:
fish <- c(1,1,1,1,1,4)
## and a sigmasquared value:
REAL.SIGMASQ <- 0.3
## and a real relation:
real.relation <- function(x){sum( (1:6)*x )}
## Now a design matrix:
val <- latin.hypercube(100,6)
## apply the real relation:
d <- apply(val,1,real.relation)
## and add some suitably correlated Gaussian noise:
A <- corr.matrix(val,scales=fish)
d.noisy <- as.vector(rmvnorm(n=1,mean=apply(val,1,real.relation),REAL.SIGMASQ*A))
## Now see if we can estimate the roughness lengths well. Remember that
## the true values are those held in vector "fish":
optimal.scales(val=val, scales.start=rep(1,6), d=d.noisy,
method="SANN",control=list(trace=1000,maxit=3),
give=FALSE)
# Now a test of optimal.scale(), where there is only a single roughness
# scale to estimate. This should be more straightforward:
df <- latin.hypercube(40,6)
fish2 <- rep(2,6)
A2 <- corr.matrix(df,scales=fish2)
d.noisy <- as.vector(rmvnorm(n=1, mean=apply(df,1,real.relation), sigma=A2))
jj.T <- optimal.scale(val=df,d=d.noisy,use.like=TRUE)
jj.F <- optimal.scale(val=df,d=d.noisy,use.like=FALSE)
