| LKrig.MLE | R Documentation | 
Given a list of different covariance parameter settings for the LatticeKrig covariance model, these functions compute the likelihood, or a profiled version (over lambda), and approximate a generalized cross-validation function at each setting. LKrig.MLE is an experimental function that has been used productively with a Latin hypercube design package to search the LatticeKrig covariance parameter space efficiently.
LKrigFindLambda(x, y, Z = NULL, U = NULL, X = NULL, ..., LKinfo,
                 use.cholesky = NULL, lambda.profile = TRUE,
                 lowerBoundLogLambda = -16, tol = 0.005, verbose =
                 FALSE)
			    
LKrigFindLambdaAwght(x, y, ..., LKinfo, use.cholesky = NULL,
                 lowerBoundLogLambda = -16, upperBoundLogLambda = 4,
                 lowerBoundOmega = -3, upperBoundOmega = 0.75, factr =
                 1e+07, pgtol=1e-1, maxit = 15, verbose = FALSE)
                                 
LambdaAwghtObjectiveFunction(PARS, LKrigArgs, capture.env, verbose=FALSE )
LKrig.MLE( x,y,..., LKinfo, use.cholesky = NULL,
                    par.grid=NULL,
                    lambda.profile=TRUE,
                    verbose=FALSE,
                    lowerBoundLogLambda = -16,
                    nTasks = 1, taskID = 1,
                    tol = 0.005)
LKrig.make.par.grid(par.grid=NULL, LKinfo = NULL) 
omega2Awght (omega, LKinfo)
Awght2Omega (Awght, LKinfo)
| Awght | Value of Awght parameter to convert to omega form | 
| capture.env | The environment in which to save each likelihood evaluation (used by LambdaAwghtObjectiveFunction to record the evaluations). | 
| lambda.profile | A logical value controlling whether the likelihood is maximized over lambda. For LKrigFindLambda, if TRUE the likelihood is maximized over lambda at the covariance values in LKinfo; if FALSE the likelihood is just evaluated at LKinfo, including the lambda value in this list. For LKrig.MLE, if TRUE then for each set of parameters in par.grid the value of lambda that maximizes the likelihood is found. In this case the llambda value is the starting value for the optimizer. If llambda[k] is NA then the lambda value found from the k-1 maximization is used as the starting value for the k step. | 
| LKinfo | An LKinfo object that specifies the LatticeKrig covariance. Usually this is obtained by a call to LKrigSetup. | 
| LKrigArgs | Argument list to call LKrig. | 
| lowerBoundLogLambda | Lower limit for log lambda in searching for the MLE. | 
| lowerBoundOmega | Lower limit for omega in searching for MLE. | 
| maxit | Maximum number of iterations (passed to optim function) | 
| nTasks | If using Rmpi the number of slaves available. | 
| omega | Value of the omega parameter to convert to the a.wght format. | 
| PARS | PARS[1] = log(lambda) and PARS[2] = .5*log(a.wght - 4) (also referred to as omega). | 
| par.grid | A list with components llambda, alpha, and a.wght giving the different sets of parameters to evaluate. If M is the number of parameter settings to evaluate, llambda is a vector of length M and alpha and a.wght are matrices with M rows and nlevel columns. Thus the kth trial has parameters par.grid$llambda[k], par.grid$alpha[k,], and par.grid$a.wght[k,]. Currently this function does not support passing a non-stationary spatial parameterization for alpha. The LKinfo object details the other parts of the covariance specification (e.g. number of levels, grid sizes) that do not change. Note that par.grid assumes ln lambda, not lambda. See details below for some other features of the par.grid arguments. | 
| factr | Controls convergence for the L-BFGS-B method (passed to optim). | 
| pgtol | Tolerance on gradient convergence for L-BFGS-B (passed to optim). This seems to influence the number of iterations the most. | 
| tol | Tolerance on the log likelihood used to determine convergence. | 
| taskID | If using Rmpi the slave id. | 
| verbose | If TRUE prints out intermediate results. | 
| upperBoundOmega | Upper limit for omega in searching for MLE. | 
| upperBoundLogLambda | Upper limit for log lambda in searching for MLE. | 
| use.cholesky | If not NULL then this object is used as the symbolic Cholesky decomposition of the covariance matrix in computing the likelihood. | 
| U | The U matrix when fitting an inverse problem model. | 
| x | The spatial locations. | 
| X | The X matrix when fitting an inverse problem model. | 
| y | The observations. | 
| Z | Matrix of additional covariates for the fixed part of the model. | 
| ... | Any arguments to be passed to LKrig. | 
LKrigFindLambda: Uses the simple one dimensional optimizer optimize to maximize the log likelihood over log lambda in the range llambda.start + [-8, 5]. This function is used to determine lambda in LatticeKrig.
LKrigFindLambdaAwght: Uses the optimizer optim to maximize the log likelihood jointly over lambda and a.wght within the upper and lower bounds given by the arguments above.
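As a minimal sketch of how these two functions are called (the data subset and the small NC value only mirror the quick examples below; exact summary contents depend on the package version):
  library(LatticeKrig)
  data(NorthAmericanRainfall)
  x <- cbind(NorthAmericanRainfall$longitude, NorthAmericanRainfall$latitude)
  y <- log10(NorthAmericanRainfall$precip)
# small single level model so the sketch runs quickly
  LKinfo <- LKrigSetup(x, NC = 5, nlevel = 1, a.wght = 5, alpha = 1.0)
# one dimensional search over log lambda (optimize is used internally)
  fitLambda <- LKrigFindLambda(x, y, LKinfo = LKinfo)
  fitLambda$summary
# joint search over lambda and a.wght (optim is used internally)
  fitBoth <- LKrigFindLambdaAwght(x, y, LKinfo = LKinfo)
  fitBoth$summary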
LKrig.MLE: This is a simple wrapper function to accomplish repeated calls to the LKrig function to evaluate the profile likelihood and/or to optimize the likelihood over the lambda parameter. The main point is that maximization over the lambda parameter (or equivalently over tau and sigma2) is the most important and should be done before considering variation of other parameters. If lambda is specified then one has closed form expressions for tau and sigma2 that can be substituted back into the full log likelihood. This operation, the default throughout LatticeKrig (and fields), concentrates the likelihood on a reduced set of parameters. The further refinement when lambda.profile == TRUE is to maximize the concentrated likelihood over lambda and report this result. This gives a profile likelihood over the remaining parameters of the covariance.
The covariance/model parameters are alpha, a.wght, and log lambda, and they are separate matrix or vector components of the par.grid list. The cleanest version of this function would just require the par.grid list; however, to make it easier to use there are several options for giving partial information and letting the function create the master parameter list itself. For example, a search over lambda alone should be easy and should not require creating par.grid outside the function. For this option one can just give an LKinfo object; the lambda component in this object is taken as the starting value, with the default being lambda = 1.0.
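For instance, a lambda-only search can be written without building par.grid at all (a sketch, with x, y, and LKinfo as in the code above):
# no par.grid: the search is over lambda only and the lambda value stored
# in LKinfo is the starting value (default 1.0)
  onlyLambda <- LKrig.MLE(x, y, LKinfo = LKinfo)
  onlyLambda$summary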
In the second example below most of the coding is getting the grid of parameters to search into the right form. It is useful to normalize the alpha parameters to sum to one so that the marginal variance of the process is parameterized only by sigma2. To make this easy to implement there is the option to specify the alpha parameters in the form of a mixture model so that the components are positive and sum to one (the gamma variable below). If a gamma component is passed in par.grid then it is assumed to be in this mixture model form and the alpha weights are computed from it. Note that gamma will be a matrix with (nlevel - 1) columns while alpha has nlevel columns.
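The two-level version of this conversion is written out in the example comments below and can be checked directly; for more levels the mapping is assumed to take the analogous multinomial logit form. A small sketch:
  gamma1 <- 0.7
  alpha1 <- 1/(1 + exp(gamma1))
  alpha2 <- exp(gamma1)/(1 + exp(gamma1))
# both weights are positive and the last entry below is exactly 1
  c(alpha1, alpha2, alpha1 + alpha2)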
For those readers that use which.max these functions are natural
extensions and are handy for looking at interpolated surfaces of the
likelihood function.
which.max.matrix: Finds the maximum value in a matrix and returns the row/column index.
 which.max.image  Finds the maximum value in an image matrix and
returns the index and the corresponding grid values.
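A quick sketch of these helpers on a made-up matrix of likelihood values:
  lnLikeGrid <- matrix(c(1, 4, 9, 2, 8, 3), nrow = 2)
# row and column index of the largest entry
  which.max.matrix(lnLikeGrid)
# in image format the corresponding grid values are also returned
  imageObj <- list(x = 1:2, y = 1:3, z = lnLikeGrid)
  which.max.image(imageObj)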
LKrig.make.par.grid: This is usually used as an internal function that converts the list of parameters in par.grid and the LKinfo object into a more complex data structure used by LKrig.MLE. Its returned value is a "list of lists" to make the search over different parameter combinations simple.
omega2Awght, Awght2Omega: Converts between the Awght parameter (the diagonal elements of the SAR matrix) and the omega parameter that provides an unconstrained range for optimization. The link is
Awght <- LKinfo$floorAwght + exp(omega * xDimension)
Note that LKinfo supplies the lower bound on Awght because this is geometry/problem specific. For the 2-d rectangle this is 4.
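A short sketch of the round trip, assuming a 2-d LKinfo object such as the ones created in the examples below (so the floor is 4):
  omega <- Awght2Omega(4.5, LKinfo)
# recovers 4.5
  omega2Awght(omega, LKinfo)
# should agree with omega under the link given above
  log(4.5 - 4)/2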
LKrigFindLambda
| summary | Giving information on the optimization over lambda. | 
| LKinfo | Covariance information object. | 
| llambda.start,lambda.MLE | Initial and final values for lambda. | 
| lnLike.eval | Matrix with all of the log likelihood values that were evaluated. | 
| call | Calling arguments. | 
| Mc | Cholesky decomposition. | 
LKrig.MLE
| summary | A matrix with columns: effective degrees of freedom, ln profile likelihood, generalized cross-validation function, MLE of tau, MLE of sigma2, full likelihood, and number of parameter evaluations. The rows correspond to the different parameter settings in par.grid. | 
| par.grid | List of parameters used in the search. Some parameters might be filled in from the initial par.grid list passed and also from the LKinfo object. | 
| LKinfo | LKinfo list that was either passed or created. | 
| index.MLE | Index for case that has largest Likelihood value. | 
| index.GCV | Index for the case that has the smallest GCV value. | 
| LKinfo.MLE | LKinfo list at the parameters with largest profile likelihood. | 
| lambda.MLE | Value of lambda from grid with largest profile likelihood. | 
| call | Calling sequence for this function. | 
which.max.matrix Returns a 2 column matrix with row and column index of maximum.
which.max.image For an object in image format returns components x,y,z giving the location and the maximum value for the image. Also included is the component ind that is the row and column indices for the maximum in the image matrix.
LKrig.make.par.grid Returns a list with components alpha and a.wght. Each component is a list where each element is a separate set of parameters. This more general format is useful for the non-stationary case when the parameters alpha might be a list of nlevel matrices.
Douglas Nychka
LKrig  LatticeKrig 
# 
# fitting summer precip for  sub region of North America (Florida)
# (tiny subregion is just to make this run under 5 seconds). 
# total precip in 1/10 mm for JJA 
  data(NorthAmericanRainfall)
# rename for less typing
  x<- cbind( NorthAmericanRainfall$longitude, NorthAmericanRainfall$latitude)
  y<- log10(NorthAmericanRainfall$precip)
# cut down the size of this data set so examples run quickly
  ind<- x[,1] > -90 & x[,2] < 35 #
  x<- x[ind,]
  y<- y[ind]
# This is a single level smoother
 
  LKinfo<- LKrigSetup(x,NC=4, nlevel=1, a.wght=5, alpha=1.0)
  lambdaFit<- LKrigFindLambda( x,y,LKinfo=LKinfo)
  lambdaFit$summary
## Not run: 
# grid search over parameters 
  NG<-15
  par.grid<- list( a.wght= rep( 4.05,NG),alpha= rep(1, NG),
                      llambda=  seq(-8,-2,,NG))
  lambda.search.results<-LKrig.MLE( x,y,LKinfo=LKinfo,
                                    par.grid=par.grid,
                                    lambda.profile=FALSE)
  lambda.search.results$summary
# profile likelihood
  plot( lambda.search.results$summary[,1:2], 
         xlab="effective degrees freedom",
         ylab="ln profile likelihood")
# fit at largest likelihood value:
  lambda.MLE.fit<- LKrig( x,y,
                    LKinfo=lambda.search.results$LKinfo.MLE)
## End(Not run)                    
                    
## Not run:                     
# optimizing  Profile likelihood over lambda using optim
# consider 3 values for a.wght (range parameter)
# in this case the log lambdas passed are the starting values for optim.
  NG<-3
  par.grid<- list( a.wght= c( 4.05,4.1,5) ,alpha= rep(1, NG),
                      llambda= c(-5,NA,NA))
# NOTE: NAs in llambda mean use the previous MLE for llambda as the
# current starting value. 
  LKinfo<- LKrigSetup(x,NC=12,nlevel=1, a.wght=5, alpha=1.0) 
  lambda.search.results<-LKrig.MLE(
                              x,y,LKinfo=LKinfo, par.grid=par.grid,
                              lambda.profile=TRUE)
  print(lambda.search.results$summary)
# note first result a.wght = 4.05 is the optimized result for the grid
# search given above.
## End(Not run)
########################################################################    
# search over two multi-resolution levels varying the  levels of alpha's
########################################################################
## Not run: 
# NOTE: search ranges found largely by trial and error to make this
# example work also the grid is quite coarse ( and NC is small) to
# be quick as a help file example
  data(NorthAmericanRainfall)
# rename for less typing
  x<- cbind( NorthAmericanRainfall$longitude, NorthAmericanRainfall$latitude)
# total precip in 1/10 mm for JJA 
 y<- log10(NorthAmericanRainfall$precip)
# cut down the size of this data set so examples run quickly
# examples also work with  the full data set. Also try NC= 100 for a
# nontrivial model.
  ind<- x[,1] > -90 & x[,2] < 35 #
  x<- x[ind,]
  y<- y[ind]
  
  Ndes<- 10  
# NOTE: this is set to be very small just to make this
#       example run fast
  set.seed(124)
  par.grid<- list()
# create grid of alphas to sum to 1 use a mixture model parameterization
#  alpha1 = (1/(1 + exp(gamma1)) ,
# alpha2 = exp( gamma1) / ( 1 + exp( gamma1))
# 
  par.grid$gamma<- cbind(runif( Ndes, -3,2), runif( Ndes, -3,2))
  par.grid$a.wght<- rep( 4.5, Ndes)
# log lambda grid search values
  par.grid$llambda<- runif( Ndes,-5,-3)  
  LKinfo1<- LKrigSetup( x, NC=5, nlevel=3, a.wght=5, alpha=c(1.0,.5,.25))
# NOTE: a.wght in call is not used. Also a better search is to profile over
#  llambda
 alpha.search.results<- LKrig.MLE( x,y,LKinfo=LKinfo1, par.grid=par.grid,
                                    lambda.profile=FALSE)
########################################################################
# Viewing the search results
########################################################################
# this scatterplot is good for a quick look because  effective degrees
# of freedom is a useful summary of fit. 
  plot( alpha.search.results$summary[,1:2], 
         xlab="effective degrees freedom",
         ylab="ln profile likelihood")
#
## End(Not run)
## Not run: 
# a two level model search 
# with profiling over lambda.
data(NorthAmericanRainfall)
# rename for less typing
  x<- cbind( NorthAmericanRainfall$longitude,
             NorthAmericanRainfall$latitude)
# mean total precip in 1/10 mm for JJA 
  y<- log10(NorthAmericanRainfall$precip)
#  This takes a few minutes
  Ndes<- 40 
  nlevel<-2 
  par.grid<- list()
## create grid of alphas to sum to 1 use a mixture model parameterization:
#    alpha1 = (1/(1 + exp(gamma1)) ,
#   alpha2 = exp( gamma1) / ( 1 + exp( gamma1))
  set.seed(123)
  par.grid$gamma<- runif( Ndes,-3,4)
## values for range (a.wght)
  par.grid$a.wght<- 4 + 1/exp(seq( 0,4,,Ndes))
# log lambda grid search values (these are the starting values)
  par.grid$llambda<- rep(-4, Ndes)
  LKinfo1<- LKrigSetup( x, NC=15, nlevel=nlevel, 
                          a.wght=5, alpha=rep( NA,2) ) 
##
## the search over the parameter list in par.grid  maximizing over lambda 
  search.results<- LKrig.MLE( x,y,LKinfo=LKinfo1, par.grid=par.grid,
                                 lambda.profile=TRUE)
# plotting results of likelihood search
set.panel(1,2)
 plot( search.results$summary[,1:2], 
         xlab="effective degrees freedom",
         ylab="ln profile likelihood")
 xtemp<- matrix(NA, ncol=2, nrow=Ndes)
 for( k in 1:Ndes){
   xtemp[k,] <- c( (search.results$par.grid$alpha[[k]])[1],
                  (search.results$par.grid$a.wght[[k]])[1] )
}
 quilt.plot( xtemp,search.results$summary[,2])
#  fit using Tps
 tps.out<- Tps(  xtemp,search.results$summary[,2], lambda=0)
 contour( predictSurface(tps.out), lwd=3,add=TRUE)
 set.panel()
## End(Not run)
## Not run: 
# searching over nu 
data(ozone2)
x<- ozone2$lon.lat
y<- ozone2$y[16,]
good<- !is.na(y)
y<- y[good]
x<- x[good,]
par.grid<- expand.grid( nu = c(.5,1.0, 1.5), a.wght= list(4.1,4.5,5) )
par.grid$llambda<- rep( NA, length(par.grid$nu))
LKinfo<- LKrigSetup(x,  nlevel=3, nu=.5, NC=5, a.wght=4.5)
out<- LKrig.MLE( x,y, LKinfo=LKinfo, par.grid=par.grid)
# take a look
cbind( par.grid, out$summary[,1:5])
## End(Not run)
## Not run: 
# an MLE fit taking advantage of replicated fields
# check based on simulated data
N<- 200
M<-50 # number of independent replicated fields
tau<- sqrt(.01)
set.seed(123)
x<- matrix( runif(N*2), N,2)
                
LKinfo<- LKrigSetup( x, NC=16, nlevel=1,
                 a.wght=4.5, lambda=.01, 
                fixed.Function=NULL,
                normalize=TRUE)  
                
# the replicate fields
truef<-  LKrig.sim(x,LKinfo=LKinfo, M=M )
set.seed(222)
error<- tau*matrix( rnorm(N*M), N,M)
y<- truef + error 
# with correct lambda
obj<- LKrig( x,y, LKinfo=LKinfo, lambda=.01)
print( obj$tau.MLE.FULL)
print( obj$sigma2.MLE.FULL)
fitMLE1<- LKrigFindLambda( x,y, LKinfo=LKinfo)
fitMLE1$summary
aWghtGrid<-  c( 4.01, 4.05, 4.1, 4.2, 4.5, 4.6, 4.7, 5, 8, 16)
par.grid<- list( a.wght = aWghtGrid)
fitMLE2<- LKrig.MLE( x,y, LKinfo=LKinfo,
                      par.grid= par.grid )
fitMLE2$summary   
LKinfo1<- LKinfoUpdate( LKinfo, lambda=.1, a.wght= 4.2)                   
fitMLE4<- LKrigFindLambdaAwght( x,y, LKinfo=LKinfo1)
fitMLE4$summary
plot(  log( aWghtGrid -4)/2, fitMLE2$summary[,2], type="b",
  xlab="log( a.wght - 4)/2",
  ylab= "log Profile likelihood" )
points( log(fitMLE4$summary["a.wght.MLE"] -4)/2,
     fitMLE4$summary["lnProfLike"], pch="+", col="red"  )
xline( log(fitMLE4$summary["a.wght.MLE"] -4)/2, col="red", lty=2)
## End(Not run)