InfoCritCompare: Compare 'glmssn' Information Criteria


Compare glmssn Information Criteria

Description

InfoCritCompare displays important model criteria for each glmssn-class object in the model list.

Usage

InfoCritCompare(model.list)

Arguments

model.list

a list of fitted glmssn-class model objects in the form list(model1, model2, ...)

Details

InfoCritCompare displays important model criteria that can be used to compare and select spatial statistical models. For instance, spatial models can be compared with non-spatial models, other spatial models, or both.

Value

InfoCritCompare returns a data.frame of the model criteria for each specified glmssn-class object. These are useful for comparing and selecting models. The columns in the data.frame are described below; a sketch showing how the cross-validation criteria can be computed by hand follows the list. In the descriptions below, 'obs' is an observed data value, 'pred' is its prediction using cross-validation, and 'predSE' is the prediction standard error using cross-validation.

formula

model formula

EstMethod

estimation method, either maximum likelihood (ML) or restricted maximum likelihood (REML)

Variance_Components

names of the variance components, including the autocovariance model names, the nugget effect, and the random effects.

neg2LogL

-2 log-likelihood. Note that neg2LogL is only returned if the Gaussian distribution (default) was specified when creating the glmssn object.

AIC

Akaike Information Criterion (AIC). Note that AIC is only returned if the Gaussian distribution (default) was specified when creating the glmssn object.

bias

bias, computed as mean(obs - pred).

std.bias

standardized bias, computed as mean((obs - pred)/predSE).

RMSPE

root mean-squared prediction error, computed as sqrt(mean((obs - pred)^2)).

RAV

root average variance, computed as sqrt(mean(predSE^2)). If the prediction standard errors are being estimated well, this should be close to RMSPE.

std.MSPE

standardized mean-squared prediction error, computed as mean(((obs - pred)/predSE)^2). If the prediction standard errors are being estimated well, this should be close to 1.

cov.80

the proportion of times that the observed value was within the prediction interval formed from pred +- qt(.9, df)*predSE, where qt is the quantile t function, and df is the number of degrees of freedom. If there is little bias and the prediction standard errors are being estimated well, this should be close to 0.8 for large sample sizes.

cov.90

the proportion of times that the observed value was within the prediction interval formed from pred +- qt(.95, df)*predSE, where qt is the quantile t function, and df is the number of degrees of freedom. If there is little bias and the prediction standard errors are being estimated well, this should be close to 0.9 for large sample sizes.

cov.95

the proportion of times that the observed value was within the prediction interval formed from pred +- qt(.975, df)*predSE, where qt is the quantile t function, and df is the number of degrees of freedom. If there is little bias and the prediction standard errors are being estimated well, this should be close to 0.95 for large sample sizes.
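
As a minimal sketch of how these criteria are computed, the statistics above can be reproduced by hand from the cross-validation output. The snippet below assumes cv is a data frame with columns obs, pred, and predSE; the column names and the choice of degrees of freedom are illustrative assumptions, not the package's internal convention.

  # Minimal sketch: hand computation of the cross-validation criteria.
  # Assumes cv has columns obs, pred, predSE (illustrative names).
  n  <- nrow(cv)
  df <- n - 1            # degrees of freedom (an assumption; SSN's exact df may differ)

  bias     <- mean(cv$obs - cv$pred)                    # bias
  std.bias <- mean((cv$obs - cv$pred)/cv$predSE)        # standardized bias
  RMSPE    <- sqrt(mean((cv$obs - cv$pred)^2))          # root mean-squared prediction error
  RAV      <- sqrt(mean(cv$predSE^2))                   # root average variance
  std.MSPE <- mean(((cv$obs - cv$pred)/cv$predSE)^2)    # standardized MSPE

  # Empirical coverage of the nominal 80/90/95% prediction intervals
  cov.80 <- mean(abs(cv$obs - cv$pred) < qt(.9,   df)*cv$predSE)
  cov.90 <- mean(abs(cv$obs - cv$pred) < qt(.95,  df)*cv$predSE)
  cov.95 <- mean(abs(cv$obs - cv$pred) < qt(.975, df)*cv$predSE)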

Author(s)

Jay Ver Hoef support@SpatialStreamNetworks.com

See Also

glmssn, summary.glmssn, AIC, CrossValidationStatsSSN

Examples


  library(SSN)
  data(modelFits)
  # For examples only: make sure all models have the correct path.
  # If you use importSSN(), the path will be correct.
  fitNS$ssn.object <- updatePath(fitNS$ssn.object,
    paste0(tempdir(), '/MiddleFork04.ssn'))
  fitRE$ssn.object <- updatePath(fitRE$ssn.object,
    paste0(tempdir(), '/MiddleFork04.ssn'))
  fitSp$ssn.object <- updatePath(fitSp$ssn.object,
    paste0(tempdir(), '/MiddleFork04.ssn'))
  fitSpRE1$ssn.object <- updatePath(fitSpRE1$ssn.object,
    paste0(tempdir(), '/MiddleFork04.ssn'))
  fitSpRE2$ssn.object <- updatePath(fitSpRE2$ssn.object,
    paste0(tempdir(), '/MiddleFork04.ssn'))

  compare.models <- InfoCritCompare(list(fitNS, fitRE, fitSp, fitSpRE1, fitSpRE2))
  
  # Examine the model criteria
  compare.models

  # Compare the AIC values for all models with random effects
  compare.models[c(2,4,5),c("Variance_Components","AIC")]
  
  # Compare the RMSPE for the spatial models
  compare.models[c(3,4,5),c("Variance_Components","RMSPE")]
  
  # Compare the RMSPE between spatial and non-spatial models
  compare.models[c(1,3),c("formula","Variance_Components", "RMSPE")]
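
  # One more comparison one might run: rank all models by AIC, smallest first
  # (AIC is only present for Gaussian fits, as noted above)
  compare.models[order(compare.models$AIC), c("formula", "AIC")]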

