AIC.secr: Compare SECR Models

Description

Terse report on the fit of one or more spatially explicit capture–recapture models. Models with smaller values of AIC (Akaike's Information Criterion) are preferred. Extraction ([) and logLik methods are included.

Usage

## S3 method for class 'secr'
AIC(object, ..., sort = TRUE, k = 2, dmax = 10, criterion = c("AICc","AIC"), chat = NULL)
## S3 method for class 'secrlist'
AIC(object, ..., sort = TRUE, k = 2, dmax = 10, criterion = c("AICc","AIC"), chat = NULL)
## S3 method for class 'secr'
logLik(object, ...)
secrlist(...)
## S3 method for class 'secrlist'
x[i]

Arguments

object

secr object output from the function secr.fit, or a list of such objects with class c("secrlist", "list")

...

other secr objects

sort

logical for whether rows should be sorted by the ascending value of the criterion (default AICc)

k

numeric, penalty per parameter to be used; always k = 2 in this method

dmax

numeric, maximum AIC difference for inclusion in confidence set

criterion

character, criterion to use for model comparison and weights

chat

numeric, optional variance inflation factor for quasi-AIC

x

an object of class ‘secrlist’

i

indices of the fitted models to extract

Details

Models to be compared must have been fitted to the same data and use the same likelihood method (full vs conditional). From version 4.1 a warning is issued if AICcompatible reveals a problem.

AIC is given by

\mbox{AIC} = -2\log(L(\hat{\theta})) + 2K

where K is the number of "beta" parameters estimated.

AIC with small sample adjustment is given by

\mbox{AIC}_c = -2\log(L(\hat{\theta})) + 2K + \frac{2K(K+1)}{n-K-1}.

The sample size n is the number of individuals observed at least once (i.e. the number of rows in capthist).
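The calculation may be reproduced by hand; the following is a minimal sketch assuming ‘fit’ is a single-session model returned by secr.fit (the name ‘fit’ is a placeholder):

## number of estimated beta parameters, from the 'df' attribute of logLik
K <- attr(logLik(fit), "df")
## effective sample size: individuals detected at least once
n <- nrow(fit$capthist)
## AICc as defined above
-2 * as.numeric(logLik(fit)) + 2 * K + 2 * K * (K + 1) / (n - K - 1)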

Model weights are calculated as

w_i = \frac{\exp(-\Delta_i / 2)}{\sum_j \exp(-\Delta_j / 2)},

where \Delta refers to differences in AIC or AICc depending on the argument ‘criterion’. AICc is widely used, but AIC may be better (Fletcher 2019, p. 60).

Models for which delta > dmax are given a weight of zero and are excluded from the summation. Model weights may be used to form model-averaged estimates of real or beta parameters with modelAverage (see also Buckland et al. 1997, Burnham and Anderson 2002).
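The weight calculation may be sketched in base R as follows; this mirrors the formula above and is not the internal code of AIC.secr (the function name is illustrative):

## Akaike weights from a vector of AIC or AICc values
AICwts <- function (crit, dmax = 10) {
    delta <- crit - min(crit)       # Delta_i
    wt <- exp(-delta / 2)
    wt[delta > dmax] <- 0           # exclude models outside the confidence set
    wt / sum(wt)                    # normalise so weights sum to 1
}
## e.g. AICwts(c(210.3, 212.8, 224.1))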

The argument k is included for consistency with the generic method AIC.

secrlist forms a list of fitted models (an object of class ‘secrlist’) from the fitted models in .... Arguments may include secrlists. If secr components are named the model names will be retained (see Examples).

If chat (\hat c) is provided then quasi-AIC values are computed (secr >= 4.6.0):

\mbox{QAIC} = -2\log(L(\hat{\theta})) / \hat{c} + 2K.
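For example, a value of chat obtained from an external assessment of overdispersion may be passed directly to AIC; here 1.4 is an arbitrary illustrative value and the demo models are those used in the Examples:

AIC(secrdemo.0, secrdemo.b, chat = 1.4)   # reports QAIC, QAICc etc.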

Value

A data frame with one row per model. By default, rows are sorted by ascending 'criterion' (default AICc).

model

character string describing the fitted model

detectfn

shape of detection function fitted (halfnormal vs hazard-rate)

npar

number of parameters estimated

logLik

maximized log likelihood

AIC

Akaike's Information Criterion

AICc

AIC with small-sample adjustment of Hurvich & Tsai (1989)

And depending on criterion:

dAICc

difference between AICc of this model and the one with smallest AICc

AICcwt

AICc model weight

or

dAIC

difference between AIC of this model and the one with smallest AIC

AICwt

AIC model weight

logLik.secr returns an object of class ‘logLik’ that has attribute df (degrees of freedom = number of estimated parameters).
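For example, with the null demo model used in the Examples:

logLik(secrdemo.0)               # maximized log likelihood
attr(logLik(secrdemo.0), "df")   # number of estimated parameters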

If the variance inflation factor 'chat' is provided, the outputs AIC, AICc etc. are replaced by the corresponding quasi-AIC values, labelled QAIC, QAICc etc.

Note

It is not meaningful to compare models by AIC if they relate to different data (see AICcompatible).

Specifically:

  • an ‘secrlist’ generated and saved to file by mask.check may be supplied as the object argument of AIC.secrlist, but the results are not informative

  • models fitted by the conditional likelihood (CL = TRUE) and full likelihood (CL = FALSE) methods cannot be compared

  • hybrid mixture models (using hcov argument of secr.fit) should not be compared with other models

  • grouped models (using groups argument of secr.fit) should not be compared with other models

  • multi-session models should not be compared with single-session models based on the same data.

A likelihood-ratio test (LR.test) is a more direct way to compare two models.
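For the nested demo models used in the Examples the test would be called as below (a sketch; see the LR.test help page for details):

LR.test(secrdemo.0, secrdemo.b)   # null vs learned trap response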

Goodness-of-fit and adjustment of AIC for overdispersion (cf QAIC in MARK) were not addressed before secr 4.6.0; the optional chat argument described in Details now provides quasi-AIC values.

From version 2.6.0 the user may select between AIC and AICc for comparing models; previously only AICc was used and AICc weights were reported as ‘AICwt’. There is evidence that AIC may be better for model averaging even when sample sizes are small (Turek and Fletcher 2012).

References

Buckland S. T., Burnham K. P. and Augustin, N. H. (1997) Model selection: an integral part of inference. Biometrics 53, 603–618.

Burnham, K. P. and Anderson, D. R. (2002) Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach. Second edition. New York: Springer-Verlag.

Fletcher, D. (2019) Model averaging. SpringerBriefs in Statistics. Berlin: Springer-Verlag.

Hurvich, C. M. and Tsai, C. L. (1989) Regression and time series model selection in small samples. Biometrika 76, 297–307.

Turek, D. and Fletcher, D. (2012) Model-averaged Wald confidence intervals. Computational Statistics and Data Analysis 56, 2809–2815.

See Also

AICcompatible, modelAverage, AIC, secr.fit, print.secr, score.test, LR.test, deviance.secr

Examples

## Compare two models fitted previously
## secrdemo.0 is a null model
## secrdemo.b has a learned trap response

AIC(secrdemo.0, secrdemo.b)

## Form secrlist and pass to AIC.secr
temp <- secrlist(null = secrdemo.0, learnedresponse = secrdemo.b)
AIC(temp)
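
## Further illustrative calls using the extraction method and the
## 'criterion' argument documented above
temp[1]                         # extract the first model; names are retained
AIC(temp, criterion = "AIC")    # compare by AIC rather than the default AICc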

