AIC.openCR: Compare openCR Models


Description

Terse report on the fit of one or more openCR capture–recapture models. Models with smaller values of AIC (Akaike's Information Criterion) are preferred.

Usage


## S3 method for class 'openCR'
AIC(object, ..., sort = TRUE, k = 2, dmax = 10,  use.rank = FALSE,
                        svtol = 1e-5, criterion = c('AIC','AICc'), n = NULL)

## S3 method for class 'openCRlist'
AIC(object, ..., sort = TRUE, k = 2, dmax = 10,  use.rank = FALSE,
                        svtol = 1e-5, criterion = c('AIC','AICc'), n = NULL)

## S3 method for class 'openCR'
logLik(object, ...)

Arguments

object

openCR object output from the function openCR.fit, or openCRlist

...

other openCR objects

sort

logical for whether rows should be sorted by ascending value of the chosen criterion

k

numeric, the penalty per parameter to be used; always k = 2 in this method

dmax

numeric, the maximum AIC difference for inclusion in confidence set

use.rank

logical; if TRUE the number of parameters is based on the rank of the Hessian matrix

svtol

minimum singular value (eigenvalue) of Hessian used when counting non-redundant parameters

criterion

character, criterion to use for model comparison and weights

n

integer effective sample size

Details

Models to be compared must have been fitted to the same data and use the same likelihood method (full vs conditional).

AIC with small sample adjustment is given by

AICc = -2log(L(theta-hat)) + 2K + 2K(K+1)/(n-K-1)

where K is the number of 'beta' parameters estimated. By default, the effective sample size n is the number of individuals observed at least once (i.e. the number of rows in the capthist object). This differs from the default in MARK, which for CJS models is the sum of the sizes of the release cohorts (see m.array).
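
As an illustration of the formula only (not the package's internal code), AICc can be computed directly from a maximised log likelihood; the numbers below are hypothetical:

loglik <- -250.3    # maximised log likelihood (hypothetical)
K <- 5              # number of estimated beta parameters
n <- 80             # effective sample size (number of capture histories)
AICval  <- -2 * loglik + 2 * K
AICcval <- -2 * loglik + 2 * K + 2 * K * (K + 1) / (n - K - 1)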

Model weights are calculated as

w_i = exp(-dAICc_i / 2) / sum{ exp(-dAICc_i / 2) }

Models for which dAIC > dmax are given a weight of zero and are excluded from the summation. Model weights may be used to form model-averaged estimates of real or beta parameters with modelAverage (see also Buckland et al. 1997, Burnham and Anderson 2002).
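
The weight calculation can be sketched in a few lines of R. This illustrates the formula with made-up AICc values and the default dmax = 10; it is not the package's internal implementation:

AICcvals <- c(502.1, 503.4, 510.9, 515.2)   # hypothetical AICc values
dAICc <- AICcvals - min(AICcvals)
w <- exp(-dAICc / 2)
w[dAICc > 10] <- 0     # models outside the confidence set (dmax = 10)
w <- w / sum(w)        # Akaike weights summing to 1
round(w, 4)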

The argument k is included for consistency with the generic method AIC.

Value

A data frame with one row per model. By default, rows are sorted by ascending value of the chosen criterion.

model

character string describing the fitted model

npar

number of parameters estimated

rank

rank of Hessian

logLik

maximized log likelihood

AIC

Akaike's Information Criterion

AICc

AIC with small-sample adjustment of Hurvich & Tsai (1989)

dAICc

difference between the AICc of this model and the smallest AICc among the fitted models

AICwt

AICc model weight

logLik.openCR returns an object of class ‘logLik’ that has attribute df (degrees of freedom = number of estimated parameters).
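
For example, assuming m1 is a fitted openCR model as in the Examples below, the log likelihood and its degrees of freedom may be extracted directly (illustrative sketch, not run here):

ll <- logLik(m1)    # object of class 'logLik'
attr(ll, 'df')      # number of estimated parameters
AIC(ll)             # generic AIC applied to the logLik object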

Note

The default criterion is AIC, not AICc as in secr 3.1.

Computed values differ from MARK for various reasons. MARK uses the number of observations, not the number of capture histories, when computing AICc. It is also likely that MARK will count parameters differently.

It is not meaningful to compare models by AIC if they relate to different data.

The issue of goodness-of-fit and possible adjustment of AIC for overdispersion has yet to be addressed (cf QAIC in MARK).

References

Buckland S. T., Burnham K. P. and Augustin, N. H. (1997) Model selection: an integral part of inference. Biometrics 53, 603–618.

Burnham, K. P. and Anderson, D. R. (2002) Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach. Second edition. New York: Springer-Verlag.

Hurvich, C. M. and Tsai, C. L. (1989) Regression and time series model selection in small samples. Biometrika 76, 297–307.

See Also

AIC, openCR.fit, print.openCR, LR.test

Examples


## Not run: 
m1 <- openCR.fit(ovenCH, type = 'JSSAf')
m2 <- openCR.fit(ovenCH, type = 'JSSAf', model = list(p~session))
AIC(m1, m2)

## End(Not run)
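
The comparison could also be run on an openCRlist, or with the small-sample criterion; this extension of the example is a sketch and was not run here:

## Not run: 
fits <- openCRlist(m1, m2)
AIC(fits, criterion = 'AICc')

## End(Not run)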

