score.test    R Documentation
Compute score tests comparing a fitted model and a more general alternative model.
score.test(secr, ..., betaindex = NULL, trace = FALSE, ncores = NULL, .relStep = 0.001,
minAbsPar = 0.1)
score.table(object, ..., sort = TRUE, dmax = 10)
secr        fitted secr model
...         one or more alternative models OR a fitted secr model
trace       logical. If TRUE then output one-line summary at each evaluation of the likelihood
ncores      integer number of threads for parallel processing
.relStep    see fdHess
minAbsPar   see fdHess
betaindex   vector of indices mapping fitted values to parameters in the alternative model
object      score.test object or list of such objects
sort        logical for whether output rows should be in descending order of AIC
dmax        threshold of dAIC for inclusion in model set
Score tests allow fast model selection (e.g. Catchpole & Morgan 1996).
Only the simpler model need be fitted. This implementation uses the
observed information matrix, which may sometimes mislead (Morgan et al.
2007). The gradient and second derivative of the likelihood function are
evaluated numerically at the point in the parameter space of the second
model corresponding to the fit of the first model. This operation uses
the function fdHess
of the nlme package; the likelihood
must be evaluated several times, but many fewer times than would be
needed to fit the model. The score statistic is an approximation to the
likelihood ratio; this allows the difference in AIC to be estimated.
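The following is a minimal sketch of the score-statistic calculation described above, assuming a log-likelihood function loglikH1 for the alternative model and a vector beta0 of H0 estimates mapped into the H1 parameter space (both are placeholders, not part of secr):

## sketch only; 'loglikH1' and 'beta0' are hypothetical placeholders
library(nlme)
scorestat <- function (loglikH1, beta0, .relStep = 0.001, minAbsPar = 0.1) {
    fd <- fdHess(beta0, loglikH1, .relStep = .relStep, minAbsPar = minAbsPar)
    g <- fd$gradient                     ## score vector evaluated at beta0
    J <- -fd$Hessian                     ## observed information at beta0
    as.numeric(t(g) %*% solve(J) %*% g)  ## approximately chi-squared under H0
}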
Covariates are inferred from components of the reference model secr. If the new models require additional covariates these may usually be added to the respective component of secr.
Mapping of parameters between the fitted and alternative models sometimes requires user intervention via the betaindex argument. For example, betaindex = c(1,2,4) is the correct mapping when comparing the null model (D ~ 1, g0 ~ 1, sigma ~ 1) to one with a behavioural effect on g0 (D ~ 1, g0 ~ b, sigma ~ 1).
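In that case a call with explicit mapping might look like this (using the secrdemo.0 example model from secr; the mapping is normally inferred automatically, so betaindex is given here only for illustration):

## H0 betas 1 (D), 2 (g0), 3 (sigma) map to H1 betas 1, 2 and 4;
## H1 beta 3 (the behavioural effect on g0) has no H0 counterpart
score.test (secrdemo.0, g0 ~ b, betaindex = c(1,2,4))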
The arguments .relStep and minAbsPar control the numerical gradient calculation and are passed directly to fdHess. More investigation is needed to determine optimal settings.
score.table summarises one or more score tests in the form of a model comparison table. The ... argument here allows the inclusion of additional score test objects (note the meaning differs from score.test). Approximate AIC values are used to compute relative AIC model weights for all models within dmax AIC units of the best model.
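The weights are essentially Akaike weights computed over the retained models; a rough sketch of the idea (not necessarily the exact code in score.table) is:

## sketch: relative AIC weights for models within dmax units of the best
dAIC <- AIC - min(AIC)      ## AIC here is a vector of approximate AIC values
ok   <- dAIC < dmax
w    <- exp(-dAIC[ok] / 2)
w / sum(w)                  ## weights scaled to sum to 1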
If ncores = NULL then the existing value from the environment variable RCPP_PARALLEL_NUM_THREADS is used (see setNumThreads).
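For example, the number of threads may be set once per session with setNumThreads, after which calls that leave ncores = NULL reuse that value:

setNumThreads(2)                        ## sets RCPP_PARALLEL_NUM_THREADS
st <- score.test (secrdemo.0, g0 ~ b)   ## ncores = NULL uses the stored value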
An object of class ‘score.test’ that inherits from ‘htest’, a list with components
statistic   the value of the chi-squared test statistic (score statistic)
parameter   degrees of freedom of the approximate chi-squared distribution of the test statistic (difference in number of parameters between H0 and H1)
p.value     probability of the test statistic assuming a chi-square distribution
method      a character string indicating the type of test performed
data.name   character string with null hypothesis, alternative hypothesis and arguments to function call from fit of H0
H0          simpler model
np0         number of parameters in simpler model
H1          alternative model
H1.beta     coefficients of alternative model
AIC         Akaike's information criterion, approximated from the score statistic
AICc        AIC with the small-sample adjustment of Hurvich & Tsai (1989)
If ... defines several alternative models then a list of score.test objects is returned.
The output from score.table
is a dataframe with one row per model, including the reference model.
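The approximate AIC and AICc components can be understood as follows (a reading of the description above, not necessarily the exact code used): since the score statistic approximates 2 * (logLik(H1) - logLik(H0)),

## sketch; AIC0, statistic, np0, np1 and n are placeholders for the H0 AIC,
## the score statistic, the numbers of parameters and the sample size
approxAIC  <- AIC0 - statistic + 2 * (np1 - np0)
approxAICc <- approxAIC + 2 * np1 * (np1 + 1) / (n - np1 - 1)  ## Hurvich & Tsai (1989)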
This implementation is experimental. The AIC values, and values derived from them, are approximations that may differ considerably from AIC values obtained by fitting and comparing the respective models. Use of the observed information matrix may not be optimal.
Weights were based on AICc rather than AIC prior to version 5.0.0.
Catchpole, E. A. and Morgan, B. J. T. (1996) Model selection of ring-recovery models using score tests. Biometrics 52, 664–672.
Hurvich, C. M. and Tsai, C. L. (1989) Regression and time series model selection in small samples. Biometrika 76, 297–307.
McCrea, R. S. and Morgan, B. J. T. (2011) Multistate mark-recapture model selection using score tests. Biometrics 67, 234–241.
Morgan, B. J. T., Palmer, K. J. and Ridout, M. S. (2007) Negative score test statistic. American Statistician 61, 285–288.
AIC, LR.test
## Not run:
AIC (secrdemo.0, secrdemo.b)
st <- score.test (secrdemo.0, g0 ~ b)
st
score.table(st)
## adding a time covariate to separate occasions (1,2) from (3,4,5)
secrdemo.0$timecov <- data.frame(t2 = factor(c(1,1,2,2,2)))
st2 <- score.test (secrdemo.0, g0 ~ t2)
score.table(st,st2)
## End(Not run)