Description

Fits the beta-binomial model and the chance-corrected beta-binomial model to (over-dispersed) binomial data.
Usage

betabin(data, start = c(.5, .5),
        method = c("duotrio", "tetrad", "threeAFC", "twoAFC",
                   "triangle", "hexad", "twofive", "twofiveF"),
        vcov = TRUE, corrected = TRUE, gradTol = 1e-4, ...)

## S3 method for class 'betabin'
summary(object, level = 0.95, ...)
Arguments

data: a matrix or data.frame with two columns; the first column contains the number of successes and the second the total number of cases. The number of rows should correspond to the number of observations.

start: starting values to be used in the optimization.

vcov: logical; should the variance-covariance matrix of the parameters be computed?

method: the sensory discrimination protocol for which d-prime and its standard error should be computed.

corrected: should the chance-corrected or the standard beta-binomial model be estimated?

gradTol: a warning is issued if max|gradient| > gradTol, where 'gradient' is the gradient at the parameter values at which the optimizer terminates. This is not used as a termination or convergence criterion during model fitting.

object: an object of class "betabin", i.e., the result of betabin().

level: the confidence level of the confidence intervals computed by the summary method.
... 

Details

The beta-binomial models are parameterized in terms of mu and gamma, where mu corresponds to a probability parameter and gamma measures over-dispersion. Both parameters are restricted to the interval (0, 1). The parameters of the standard (i.e., corrected = FALSE) beta-binomial model refer to the mean (i.e., probability) and dispersion on the scale of the observations, that is, on the scale where we speak of a probability of a correct answer (Pc). The parameters of the chance-corrected (i.e., corrected = TRUE) beta-binomial model refer to the mean and dispersion on the scale of the "probability of discrimination" (Pd). The mean parameter (mu) is therefore restricted to the interval from zero to one in both models, but it has different interpretations in the two.
The summary method uses the estimate of mu to infer the parameters of the sensory experiment: Pc, Pd and d-prime. These are restricted to their allowed ranges; e.g., Pc is always at least as large as the guessing probability.
Confidence intervals are computed as Wald (normal-based) intervals on the mu-scale, and the confidence limits are subsequently transformed to the Pc, Pd and d-prime scales. Confidence limits are restricted to the allowed ranges of the parameters; for example, no confidence limit will be less than zero.
Standard errors, and therefore also confidence intervals, are only available if the parameters are not at the boundary of their allowed range (the parameter space). If parameters are close to the boundaries of their allowed range, standard errors, and also confidence intervals, may be misleading. The likelihood ratio tests are more accurate. More accurate confidence intervals, such as profile likelihood intervals, may be implemented in the future.
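As an illustration of the transformation described above (a Python sketch, not sensR's code; the estimate, standard error and guessing probability below are made-up numbers for a duo-trio protocol, where mu is Pd and Pc = p_guess + (1 - p_guess) * Pd):

```python
import numpy as np

# Hypothetical Wald interval on the mu-scale (chance-corrected model,
# duo-trio protocol); mu_hat and se are made-up numbers.
mu_hat, se, z = 0.15, 0.12, 1.96
p_guess = 0.5

lo, hi = mu_hat - z * se, mu_hat + z * se   # Wald limits on the mu-scale
pd_lim = np.clip([lo, hi], 0.0, 1.0)        # mu is Pd here; restrict to [0, 1]
pc_lim = p_guess + (1 - p_guess) * pd_lim   # transform to the Pc-scale

print(pd_lim, pc_lim)
```

Note how the lower limit, which is negative on the mu-scale, is clipped to zero on the Pd-scale and to the guessing probability on the Pc-scale.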
The summary method provides a likelihood ratio test of over-dispersion on one degree of freedom and a likelihood ratio test of association (i.e., where the null hypothesis is "no difference" and the alternative hypothesis is "any difference") on two degrees of freedom (chi-square tests). Since the gamma parameter is tested on the boundary of the parameter space, the correct degrees of freedom for the first test is probably 1/2 rather than one, or somewhere in between, and the latter test is probably also on less than two degrees of freedom. Research is needed to determine the appropriate number of degrees of freedom to use in each case. The choices used here are believed to be conservative, so the stated p-values are probably a little too large.
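The boundary issue can be illustrated numerically. A minimal Python sketch (not part of sensR; the likelihood ratio statistic is a hypothetical value) compares the conservative 1-df p-value with the p-value from a 50:50 chi-bar-square mixture of a point mass at zero and a chi-square on one degree of freedom:

```python
from scipy.stats import chi2

lr = 3.2  # hypothetical likelihood ratio statistic for testing gamma = 0

# Conservative choice: chi-square on 1 df
p_conservative = chi2.sf(lr, df=1)

# Boundary-adjusted: 50:50 mixture of chi2(0) (a point mass at zero) and
# chi2(1), which halves the upper-tail probability for lr > 0
p_mixture = 0.5 * chi2.sf(lr, df=1)

print(p_conservative, p_mixture)
```

The mixture p-value is exactly half the conservative one, consistent with the remark that the stated p-values are probably a little too large.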
The log-likelihood of the standard beta-binomial model is

\ell(\alpha, \beta; x, n) = \sum_{j=1}^{N} \left\{ \log \binom{n_j}{x_j} - \log \mathrm{Beta}(\alpha, \beta) + \log \mathrm{Beta}(\alpha + x_j, \beta - x_j + n_j) \right\}

and the log-likelihood of the chance-corrected beta-binomial model is

\ell(\alpha, \beta; x, n) = \sum_{j=1}^{N} \left\{ C + \log \left[ \sum_{i=0}^{x_j} \binom{x_j}{i} (1 - p_g)^{n_j - x_j + i} p_g^{x_j - i} \mathrm{Beta}(\alpha + i, n_j - x_j + \beta) \right] \right\}

where

C = \log \binom{n_j}{x_j} - \log \mathrm{Beta}(\alpha, \beta)

and where \mu = \alpha/(\alpha + \beta), \gamma = 1/(\alpha + \beta + 1), Beta is the Beta function (cf. beta), N is the number of independent binomial observations, i.e., the number of rows in data, and p_g is the guessing probability, pGuess.
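Readers who want to check the first formula numerically can use the following Python sketch (an illustration using scipy's betaln, not sensR's implementation). With alpha = beta = 1 the beta-binomial reduces to a discrete uniform distribution on 0, ..., n, so the log-likelihood of a single observation is log(1/(n + 1)):

```python
import numpy as np
from scipy.special import betaln, gammaln

def bb_loglik(x, n, alpha, beta):
    """Standard beta-binomial log-likelihood, summed over the N observations."""
    x = np.asarray(x, dtype=float)
    n = np.asarray(n, dtype=float)
    log_choose = gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)
    return np.sum(log_choose - betaln(alpha, beta)
                  + betaln(alpha + x, beta - x + n))

# Sanity check: alpha = beta = 1 gives a uniform distribution on 0..n,
# so a single observation with n = 10 has log-likelihood log(1/11)
print(bb_loglik([3], [10], 1.0, 1.0))
```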
The variance-covariance matrix (and standard errors) is based on the inverted Hessian at the optimum. The Hessian is obtained with the hessian function from the numDeriv package. The gradient at the optimum is evaluated with grad from the numDeriv package. The bounded optimization is performed with the "L-BFGS-B" optimizer in optim.
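Outside of R, the same kind of bounded fit can be sketched with SciPy's "L-BFGS-B" optimizer (a hypothetical re-implementation for illustration, not sensR's code), maximizing the standard beta-binomial log-likelihood over (mu, gamma) in the open unit square via the relations alpha = mu(1 - gamma)/gamma and beta = (1 - mu)(1 - gamma)/gamma:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, gammaln

# The example data from the Examples section
x = np.array([3, 2, 6, 8, 3, 4, 6, 0, 9, 9, 0, 2, 1, 2, 8, 9, 5, 7], dtype=float)
n = np.array([10, 9, 8, 9, 8, 6, 9, 10, 10, 10, 9, 9, 10, 10, 10, 10, 9, 10],
             dtype=float)

def nll(par):
    """Negative log-likelihood of the standard beta-binomial model."""
    mu, gamma = par
    s = (1.0 - gamma) / gamma                      # alpha + beta
    a, b = mu * s, (1.0 - mu) * s
    log_choose = gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)
    return -np.sum(log_choose - betaln(a, b) + betaln(a + x, b - x + n))

eps = 1e-6  # keep parameters strictly inside (0, 1)
fit = minimize(nll, x0=[0.5, 0.5], method="L-BFGS-B",
               bounds=[(eps, 1 - eps), (eps, 1 - eps)])
print(fit.x)  # estimates of (mu, gamma)
```

Because these data are strongly over-dispersed (several 0/10 and 9/10 observations), the fitted model attains a clearly lower negative log-likelihood than a near-binomial fit with gamma forced close to zero.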
The following additional methods are implemented for objects of class "betabin": print, vcov and logLik.
Value

An object of class "betabin" with elements
coefficients: named vector of coefficients

vcov: variance-covariance matrix of the parameter estimates if vcov = TRUE

data: the data supplied to the function

call: the matched call

logLik: the value of the log-likelihood at the MLEs

method: the method used for the fit

convergence: 0 indicates convergence. For other error messages, see optim

message: possible error message; see optim

counts: the number of iterations used in the optimization; see optim

corrected: is the chance-corrected model estimated?

logLikNull: log-likelihood of the binomial model with prop = pGuess

logLikMu: log-likelihood of a binomial model with prop = sum(x)/sum(n)
Author(s)

Rune Haubo B Christensen

References

Brockhoff, P.B. (2003). The statistical power of replications in difference tests. Food Quality and Preference, 14, 405–417.
See Also

triangle, twoAFC, threeAFC, duotrio, tetrad, twofive, twofiveF, hexad
Examples

## Create data:
x <- c(3,2,6,8,3,4,6,0,9,9,0,2,1,2,8,9,5,7)
n <- c(10,9,8,9,8,6,9,10,10,10,9,9,10,10,10,10,9,10)
dat <- data.frame(x, n)

## Chance-corrected beta-binomial model:
(bb0 <- betabin(dat, method = "duotrio"))
summary(bb0)

## Uncorrected beta-binomial model:
(bb <- betabin(dat, corrected = FALSE, method = "duotrio"))
summary(bb)
vcov(bb)
logLik(bb)
AIC(bb)
coef(bb)
