lc.2raters {sirt} | R Documentation
This function estimates a latent class model for ratings on a single item provided by two exchangeable raters (Uebersax & Grove, 1990). In addition, several measures of rater agreement are computed (see, e.g., Gwet, 2010).
lc.2raters(data, conv=0.001, maxiter=1000, progress=TRUE)
## S3 method for class 'lc.2raters'
summary(object,...)
data: Data frame with the item responses of the two raters (categories must be ordered from 0 to K)
conv: Convergence criterion
maxiter: Maximum number of iterations
progress: An optional logical indicating whether iteration progress should be displayed
object: Object of class lc.2raters
...: Further arguments to be passed
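As a quick illustration of these arguments, a fit with a stricter convergence criterion and suppressed iteration output might look as follows (a minimal sketch; data.si05$Ex1 is the two-rater example dataset used in the Examples below):

# Minimal sketch: fit the latent class model with non-default control settings
library(sirt)
data(data.si05)
mod <- sirt::lc.2raters(data.si05$Ex1, conv=1e-5, maxiter=5000, progress=FALSE)
summary(mod)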
For two exchangeable raters who provide ratings on an item with K+1 categories 0, ..., K, a latent class model with K+1 classes is defined. Let P(X=x, Y=y | c) denote the probability that the first rating is x and the second rating is y, given the true but unknown item category (class) c. Ratings are assumed to be locally independent, i.e.,

P(X=x, Y=y | c) = P(X=x | c) \cdot P(Y=y | c) = p_{x|c} \cdot p_{y|c}

Note that P(X=x | c) = P(Y=x | c) = p_{x|c} holds due to the exchangeability of the raters. The latent class model estimates the true class proportions \pi_c and the conditional item response probabilities p_{x|c}.
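To make the local independence assumption concrete, the following minimal sketch (with hypothetical parameter values, not output of lc.2raters) computes the marginal probability table of a rating pair implied by the model, P(X=x, Y=y) = \sum_c \pi_c p_{x|c} p_{y|c}:

# Minimal sketch with hypothetical values for pi_c and p_{x|c}
pi.c <- c(0.6, 0.4)                 # hypothetical true class proportions
p.xc <- rbind(c(0.8, 0.2),          # p_{x|c}: rows = classes c, columns = categories x
              c(0.1, 0.9))
joint <- matrix(0, ncol(p.xc), ncol(p.xc))
for (cc in seq_along(pi.c)) {
    joint <- joint + pi.c[cc] * outer(p.xc[cc, ], p.xc[cc, ])
}
round(joint, 3)   # symmetric table; large diagonal entries indicate high expected agreement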
A list with the following entries:

classprob.1rater.like: Classification probability for a single rater, based on the likelihood
classprob.1rater.post: Classification probability for a single rater, based on the posterior
classprob.2rater.like: Classification probability for two raters, based on the likelihood
classprob.2rater.post: Classification probability for two raters, based on the posterior
f.yi.qk: Likelihood of each pair of ratings
f.qk.yi: Posterior of each pair of ratings
probs: Item response probabilities
pi.k: Estimated class proportions
pi.k.obs: Observed manifest class proportions
freq.long: Frequency table of ratings in long format
freq.table: Symmetrized frequency table of ratings
agree.stats: Measures of rater agreement, including percentage agreement, Cohen's kappa, Gwet's AC1 (Gwet, 2008), and Aickin's alpha (Aickin, 1990)
data: Used dataset
N.categ: Number of categories
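The returned entries can be inspected directly from the fitted object, for instance (a minimal sketch, using the model from Example 1 below):

# Minimal sketch: inspect components of a fitted lc.2raters object
library(sirt)
data(data.si05)
mod1 <- sirt::lc.2raters(data.si05$Ex1)
mod1$pi.k          # estimated latent class proportions
mod1$probs         # conditional item response probabilities
mod1$agree.stats   # rater agreement measures
mod1$freq.table    # symmetrized frequency table of the two ratings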
Aickin, M. (1990). Maximum likelihood estimation of agreement in the constant predictive probability model, and its relation to Cohen's kappa. Biometrics, 46, 293-302.
Gwet, K. L. (2008). Computing inter-rater reliability and its variance in the presence of high agreement. British Journal of Mathematical and Statistical Psychology, 61, 29-48.
Gwet, K. L. (2010). Handbook of Inter-Rater Reliability. Advanced Analytics, Gaithersburg. http://www.agreestat.com/
Uebersax, J. S., & Grove, W. M. (1990). Latent class analysis of diagnostic agreement. Statistics in Medicine, 9, 559-572.
See also rm.facets and rm.sdt for specifying rater models.
See also the irr package for measures of rater agreement.
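For a quick cross-check of the descriptive agreement measures, the irr package can be applied to the same rating data (a minimal sketch; agree() and kappa2() are functions of the irr package, and it is assumed, as in the Examples, that data.si05$Ex2 holds one column per rater):

# Minimal sketch: descriptive agreement measures from the irr package
library(irr)
data(data.si05, package="sirt")
irr::agree(data.si05$Ex2)    # percentage agreement
irr::kappa2(data.si05$Ex2)   # Cohen's (unweighted) kappa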
#############################################################################
# EXAMPLE 1: Latent class models for rating datasets data.si05
#############################################################################
data(data.si05)
#*** Model 1: one item with two categories
mod1 <- sirt::lc.2raters( data.si05$Ex1)
summary(mod1)
#*** Model 2: one item with five categories
mod2 <- sirt::lc.2raters( data.si05$Ex2)
summary(mod2)
#*** Model 3: one item with eight categories
mod3 <- sirt::lc.2raters( data.si05$Ex3)
summary(mod3)