View source: R/cdm.est.calc.accuracy.R
cdm.est.class.accuracy                 R Documentation
Description:

This function computes the classification accuracy and consistency statistics originally proposed by Cui, Gierl, and Chang (2012; see also Wang et al., 2015). Both statistics are computed using the estimators of Johnson and Sinharay (2018; see also Sinharay & Johnson, 2019) as well as by simulation-based estimation.
Usage:

cdm.est.class.accuracy(cdmobj, n.sims=0, version=2)
Arguments:

cdmobj
    Fitted model object, e.g. of class din or gdina (see the examples below).

n.sims
    Number of simulated persons used for the simulation-based estimation
    (see Pa_sim and Pc_sim in the Value section).

version
    Correct classification reliability statistics can be obtained using the
    default version=2 (see the call sketch below).
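A hedged sketch of typical calls; mod is a hypothetical stand-in for a fitted model object as created in the examples below:

## 'mod' denotes a fitted din or gdina object (placeholder name)
# CDM::cdm.est.class.accuracy( mod )                 # defaults: n.sims=0, version=2
# CDM::cdm.est.class.accuracy( mod, n.sims=2000 )    # with simulation-based estimation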
Details:

The item parameters and the probability distribution of the latent classes are used as the basis of the simulation. Accuracy and consistency are estimated for both the MLE and the MAP classification estimators. In addition, classification accuracy measures are available for the separate classification of each skill.
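The simulation-based logic can be illustrated with a small stand-alone sketch. The following code is not part of the package; it uses a hypothetical 3-item, 2-skill DINA-type model with made-up guessing/slipping parameters and a uniform latent class distribution, and estimates accuracy (agreement of the MAP classification with the true pattern) and consistency (agreement of MAP classifications across two parallel simulated test forms) by Monte Carlo simulation.

set.seed(987)
q.matrix <- matrix(c(1,0, 0,1, 1,1), ncol=2, byrow=TRUE)      # hypothetical Q-matrix
guess <- c(0.20, 0.20, 0.20)                                  # made-up item parameters
slip  <- c(0.10, 0.10, 0.10)
patterns   <- as.matrix(expand.grid(skill1=0:1, skill2=0:1))  # the four skill patterns
class.prob <- rep(0.25, 4)                                    # assumed class distribution

# DINA latent response: 1 if a pattern masters all skills required by an item
eta <- t(apply(patterns, 1, function(a) apply(q.matrix, 1, function(qj) prod(a^qj))))
# correct-response probabilities by latent class (rows) and item (columns)
pcorr <- sweep(eta, 2, 1 - slip, "*") + sweep(1 - eta, 2, guess, "*")

N  <- 5000
cl <- sample(seq_len(nrow(patterns)), N, replace=TRUE, prob=class.prob)  # true classes
sim_responses <- function() {
    P <- pcorr[cl, ]
    matrix(rbinom(length(P), 1, P), nrow=N)
}
classify_map <- function(X) {
    loglik  <- X %*% t(log(pcorr)) + (1 - X) %*% t(log(1 - pcorr))
    postlog <- loglik + matrix(log(class.prob), nrow(X), length(class.prob), byrow=TRUE)
    max.col(postlog)          # MAP pattern estimate for each person
}
X1 <- sim_responses(); X2 <- sim_responses()     # two parallel simulated test forms
map1 <- classify_map(X1); map2 <- classify_map(X2)
Pa_sim <- mean(map1 == cl)     # classification accuracy (agreement with the truth)
Pc_sim <- mean(map1 == map2)   # classification consistency (agreement across forms)
round(c(Pa_sim=Pa_sim, Pc_sim=Pc_sim), 3)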
Value:

A data frame containing classification reliability results for the MLE and MAP estimators of the whole latent class pattern and for the MAP estimators of the marginal skill classifications (Skill 1, ..., Skill K), with the following columns (a short access sketch follows the list):
Pa_est
    Classification accuracy (Cui et al., 2012) using the estimator of Johnson and Sinharay (2018)

Pa_sim
    Classification accuracy based on simulated data (only available if simulated data are generated; see the n.sims argument)

Pc
    Classification consistency (Cui et al., 2012) using the estimator of Johnson and Sinharay (2018)

Pc_sim
    Classification consistency based on simulated data (only available if simulated data are generated; see the n.sims argument)
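A minimal access sketch, assuming the fitted model mod1 from Example 1 below; the column names follow the description above:

acc <- CDM::cdm.est.class.accuracy( mod1, n.sims=1000 )
round( acc[, c("Pa_est", "Pc") ], 3 )           # analytic estimators
round( acc[, c("Pa_sim", "Pc_sim") ], 3 )       # simulation-based counterparts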
References:

Cui, Y., Gierl, M. J., & Chang, H.-H. (2012). Estimating classification consistency and accuracy for cognitive diagnostic assessment. Journal of Educational Measurement, 49, 19-38. doi:10.1111/j.1745-3984.2011.00158.x

Johnson, M. S., & Sinharay, S. (2018). Measures of agreement to assess attribute-level classification accuracy and consistency for cognitive diagnostic assessments. Journal of Educational Measurement, 55(4), 635-664. doi:10.1111/jedm.12196

Sinharay, S., & Johnson, M. S. (2019). Measures of agreement: Reliability, classification accuracy, and classification consistency. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models (pp. 359-377). Cham: Springer. doi:10.1007/978-3-030-05584-4_17

Wang, W., Song, L., Chen, P., Meng, Y., & Ding, S. (2015). Attribute-level and pattern-level classification consistency and accuracy indices for cognitive diagnostic assessment. Journal of Educational Measurement, 52(4), 457-476. doi:10.1111/jedm.12096
Examples:

## Not run:
#############################################################################
# EXAMPLE 1: DINO data example
#############################################################################
data(sim.dino, package="CDM")
data(sim.qmatrix, package="CDM")
#***
# Model 1: estimate DINO model with din
mod1 <- CDM::din( sim.dino, q.matrix=sim.qmatrix, rule="DINO")
# estimate classification reliability
CDM::cdm.est.class.accuracy( mod1, n.sims=5000)
#***
# Model 2: estimate DINO model with gdina
mod2 <- CDM::gdina( sim.dino, q.matrix=sim.qmatrix, rule="DINO")
# estimate classification reliability
CDM::cdm.est.class.accuracy( mod2 )
# compare guessing and slipping parameters of both models; in the GDINA
# parameterization, guess corresponds to the item intercept and slip to
# 1 minus the sum of intercept and main effect
m1 <- mod1$coef[, c("guess", "slip" ) ]
m2 <- mod2$coef
m2 <- cbind( m1, m2[ seq(1,18,2), "est" ],
    1 - m2[ seq(1,18,2), "est" ] - m2[ seq(2,18,2), "est" ] )
colnames(m2) <- c("g.M1", "s.M1", "g.M2", "s.M2" )
## > round( m2, 3 )
## g.M1 s.M1 g.M2 s.M2
## Item1 0.109 0.192 0.109 0.191
## Item2 0.073 0.234 0.072 0.234
## Item3 0.139 0.238 0.146 0.238
## Item4 0.124 0.065 0.124 0.009
## Item5 0.125 0.035 0.125 0.037
## Item6 0.214 0.523 0.214 0.529
## Item7 0.193 0.514 0.192 0.514
## Item8 0.246 0.100 0.246 0.100
## Item9 0.201 0.032 0.195 0.032
# Note that the slipping parameter s differs substantially for Item4
# between the DINO estimations in 'din' and 'gdina'
## End(Not run)