reliability — R Documentation
For diagnostic classification models, reliability can be estimated at the pattern or attribute level. Pattern-level reliability represents the classification consistency and accuracy of placing students into an overall mastery profile. Alternatively, attributes can be scored individually; in that case, classification consistency and accuracy should be evaluated for each attribute rather than for the overall profile. This is referred to as maximum a posteriori (MAP) reliability. Finally, it may be desirable to report results as the probability of proficiency or mastery on each attribute instead of a binary proficient/not-proficient classification. In this case, the reliability of the posterior probabilities should be reported. This is the expected a posteriori (EAP) reliability.
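As a toy illustration of the attribute-level quantities (a simplified sketch, not measr's implementation), these summaries can be computed directly from a vector of posterior mastery probabilities. The `post` values below are simulated purely for illustration:

```r
# Simulated posterior probabilities of mastery for one attribute
# (invented data, for illustration only).
set.seed(63277)
post <- plogis(rnorm(200, mean = 0, sd = 1.5))

# MAP accuracy: expected agreement between the 0.5-threshold
# classification and the true latent status, given the posteriors.
map_accuracy <- mean(pmax(post, 1 - post))

# MAP consistency: probability that two independent classifications
# of the same respondent agree (a simplifying approximation).
map_consistency <- mean(post^2 + (1 - post)^2)

# EAP reliability: Var(EAP) / Var(alpha), using the law of total
# variance, Var(alpha) = Var(EAP) + E[EAP * (1 - EAP)].
eap_reliability <- var(post) / (var(post) + mean(post * (1 - post)))

round(c(map_accuracy, map_consistency, eap_reliability), 3)
```

In this sketch, consistency can never exceed accuracy, and the EAP reliability is the proportion of the latent attribute's variance recovered by the posterior means.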
Usage:

reliability(model, ...)

## S3 method for class 'measrdcm'
reliability(model, ..., threshold = 0.5, force = FALSE)
Arguments:

model: The estimated model to be evaluated.

...: Unused. For future extensions.

threshold: For attribute-level classifications, the posterior-probability cutoff above which a respondent is classified as proficient on an attribute. Default is 0.5.

force: If reliability information has already been added to the model object, should it be recalculated? Default is FALSE.
Details:

The pattern-level reliability statistics (pattern_reliability) are described in Cui et al. (2012). Attribute-level classification reliability statistics (map_reliability) are described in Johnson & Sinharay (2018). Reliability statistics for the posterior mean of the skill indicators (i.e., the mastery or proficiency probabilities; eap_reliability) are described in Johnson & Sinharay (2020).
Value:

For class measrdcm, a list with 3 elements:

- pattern_reliability: The pattern-level accuracy (p_a) and consistency (p_c) described by Cui et al. (2012).

- map_reliability: A list with 2 elements, accuracy and consistency, which include the attribute-level classification reliability statistics described by Johnson & Sinharay (2018).

- eap_reliability: The attribute-level posterior probability reliability statistics described by Johnson & Sinharay (2020).
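As a rough sketch of this structure (all names beyond those documented above, and every numeric value, are invented for illustration), the returned object can be pictured as a plain list:

```r
# Hypothetical shape of the value returned for a measrdcm model;
# the attribute name and all numbers are invented, not output
# from a fitted model.
rel <- list(
  pattern_reliability = c(p_a = 0.86, p_c = 0.79),
  map_reliability = list(
    accuracy    = data.frame(attribute = "att1", acc = 0.91),
    consistency = data.frame(attribute = "att1", consist = 0.88)
  ),
  eap_reliability = data.frame(attribute = "att1", rho = 0.74)
)

# Individual statistics can then be pulled out by name:
rel$pattern_reliability["p_c"]
rel$map_reliability$accuracy
```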
Methods (by class):

reliability(measrdcm): Reliability measures for diagnostic classification models.
References:

Cui, Y., Gierl, M. J., & Chang, H.-H. (2012). Estimating classification consistency and accuracy for cognitive diagnostic assessment. Journal of Educational Measurement, 49(1), 19-38. doi:10.1111/j.1745-3984.2011.00158.x

Johnson, M. S., & Sinharay, S. (2018). Measures of agreement to assess attribute-level classification accuracy and consistency for cognitive diagnostic assessments. Journal of Educational Measurement, 55(4), 635-664. doi:10.1111/jedm.12196

Johnson, M. S., & Sinharay, S. (2020). The reliability of the posterior probability of skill attainment in diagnostic classification models. Journal of Educational and Behavioral Statistics, 45(1), 5-31. doi:10.3102/1076998619864550
Examples:

rstn_mdm_lcdm <- measr_dcm(
data = mdm_data, missing = NA, qmatrix = mdm_qmatrix,
resp_id = "respondent", item_id = "item", type = "lcdm",
method = "optim", seed = 63277, backend = "rstan"
)
reliability(rstn_mdm_lcdm)