measr_extract | R Documentation

Extract components of a measrfit object

Description

Extract components of an estimated diagnostic classification model.

Usage
measr_extract(model, ...)
## S3 method for class 'measrdcm'
measr_extract(model, what, ...)
Arguments

model: The estimated model to extract information from.

...: Additional arguments passed to each extract method.

what: Character string. The information to be extracted. See details for available options.
Details

For diagnostic classification models, we can extract the following information (a short usage sketch follows this list):
item_param
: The estimated item parameters. This shows the name of the
parameter, the class of the parameter, and the estimated value.
strc_param
: The estimated structural parameters. This is the base rate
of membership in each class. This shows the class pattern and the
estimated proportion of respondents in each class.
prior
: The priors used when estimating the model.
classes
: The possible classes or profile patterns. This will show the
class label (i.e., the pattern of proficiency) and the attributes
included in each class.
class_prob
: The probability that each respondent belongs to each class (i.e., has the given pattern of proficiency).
attribute_prob
: The proficiency probability for each respondent and
attribute.
m2
: The M2 fit statistic. See fit_m2() for details. Model fit information must first be added to the model using add_fit().
rmsea
: The root mean square error of approximation (RMSEA) fit statistic and associated confidence interval. See fit_m2() for details. Model fit information must first be added to the model using add_fit().
srmsr
: The standardized root mean square residual (SRMSR) fit statistic. See fit_m2() for details. Model fit information must first be added to the model using add_fit().
ppmc_raw_score
: The observed and posterior predicted chi-square statistic for the raw score distribution. See fit_ppmc() for details. Model fit information must first be added to the model using add_fit().
ppmc_conditional_prob
: The observed and posterior predicted conditional probabilities of each class providing a correct response to each item. See fit_ppmc() for details. Model fit information must first be added to the model using add_fit().
ppmc_conditional_prob_flags
: A subset of the PPMC conditional probabilities where the ppp is outside the specified ppmc_interval.
ppmc_odds_ratio
: The observed and posterior predicted odds ratios of each item pair. See fit_ppmc() for details. Model fit information must first be added to the model using add_fit().
ppmc_odds_ratio_flags
: A subset of the PPMC odds ratios where the ppp is outside the specified ppmc_interval.
ppmc_pvalue
: The observed and posterior predicted proportion of correct responses to each item. See fit_ppmc() for details.
ppmc_pvalue_flags
: A subset of the PPMC proportion correct values where the ppp is outside the specified ppmc_interval.
loo
: The leave-one-out cross validation results. See loo::loo() for details. The information criterion must first be added to the model using add_criterion().
waic
: The widely applicable information criterion results. See loo::waic() for details. The information criterion must first be added to the model using add_criterion().
pattern_reliability
: The accuracy and consistency of the overall attribute profile classification, as described by Cui et al. (2012). Reliability information must first be added to the model using add_reliability().
classification_reliability
: The classification accuracy and consistency for each attribute, using the metrics described by Johnson & Sinharay (2018). Reliability information must first be added to the model using add_reliability().
probability_reliability
: Reliability estimates for the probability of proficiency on each attribute, as described by Johnson & Sinharay (2020). Reliability information must first be added to the model using add_reliability().
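As a rough sketch of the workflow implied above, the code below assumes an estimated measrdcm object with the hypothetical name fit and relies on the default arguments of add_fit(), add_criterion(), and add_reliability(); the posterior-based quantities (PPMC checks, loo, waic) additionally assume the model was estimated with MCMC.

# A minimal sketch; `fit` is a hypothetical estimated measrdcm object
measr_extract(fit, "item_param")    # estimated item parameters
measr_extract(fit, "class_prob")    # respondent class membership probabilities

# Model fit indices must be added before extraction (default settings assumed)
fit <- add_fit(fit)
measr_extract(fit, "m2")
measr_extract(fit, "ppmc_raw_score")

# Information criteria must be added before extraction
fit <- add_criterion(fit)
measr_extract(fit, "loo")

# Reliability estimates must be added before extraction
fit <- add_reliability(fit)
measr_extract(fit, "pattern_reliability")
measr_extract(fit, "classification_reliability")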
Value

The extracted information. The specific structure will vary depending on what is being extracted, but usually the returned object is a tibble with the requested information.
Methods (by class)

measr_extract(measrdcm)
: Extract components of an estimated diagnostic classification model.
References

Cui, Y., Gierl, M. J., & Chang, H.-H. (2012). Estimating classification consistency and accuracy for cognitive diagnostic assessment. Journal of Educational Measurement, 49(1), 19-38. https://doi.org/10.1111/j.1745-3984.2011.00158.x

Johnson, M. S., & Sinharay, S. (2018). Measures of agreement to assess attribute-level classification accuracy and consistency for cognitive diagnostic assessments. Journal of Educational Measurement, 55(4), 635-664. https://doi.org/10.1111/jedm.12196

Johnson, M. S., & Sinharay, S. (2020). The reliability of the posterior probability of skill attainment in diagnostic classification models. Journal of Educational and Behavioral Statistics, 45(1), 5-31. https://doi.org/10.3102/1076998619864550

Templin, J., & Bradshaw, L. (2013). Measuring the reliability of diagnostic classification model examinee estimates. Journal of Classification, 30(2), 251-275. https://doi.org/10.1007/s00357-013-9129-4
Examples

# Estimate an LCDM for the MDM data (rstan backend, optimization method)
rstn_mdm_lcdm <- measr_dcm(
  data = mdm_data, missing = NA, qmatrix = mdm_qmatrix,
  resp_id = "respondent", item_id = "item", type = "lcdm",
  method = "optim", seed = 63277, backend = "rstan"
)

# Extract the estimated structural parameters (class base rates)
measr_extract(rstn_mdm_lcdm, "strc_param")
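The same pattern applies to the other options listed in the details; for example, the following hypothetical continuation pulls the possible classes and the estimated item parameters from the fitted example object.

measr_extract(rstn_mdm_lcdm, "classes")      # possible attribute profiles
measr_extract(rstn_mdm_lcdm, "item_param")   # estimated item parameters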