measr_extract R Documentation

Extract model metadata, parameter estimates, and model evaluation results from a measrfit object.
measr_extract(model, what, ...)
model: The estimated model to extract information from.

what: Character string. The information to be extracted. See Details for available options.

...: Additional arguments passed to each extract method.
For diagnostic classification models, we can extract the following information:
prior: The priors used when estimating the model.
classes: The possible classes or profile patterns. This will show the
class label (i.e., the pattern of proficiency) and the attributes
included in each class.
item_param: The estimated item parameters. This shows the name of the
parameter, the class of the parameter, and the estimated value.
strc_param: The estimated structural parameters. This is the base rate
of membership in each class. This shows the class pattern, the attributes
present in each class, and the estimated proportion of respondents in
each class.
attribute_base_rate: The estimated base rate of attribute proficiency.
Calculated from the structural parameters of the classes where each
attribute is present.
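The calculation behind attribute_base_rate can be sketched in plain R with hypothetical numbers (measr performs this on the estimated structural parameters): the base rate of an attribute is the sum of the class proportions for the classes in which that attribute is present.

```r
# Hypothetical structural parameters for a two-attribute model;
# classes ordered [0,0], [1,0], [0,1], [1,1]
strc <- c(0.4, 0.2, 0.1, 0.3)
patterns <- rbind(c(0, 0), c(1, 0), c(0, 1), c(1, 1))

# Base rate of each attribute: sum of class proportions over the
# classes where the attribute is present (pattern entry is 1)
base_rate <- colSums(patterns * strc)
base_rate  # 0.5 0.4
```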
pi_matrix: The model-estimated probability that a respondent in the
given class provides a correct response to the item. The output shows
the item (rows), class (columns), and estimated p-values.
exp_pvalues: Model expected p-values for each item. This is
equivalent to the pi_matrix, but also includes an "overall" field,
which represents the expected p-value for each item (i.e., an average
of the class-specific p-values, weighted by the prevalence of each
class).
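The "overall" value is a prevalence-weighted average of the class-specific probabilities; a minimal sketch with made-up numbers:

```r
# Hypothetical two-class model: P(correct | class) for one item
pi_item <- c(0.2, 0.9)
# Class base rates (structural parameters)
strc <- c(0.6, 0.4)

# Overall expected p-value: average of the class-specific p-values,
# weighted by the prevalence of each class
overall <- sum(pi_item * strc)
overall  # 0.48
```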
class_prob: The probability that each respondent belongs to each class
(i.e., has the given pattern of proficiency).
attribute_prob: The proficiency probability for each respondent and
attribute.
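attribute_prob relates to class_prob in the same way the attribute base rates relate to the structural parameters: sum a respondent's class probabilities over the classes that include the attribute. A sketch with hypothetical values:

```r
# Hypothetical class probabilities for one respondent;
# classes ordered [0,0], [1,0], [0,1], [1,1]
class_prob <- c(0.1, 0.3, 0.2, 0.4)
patterns <- rbind(c(0, 0), c(1, 0), c(0, 1), c(1, 1))

# Proficiency probability for each attribute: sum of the
# probabilities of the classes where that attribute is 1
attr_prob <- colSums(patterns * class_prob)
attr_prob  # 0.7 0.6
```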
m2: The M2 fit statistic.
See fit_m2() for details.
rmsea: The root mean square error of approximation (RMSEA) fit
statistic and associated confidence interval. See fit_m2() for details.
srmsr: The standardized root mean square residual (SRMSR) fit
statistic. See fit_m2() for details.
ppmc_raw_score: The observed and posterior predicted chi-square
statistic for the raw score distribution. See fit_ppmc() for details.
ppmc_conditional_prob: The observed and posterior predicted conditional
probabilities of each class providing a correct response to each item.
See fit_ppmc() for details.
ppmc_conditional_prob_flags: A subset of the PPMC conditional
probabilities where the ppp is outside the specified ppmc_interval.
ppmc_odds_ratio: The observed and posterior predicted odds ratios of
each item pair. See fit_ppmc() for details.
ppmc_odds_ratio_flags: A subset of the PPMC odds ratios where the ppp
is outside the specified ppmc_interval.
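The observed odds ratio for an item pair comes from the 2x2 table of joint scored responses; a small sketch with made-up response vectors:

```r
# Hypothetical scored (0/1) responses to two items
x_i <- c(1, 1, 0, 0, 1, 0)
x_j <- c(1, 0, 0, 1, 1, 0)

# 2x2 table counts of joint responses
n11 <- sum(x_i == 1 & x_j == 1)
n00 <- sum(x_i == 0 & x_j == 0)
n10 <- sum(x_i == 1 & x_j == 0)
n01 <- sum(x_i == 0 & x_j == 1)

# Odds ratio: cross-product ratio of the 2x2 table
odds_ratio <- (n11 * n00) / (n10 * n01)
odds_ratio  # 4
```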
ppmc_pvalue: The observed and posterior predicted proportion of correct
responses to each item. See fit_ppmc() for details.
ppmc_pvalue_flags: A subset of the PPMC proportion correct values where
the ppp is outside the specified ppmc_interval.
loo: The leave-one-out cross validation results. See loo::loo() for
details.
waic: The widely applicable information criterion results. See
loo::waic() for details.
aic: The Akaike information criterion results. See aic() for details.
bic: The Bayesian information criterion results. See bic() for
details.
pattern_reliability: The accuracy and consistency of the overall
attribute profile classification, as described by Cui et al. (2012).
classification_reliability: The classification accuracy and consistency
for each attribute, using the metrics described by Johnson & Sinharay
(2018).
probability_reliability: Reliability estimates for the probability of
proficiency on each attribute, as described by Johnson & Sinharay (2020).
The extracted information. The specific structure will vary depending on what is being extracted, but usually the returned object is a tibble with the requested information.
Cui, Y., Gierl, M. J., & Chang, H.-H. (2012). Estimating classification consistency and accuracy for cognitive diagnostic assessment. Journal of Educational Measurement, 49(1), 19-38. https://doi.org/10.1111/j.1745-3984.2011.00158.x

Johnson, M. S., & Sinharay, S. (2018). Measures of agreement to assess attribute-level classification accuracy and consistency for cognitive diagnostic assessments. Journal of Educational Measurement, 55(4), 635-664. https://doi.org/10.1111/jedm.12196

Johnson, M. S., & Sinharay, S. (2020). The reliability of the posterior probability of skill attainment in diagnostic classification models. Journal of Educational and Behavioral Statistics, 45(1), 5-31. https://doi.org/10.3102/1076998619864550

Templin, J., & Bradshaw, L. (2013). Measuring the reliability of diagnostic classification model examinee estimates. Journal of Classification, 30(2), 251-275. https://doi.org/10.1007/s00357-013-9129-4
rstn_mdm_lcdm <- dcm_estimate(
dcm_specify(dcmdata::mdm_qmatrix, identifier = "item"),
data = dcmdata::mdm_data,
missing = NA,
identifier = "respondent",
method = "optim",
seed = 63277,
backend = "rstan"
)
measr_extract(rstn_mdm_lcdm, "strc_param")
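The same fitted object supports the other what options described above; for example (illustrative calls whose output depends on the estimated model):

```r
measr_extract(rstn_mdm_lcdm, "classes")             # profile patterns
measr_extract(rstn_mdm_lcdm, "item_param")          # item parameter estimates
measr_extract(rstn_mdm_lcdm, "attribute_base_rate") # attribute base rates
```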