A plotting tool for gauging the accuracy or consistency of classifications based on IRT ability estimates, where the classification accuracy and consistency indices are computed by means of the cacIRT package (Lathrop, 2015).
Classification accuracy (CA) refers to the rate of correct classification based on observed performances. Classification consistency (CC) refers to the rate at which examinees are placed in the same category across administrations of equivalent tests, regardless of whether the classification is accurate. Both values range from .5 (perfectly inaccurate / inconsistent) to 1 (perfect accuracy / consistency; Lathrop and Cheng, 2014).
The expected classification accuracy and consistency indices are those of the "Rudner approach" (Rudner, 2001, 2005). The expected classification accuracy index gives the probability of an examinee with a given actual score attaining an observed score within a given interval (e.g., above or below a given cutoff score) on the actual-score scale.
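As a rough illustration of the Rudner approach, the two indices for a single examinee can be sketched with a normal approximation around the ability estimate. This is illustrative base-R code only, not the cacIRT implementation that cacPlot actually calls; the function name rudner_indices is made up for this sketch.

```r
# Illustrative sketch (normal approximation) of Rudner-style expected
# classification accuracy and consistency for one examinee.
rudner_indices <- function(theta, se, cutoff = 0) {
  # Probability that an observed score lands above the cutoff.
  p_above <- 1 - pnorm((cutoff - theta) / se)
  # Accuracy: probability the observed score falls on the same side of
  # the cutoff as the examinee's actual ability.
  accuracy <- if (theta >= cutoff) p_above else 1 - p_above
  # Consistency: probability of the same classification on two
  # parallel administrations.
  consistency <- p_above^2 + (1 - p_above)^2
  c(accuracy = accuracy, consistency = consistency)
}

rudner_indices(theta = 1.5, se = 0.4)  # far from the cutoff: both near 1
rudner_indices(theta = 0.0, se = 0.4)  # at the cutoff: both at their minimum, .5
```

This also shows why both indices bottom out at .5 rather than 0: an examinee sitting exactly on the cutoff is classified by chance.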
x
  A data.frame or matrix with rows representing respondents and columns representing items, or a mirt model object of class "SingleGroupClass".

ablty
  A vector of ability estimates. Requires specification of the corresponding standard errors in ablty.se.

ablty.se
  A vector of standard errors corresponding to the estimates in ablty.

pop.dist
  Population distribution. Specifies the mean and standard deviation of the population distribution along which individual Theta values are plotted. Default is c(0, 1), reflecting the standard normal distribution.

cutoff
  A single value or a vector of values defining the cutoff(s) relative to which expected classification consistency or accuracy is calculated and illustrated. Default is 0.

stat
  A character value indicating whether to color-code observations with respect to their expected consistency or accuracy. Permissible values are "c", "cc", "consistency", or "Consistency" for expected classification consistency, and "a", "ca", "accuracy", or "Accuracy" for expected classification accuracy. Default is "ca".

ci
  Logical. Plot confidence intervals around each observation point? Default is TRUE.

cSEM
  Logical. Plot the conditional standard errors of the estimates? Default is FALSE.

xRng
  The range of the plotted x-axis. Default is c(-3, 3).

yRng
  The range of the plotted y-axis. Default is c(0, .5).

grid
  Logical. Include a grid in the plot? Default is TRUE.

lbls
  Logical. Include labels in the plot? Default is TRUE.

rel.wdth
  The relative widths of the main plot and the color-gradient legend. Default is c(7, 1).

mdl
  If a dataset was supplied as input, specifies which model to fit to the data by way of the mirt function.

ablty.est
  A character value specifying which estimator to use for estimating ability from data. Default is maximum likelihood ("ML"). See fscores for the available options.

colorblindFriendly
  Logical. Make the color gradient color-blind friendly? Default is FALSE.
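To make the interplay of ablty, ablty.se, ci, and cutoff concrete, the sketch below shows the 95% intervals that would presumably be drawn around each observation when ci = TRUE (this is an assumed rendering, not code from the package), and which observations sit too close to the default cutoff of 0 to be classified reliably.

```r
# Three hypothetical examinees with ability estimates and standard errors,
# as would be passed via the ablty and ablty.se arguments.
ablty    <- c(-1.2, 0.1, 0.9)
ablty.se <- c(0.45, 0.40, 0.50)

# 95% confidence bounds around each estimate.
lower <- ablty - 1.96 * ablty.se
upper <- ablty + 1.96 * ablty.se

# Observations whose interval straddles the default cutoff of 0 are the
# ones whose expected classification accuracy/consistency is low.
crosses_cutoff <- lower < 0 & upper > 0
crosses_cutoff
```

Only the first examinee (estimate -1.2, interval entirely below 0) is classified with high confidence here; the other two intervals straddle the cutoff.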
A graph plotting observations with color gradients indicating expected classification consistency and accuracy relative to a defined cutoff point.
R. Philip Chalmers (2012). mirt: A Multidimensional Item Response Theory Package for the R Environment. Journal of Statistical Software, 48(6), 1-29.
Quinn N. Lathrop (2015). cacIRT: Classification Accuracy and Consistency under Item Response Theory. R package version 1.4.
Lawrence M. Rudner (2001). Computing the Expected Proportions of Misclassified Examinees. Practical Assessment, Research & Evaluation, 7(14), 1-6.
Lawrence M. Rudner (2005). Expected Classification Accuracy. Practical Assessment, Research & Evaluation, 10(13), 1-5.
library(mirt)  # provides expand.table(), mirt(), fscores(), and the LSAT7 data

# Color-blind friendly plotting of classification consistency based on feeding
# cacPlot a dataset, where the cutoff-point is set to 0.5.
data <- expand.table(LSAT7[2:31, ])
cacPlot(data, stat = "c", cutoff = .5, colorblindFriendly = TRUE)
# Plotting of classification accuracy based on feeding cacPlot a mirt
# model-object along with plotting of the conditional standard error
# of measurement (cSEM), using the default cutoff of 0 (i.e., above
# or below average).
data <- expand.table(LSAT7[2:31, ])
LSAT7.mod <- mirt(data, model = 1, itemtype = "Rasch")
cacPlot(LSAT7.mod, stat = "a", cSEM = TRUE)
# Plotting of classification consistency based on feeding cacPlot a
# vector of raw ability estimates, and a vector with the standard
# errors corresponding to those ability estimates.
data <- expand.table(LSAT7[2:31, ])
LSAT7.mod <- mirt(data, model = 1, itemtype = "Rasch")
ability_estimates <- fscores(LSAT7.mod, method = "ML",
                             response.pattern = data)[, c("F1", "SE_F1")]
cacPlot(ablty = ability_estimates[, "F1"],
        ablty.se = ability_estimates[, "SE_F1"], stat = "c")
# Plotting of classification accuracy with several cutoff points
data <- expand.table(LSAT7[2:31, ])
LSAT7.mod <- mirt(data, model = 1, itemtype = "Rasch")
cacPlot(LSAT7.mod, stat = "a", cutoff = c(-.5, 0, .5))