mirt

The mirt function fits a maximum likelihood (or maximum a posteriori) factor analysis model to any mixture of dichotomous and polytomous data under the item response theory paradigm, using Cai's (2010) Metropolis-Hastings Robbins-Monro (MH-RM) algorithm, the EM algorithm outlined by Bock and Aitkin (1981) with rectangular or quasi-Monte Carlo integration grids, or the stochastic EM algorithm (i.e., the first two stages of the MH-RM algorithm). Models containing 'explanatory' person or item level predictors can only be included by using the mixedmirt function, though latent regression models can be fit using the formula input in this function. Tests that form a two-tier or bifactor structure should be estimated with the bfactor function, which uses a dimension reduction EM algorithm for modeling item parcels. Multiple group analyses (useful for DIF and DTF testing) are also available using the multipleGroup function.
mirt(
data,
model = 1,
itemtype = NULL,
guess = 0,
upper = 1,
SE = FALSE,
covdata = NULL,
formula = NULL,
SE.type = "Oakes",
method = "EM",
optimizer = NULL,
dentype = "Gaussian",
pars = NULL,
constrain = NULL,
calcNull = FALSE,
draws = 5000,
survey.weights = NULL,
quadpts = NULL,
TOL = NULL,
gpcm_mats = list(),
grsm.block = NULL,
rsm.block = NULL,
monopoly.k = 1L,
key = NULL,
large = FALSE,
GenRandomPars = FALSE,
accelerate = "Ramsay",
verbose = TRUE,
solnp_args = list(),
nloptr_args = list(),
spline_args = list(),
control = list(),
technical = list(),
...
)
data: a matrix or data.frame of item response data, where each row corresponds to a person and each column to a single item; missing data should be coded as NA

model: a string to be passed to (or an object returned from) mirt.model, declaring how the IRT model is to be estimated. A single numeric value (the default, 1) indicates the number of latent factors to extract for an exploratory model

itemtype: type of items to be modeled, declared as a vector for each item or a single value which will be recycled for each item. When NULL (the default), dichotomous items are modeled with the '2PL' structure and polytomous items with the 'graded' structure. Supported types include those described in the model sections below (e.g., 'Rasch', '2PL', '3PL', '4PL', '5PL', 'CLL', 'graded', 'grsm', 'gpcm', 'gpcmIRT', 'rsm', 'nominal', 'ideal', 'ggum', 'spline', 'monopoly', 'ULL', 'Tutz', and nested logit variants such as '2PLNRM'). Additionally, user-defined item classes can be defined using the createItem function
guess: fixed pseudo-guessing parameters. Can be entered as a single value to assign a global guessing parameter, or as a numeric vector corresponding to each item

upper: fixed upper bound parameters for the 4-PL model. Can be entered as a single value to assign a global upper bound parameter, or as a numeric vector corresponding to each item

SE: logical; estimate the standard errors by computing the parameter information matrix? See SE.type for the estimation methods available
covdata: a data.frame of covariates used for latent regression models (see the formula input)

formula: an R formula (or list of formulas) indicating how the latent traits are to be regressed on the external covariates in covdata

SE.type: type of estimation method to use for calculating the parameter information matrix, which is used for computing standard errors and Wald tests. The default is 'Oakes', a numerical approximation of the observed information matrix based on Oakes' identity (Chalmers, 2018)
method: a character object specifying the estimation algorithm to be used. The default is 'EM', corresponding to the EM algorithm of Bock and Aitkin (1981), which is generally effective with 1-3 factors. Other options include 'QMCEM' for quasi-Monte Carlo EM integration in higher dimensions, 'MHRM' for the Metropolis-Hastings Robbins-Monro algorithm (Cai, 2010), and 'SEM' for the stochastic EM algorithm (i.e., the first two stages of MH-RM)

optimizer: a character indicating which numerical optimizer to use in the M-step. By default, the EM algorithm uses the 'BFGS' optimizer (with a box-constrained variant when parameter bounds are present). Other options include the Newton-Raphson optimizer ('NR'), which is often faster (see the examples), as well as estimation subroutines from the Rsolnp and nloptr packages ('solnp' and 'nloptr'), which support non-linear constraints

dentype: type of density form to use for the latent trait parameters. Current options include 'Gaussian' (the default), 'empiricalhist' for an empirical histogram approximation of the latent density, and 'Davidian-#' for Davidian curves with # smoothing parameters (Woods & Lin, 2009)
pars: a data.frame with the structure of how the starting values, parameter numbers, estimation logical values, etc., are defined. The user may observe how the model defines these values by passing pars = 'values', and this object can be modified and passed back to pars to change the estimation behavior
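For instance, a minimal sketch of this workflow using the LSAT7 data bundled with the package:

# view the internal parameter table, modify it, and pass it back
dat <- expand.table(LSAT7)
sv <- mirt(dat, 1, pars = 'values')   # data.frame of parameter definitions
sv$value[sv$name == 'a1'] <- 1.5      # e.g., new starting values for all slopes
mod <- mirt(dat, 1, pars = sv)        # estimate using the modified table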
constrain: a list of user-declared equality constraints, where each element is a vector of parameter numbers to be constrained equal. To see how the parameters are numbered, use pars = 'values' (see the examples)

calcNull: logical; calculate the null model for additional fit statistics (e.g., TLI)? Only applicable if the data contain no NAs and are not overly sparse

draws: the number of Monte Carlo draws used to estimate the log-likelihood for the MH-RM algorithm. Default is 5000
survey.weights: an optional numeric vector of survey weights to apply to each case in the data (EM estimation only). If not specified, all cases are weighted equally (the standard IRT approach). The sum of the survey.weights must equal the total sample size for proper weighting (see the survey weights example below)

quadpts: number of quadrature points per dimension (must be larger than 2). By default, the number of quadrature points decreases as the number of dimensions increases, since dense rectangular grids quickly become computationally burdensome

TOL: convergence threshold for the EM or MH-RM algorithms; the defaults are .0001 and .001, respectively
gpcm_mats: a list of matrices specifying how the scoring coefficients in the (generalized) partial credit model should be constructed. If omitted, the standard gpcm format will be used (i.e., fixed integer scoring coefficients 0 to K - 1 for each trait)
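As a hedged sketch (assuming, as in the standard format, one matrix per item with rows equal to the number of categories and columns equal to the number of factors), unequal-spaced custom scoring coefficients might be supplied as:

# hypothetical custom scoring for a unidimensional 10-item test with
# 4 response categories per item; the default would be matrix(0:3, ncol = 1)
mats <- lapply(1:10, function(i) matrix(c(0, 1, 2, 4), ncol = 1))
# custommod <- mirt(data, 1, itemtype = 'gpcm', gpcm_mats = mats)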
grsm.block: an optional numeric vector indicating where the blocking should occur when using the grsm; NA represents items that do not belong to the grsm block (other items that may be estimated in the test data). For example, to specify two blocks of 3 with a 2PL item for the last item: grsm.block = c(rep(1, 3), rep(2, 3), NA)

rsm.block: same as grsm.block, but for 'rsm' blocks

monopoly.k: a vector of values (or a single value to be repeated for each item) indicating the degree of the monotone polynomial fitted, where the monotone polynomial corresponds to monopoly.k * 2 + 1 (e.g., the default of 1 corresponds to a cubic polynomial)

key: a numeric vector of the response scoring key. Required when using nested logit item types, and must be the same length as the number of items used. Items that are not nested logit will ignore this vector, so use NA (or any other placeholder) in those locations
large: a logical indicating whether the internal collapsed frequency tables should be returned rather than used directly. Alternatively, if the collapsed table of frequencies is desired for the purpose of saving computations (i.e., only computing the collapsed frequencies for the data one time), then the character value 'return' can be passed with this argument, and the returned object may be supplied back through large in subsequent estimations

GenRandomPars: logical; generate random starting values prior to optimization instead of using the fixed internal starting values?

accelerate: a character vector indicating the type of acceleration to use for the EM cycles. The default is 'Ramsay' (Ramsay, 1975); 'squarem' (Varadhan & Roland, 2008) and 'none' are also available
verbose: logical; print the observed- (EM) or complete-data (MH-RM) log-likelihood after each iteration cycle? Default is TRUE

solnp_args: a list of arguments to be passed to the solnp optimizer from the Rsolnp package (e.g., equality and inequality constraint inputs; see the non-linear constraint example below)

nloptr_args: a list of arguments to be passed to the nloptr optimizer from the nloptr package

spline_args: a named list of lists containing information to be passed to the spline basis constructors (e.g., arguments to the B-spline and natural spline bases) for items estimated with the 'spline' itemtype. This input changes the basis construction from its defaults
control: a list of control parameters passed to the respective optimizers (e.g., optim, nlminb)

technical: a list containing lower-level technical parameters for estimation, such as the maximum number of cycles, custom Theta grids, and custom prior density functions (see the Gauss-Hermite quadrature example below for customTheta and customPriorFun)

...: additional arguments to be passed
The function returns an object of class SingleGroupClass (see SingleGroupClass-class).
Specification of the confirmatory item factor analysis model follows many of
the rules in the structural equation modeling framework for confirmatory factor analysis. The
variances of the latent factors are automatically fixed to 1 to help
facilitate model identification. All parameters may be fixed to constant
values or set equal to other parameters using the appropriate declarations.
Confirmatory models may also contain 'explanatory' person or item level predictors, though including predictors is currently limited to the mixedmirt function.
When a single number greater than 1 is specified as the model input to mirt, an exploratory IRT model is estimated. Rotation and target matrix options are available if they are passed to generic functions such as summary-method and fscores. Factor means and variances are fixed to ensure proper identification.
If the model is an exploratory item factor analysis, estimation begins by computing a matrix of quasi-polychoric correlations. A factor analysis with nfact factors is then extracted, and item parameters are estimated by a_{ij} = f_{ij}/u_j, where f_{ij} is the factor loading for the jth item on the ith factor and u_j is the square root of the factor uniqueness, \sqrt{1 - h_j^2}. The initial intercept parameters are determined by calculating the inverse normal of the item facility (i.e., item easiness), q_j, to obtain d_j = q_j / u_j. A similar implementation is used to obtain initial values for polytomous items.
Internally the g and u parameters are transformed using a logit transformation (log(x/(1-x))), which can be reversed by using 1 / (1 + exp(-x)) following convergence. This also applies when computing confidence intervals for these parameters, and is done automatically if coef(mod, rawug = FALSE) is used. As such, when applying prior distributions to these parameters it is recommended to use a prior that ranges from negative infinity to positive infinity, such as the normally distributed prior via the 'norm' input (see mirt.model).
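For example, the back-transformation can be verified by hand:

# reverse the internal logit transformation for a guessing parameter
g_logit <- -1.5
1 / (1 + exp(-g_logit))    # approximately .182 on the probability metric
plogis(g_logit)            # equivalent base-R shortcut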
Unrestricted full-information factor analysis is known to have problems with convergence, and some items may need to be constrained or removed entirely to allow for an acceptable solution. As a general rule dichotomous items with means greater than .95, or items that are only .05 greater than the guessing parameter, should be considered for removal from the analysis or treated with prior parameter distributions. The same type of reasoning is applicable when including upper bound parameters as well. For polytomous items, if categories are rarely endorsed then this will cause similar issues. Also, increasing the number of quadrature points per dimension, or using the quasi-Monte Carlo integration method, may help to stabilize the estimation process in higher dimensions. Finally, solutions that are not well defined also will have difficulty converging, and can indicate that the model has been misspecified (e.g., extracting too many dimensions).
For the MH-RM algorithm, when the number of iterations grows very high (e.g., greater than 1500), or when Max Change = .2500 values are printed repeatedly to the console (indicating that the parameters are being constrained because they are naturally moving in steps greater than 0.25), the model may either be ill defined or have a very flat likelihood surface, and genuine maximum-likelihood parameter estimates may be difficult to find. Standard errors are computed following model convergence by passing SE = TRUE, which performs an additional MH-RM stage treating the maximum-likelihood estimates as fixed points.
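For example, a minimal MH-RM fit with the additional standard error stage (using the LSAT7 data from the examples below):

dat <- expand.table(LSAT7)
mhmod <- mirt(dat, 1, method = 'MHRM', SE = TRUE)  # extra MH-RM stage for SEs
coef(mhmod)                                        # coefficients with confidence intervals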
Additional functions are available in the package which can be useful pre- and post-estimation. These are:

mirt.model: Define the IRT model specification using special syntax. Useful for defining between- and within-group parameter constraints, prior parameter distributions, and specifying the slope coefficients for each factor

coef-method: Extract raw coefficients from the model, along with their standard errors and confidence intervals

summary-method: Extract standardized loadings from the model. Accepts a rotate argument for exploratory item response models

anova-method: Compare nested models using likelihood ratio statistics as well as information criteria such as the AIC and BIC

residuals-method: Compute pairwise residuals between each item using methods such as the LD statistic (Chen & Thissen, 1997), as well as response pattern residuals

plot-method: Plot various types of test-level plots, including the test score and information functions, and more

itemplot: Plot various types of item-level plots, including the score, standard error, and information functions, and more

createItem: Create a customized itemtype that does not currently exist in the package

imputeMissing: Impute missing data given some computed Theta matrix

fscores: Compute predicted scores for the latent traits using estimation methods such as EAP, MAP, ML, WLE, and EAPsum

wald: Compute Wald statistics following the convergence of a model with a suitable information matrix

M2: Limited-information goodness-of-fit test statistic to determine how well the model fits the data

itemfit and personfit: Goodness-of-fit statistics at the item and person levels, such as the S-X2, infit, outfit, and more

boot.mirt: Compute estimated parameter confidence intervals via bootstrap methods

mirtCluster: Define a cluster for the package functions to capitalize on multi-core architectures when possible. Helps to decrease estimation times for tasks that can be run in parallel
The parameter labels use the following convention, here using two factors and K as the total number of categories (using k for specific category instances).
Rasch: Only one intercept is estimated, and the latent variance of \theta is freely estimated. If the data have more than two categories then a partial credit model is used instead (see 'gpcm' below).

P(x = 1|\theta, d) = \frac{1}{1 + exp(-(\theta + d))}
2-4PL: Depending on the model, u may be equal to 1 and g may be equal to 0.

P(x = 1|\theta, \psi) = g + \frac{(u - g)}{1 + exp(-(a_1 * \theta_1 + a_2 * \theta_2 + d))}
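As an illustrative sketch, this trace line translates directly into R:

# 4PL trace line; reduces to the 2PL when g = 0 and u = 1
trace_4PL <- function(theta1, theta2, a1, a2, d, g = 0, u = 1)
    g + (u - g) / (1 + exp(-(a1 * theta1 + a2 * theta2 + d)))
trace_4PL(theta1 = 1, theta2 = 0, a1 = 1.2, a2 = 0, d = -0.5)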
5PL: Currently restricted to unidimensional models.

P(x = 1|\theta, \psi) = g + \frac{(u - g)}{(1 + exp(-(a_1 * \theta_1 + d)))^S}

where S allows for asymmetry in the response function and is constrained through a transformation to be greater than 0 (i.e., log(S) is estimated rather than S).
CLL: Complementary log-log model (see Shim, Bonifay, & Wiedermann, 2022). Currently restricted to unidimensional dichotomous data.

P(x = 1|\theta, b) = 1 - exp(-exp(\theta - b))
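A direct translation of this response function (trace_CLL is an illustrative name):

# complementary log-log trace line
trace_CLL <- function(theta, b) 1 - exp(-exp(theta - b))
curve(trace_CLL(x, b = 0), -4, 4, ylab = 'P(x = 1)')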
graded: The graded model consists of sequential 2PL models,

P(x = k | \theta, \psi) = P(x \ge k | \theta, \phi) - P(x \ge k + 1 | \theta, \phi)

Note that P(x \ge 1 | \theta, \phi) = 1 while P(x \ge K + 1 | \theta, \phi) = 0.
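Equivalently, the category probabilities are successive differences of the K - 1 cumulative 2PL curves (illustrative sketch):

# graded model probabilities from the cumulative curves; d holds K - 1
# strictly decreasing intercepts
trace_graded <- function(theta, a, d) {
    Pstar <- c(1, 1 / (1 + exp(-(a * theta + d))), 0)  # P(x >= k), k = 1, ..., K + 1
    -diff(Pstar)                                       # P(x = k), k = 1, ..., K
}
trace_graded(theta = 0.5, a = 1.5, d = c(2, 0, -1.5))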
ULL: The unipolar log-logistic model (ULL; Lucke, 2015) is defined the same as the graded response model, except that

P(x \le k | \theta, \psi) = \frac{\lambda_k\theta^\eta}{1 + \lambda_k\theta^\eta}

Internally the \lambda parameters are exponentiated to keep them positive, so the reported estimates should be interpreted in log units.
grsm: A more constrained version of the graded model where the graded spacing is equal across item blocks and only adjusted by a single 'difficulty' parameter (c), while the latent variance of \theta is freely estimated (see Muraki, 1990 for this exact form). Restricted to unidimensional models only.
gpcm and nominal: For the gpcm the d values are treated as fixed and ordered values from 0:(K-1) (in the nominal model d_0 is also set to 0). Additionally, for identification in the nominal model, ak_0 = 0 and ak_{(K-1)} = (K - 1).

P(x = k | \theta, \psi) = \frac{exp(ak_{k-1} * (a_1 * \theta_1 + a_2 * \theta_2) + d_{k-1})}{\sum_{j=1}^K exp(ak_{j-1} * (a_1 * \theta_1 + a_2 * \theta_2) + d_{j-1})}
For the partial credit model (when itemtype = 'Rasch'; unidimensional only) the above model is further constrained so that ak = (0, 1, \ldots, K-1), a_1 = 1, and the latent variance of \theta_1 is freely estimated. Alternatively, the partial credit model can be obtained by constraining all the slope parameters in the gpcms to be equal. More specific scoring functions may be included by passing a suitable list of matrices to the gpcm_mats input argument.
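A sketch of the unidimensional nominal category probabilities under this parameterization (illustrative values respecting the identification constraints):

# nominal response model probabilities for a single K-category item
trace_nominal <- function(theta, a1, ak, d) {   # ak and d are length-K vectors
    num <- exp(ak * (a1 * theta) + d)
    num / sum(num)
}
trace_nominal(theta = 1, a1 = 1.2, ak = c(0, 1.3, 1.8, 3), d = c(0, 0.5, 1, 0.8))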
In the nominal model this parameterization helps to identify the empirical ordering of the categories by inspecting the ak values. Larger values indicate that the item category is more positively related to the latent trait(s) being measured. For instance, if an item were truly ordinal (such as a Likert scale) and had 4 response categories, we would expect to see ak_0 < ak_1 < ak_2 < ak_3 following estimation. If, on the other hand, ak_0 > ak_1, then it would appear that the second category is less related to the trait than the first, and therefore the second category should be understood as the 'lowest score'.
NOTE: The nominal model can become numerically unstable if poor choices for the high and low values are chosen, resulting in ak values greater than abs(10). It is recommended to choose high and low anchors that cause the estimated parameters to fall between 0 and K - 1, either by theoretical means or by re-estimating the model with better values following convergence.
gpcmIRT and rsm: The gpcmIRT model is the classical generalized partial credit model for unidimensional response data. It obtains the same fit as the gpcm presented above; however, the parameterization allows for the Rasch/generalized rating scale model as a special case.
E.g., for a K = 4 category response model,
P(x = 0 | \theta, \psi) = exp(0) / G
P(x = 1 | \theta, \psi) = exp(a(\theta - b1) + c) / G
P(x = 2 | \theta, \psi) = exp(a(2\theta - b1 - b2) + 2c) / G
P(x = 3 | \theta, \psi) = exp(a(3\theta - b1 - b2 - b3) + 3c) / G
where
G = exp(0) + exp(a(\theta - b1) + c) + exp(a(2\theta - b1 - b2) + 2c) +
exp(a(3\theta - b1 - b2 - b3) + 3c)
Here a is the slope parameter, the b parameters are the threshold values for each adjacent category, and c is the so-called difficulty parameter used when a rating scale model is fitted (otherwise, c = 0 and it drops out of the computations).
The gpcmIRT can be constrained to the partial credit IRT model by either constraining all the slopes to be equal, or setting the slopes to 1 and freeing the latent variance parameter.
Finally, the rsm is a more constrained version of the (generalized) partial credit model where the spacing is equal across item blocks and only adjusted by a single 'difficulty' parameter (c). Note that this is analogous to the relationship between the graded model and the grsm (with an additional constraint regarding the fixed discrimination parameters).
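A direct implementation of the K = 4 gpcmIRT formulas above (illustrative; c_rs denotes the rating scale difficulty parameter, 0 otherwise):

# gpcmIRT category probabilities for a K = 4 category item
trace_gpcmIRT <- function(theta, a, b, c_rs = 0) {
    z <- c(0,
           a * (theta - b[1]) + c_rs,
           a * (2 * theta - b[1] - b[2]) + 2 * c_rs,
           a * (3 * theta - b[1] - b[2] - b[3]) + 3 * c_rs)
    exp(z) / sum(exp(z))   # divide by G, the sum of the numerators
}
trace_gpcmIRT(theta = 0, a = 1, b = c(-1, 0, 1))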
sequential: The multidimensional sequential response model has the form

P(x = k | \theta, \psi) = \prod (1 - F(a_1 \theta_1 + a_2 \theta_2 + d_{sk})) F(a_1 \theta_1 + a_2 \theta_2 + d_{jk})

where F(\cdot) is the cumulative logistic function. The Tutz variant of this model (Tutz, 1990; via itemtype = 'Tutz') assumes that the slope terms are all equal to 1 and the latent variance terms are estimated (i.e., it is a Rasch variant).
ideal: The ideal point model has the form, with the upper bound constraint on d set to 0:

P(x = 1 | \theta, \psi) = exp(-0.5 * (a_1 * \theta_1 + a_2 * \theta_2 + d)^2)
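In R (illustrative):

# ideal point trace line; the response probability peaks where
# a1 * theta1 + a2 * theta2 + d = 0
trace_ideal <- function(theta1, theta2, a1, a2, d)
    exp(-0.5 * (a1 * theta1 + a2 * theta2 + d)^2)
curve(trace_ideal(x, 0, a1 = 1, a2 = 0, d = 0), -4, 4, ylab = 'P(x = 1)')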
Partially compensatory: Partially compensatory models consist of the product of 2PL probability curves,

P(x = 1 | \theta, \psi) = g + (1 - g) \left(\frac{1}{1 + exp(-(a_1 * \theta_1 + d_1))} * \frac{1}{1 + exp(-(a_2 * \theta_2 + d_2))}\right)

Note that constraining the slopes to be equal across items reduces the model to Embretson's (a.k.a. Whitely's, 1980) multicomponent model.
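As a sketch, the two-dimensional partially compensatory trace line:

# partially compensatory model: the product of 2PL components means a low
# standing on either trait suppresses the response probability
trace_partcomp <- function(theta1, theta2, a1, a2, d1, d2, g = 0)
    g + (1 - g) * (1 / (1 + exp(-(a1 * theta1 + d1)))) *
                  (1 / (1 + exp(-(a2 * theta2 + d2))))
trace_partcomp(theta1 = 2, theta2 = -2, a1 = 1, a2 = 1, d1 = 0, d2 = 0)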
Nested logit: Nested logistic curves for modeling distractor items (e.g., itemtype = '2PLNRM'); requires a scoring key. The model is broken into two components for the probability of endorsement. For successful endorsement the probability trace is the 1-4PL model, while for unsuccessful endorsement:

P(x = 0 | \theta, \psi) = (1 - P_{1-4PL}(x = 1 | \theta, \psi)) * P_{nominal}(x = k | \theta, \psi)

which is the product of the complement of the dichotomous trace line with the nominal response model. In the nominal model, the slope parameters defined above are constrained to be 1's, while the last value of the ak is freely estimated.
ggum: The (multidimensional) generalized graded unfolding model is a class of ideal point models useful for ordinal response data. The form is

P(z = k|\theta, \psi) = \frac{exp\left[\left(z\sqrt{\sum_{d=1}^{D} a_{id}^{2}(\theta_{jd}-b_{id})^{2}}\right)+\sum_{k=0}^{z}\psi_{ik}\right] + exp\left[\left((M-z)\sqrt{\sum_{d=1}^{D} a_{id}^{2}(\theta_{jd}-b_{id})^{2}}\right)+\sum_{k=0}^{z}\psi_{ik}\right]}{\sum_{w=0}^{C}\left(exp\left[\left(w\sqrt{\sum_{d=1}^{D} a_{id}^{2}(\theta_{jd}-b_{id})^{2}}\right)+\sum_{k=0}^{w}\psi_{ik}\right] + exp\left[\left((M-w)\sqrt{\sum_{d=1}^{D} a_{id}^{2}(\theta_{jd}-b_{id})^{2}}\right)+\sum_{k=0}^{w}\psi_{ik}\right]\right)}
where \theta_{jd} is the location of the jth individual on the dth dimension, b_{id} is the difficulty location of the ith item on the dth dimension, and a_{id} is the discrimination of the ith item on the dth dimension (where the discrimination values are constrained to be positive). \psi_{ik} is the kth subjective response category threshold for the ith item, assumed to be symmetric about the item and constant across dimensions, where \psi_{ik} = \sum_{d=1}^D a_{id} t_{ik}. Finally, z = 1, 2, \ldots, C (where C is the number of categories minus 1) and M = 2C + 1.
spline: Spline response models attempt to model the response curves using non-linear and potentially non-monotonic patterns. The form is

P(x = 1|\theta, \eta) = \frac{1}{1 + exp(-(\eta_1 * X_1 + \eta_2 * X_2 + \cdots + \eta_n * X_n))}

where the X_n are from the spline design matrix X organized from the grid of \theta values. B-splines with a natural or polynomial basis are supported, and the intercept input is set to TRUE by default.
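A hedged usage sketch, fitting the 'spline' itemtype to the last item only (default basis settings assumed):

dat <- expand.table(LSAT7)
spmod <- mirt(dat, 1, itemtype = c(rep('2PL', 4), 'spline'))
plot(spmod, type = 'trace')   # inspect the potentially non-monotonic curve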
monopoly: Monotone polynomial model for polytomous response data of the form

P(x = k | \theta, \psi) = \frac{exp(\sum_{c=1}^k (m^*(\psi) + \xi_{c-1}))}{\sum_{r=1}^C exp(\sum_{c=1}^r (m^*(\psi) + \xi_{c-1}))}

where m^*(\psi) is the monotone polynomial function without the intercept.
To access examples, vignettes, and exercise files that have been generated with knitr please visit https://github.com/philchalmers/mirt/wiki.
Phil Chalmers rphilip.chalmers@gmail.com
Andrich, D. (1978). A rating scale formulation for ordered response categories. Psychometrika, 43, 561-573.
Bock, R. D., & Aitkin, M. (1981). Marginal maximum likelihood estimation of item parameters: Application of an EM algorithm. Psychometrika, 46(4), 443-459.
Bock, R. D., Gibbons, R., & Muraki, E. (1988). Full-Information Item Factor Analysis. Applied Psychological Measurement, 12(3), 261-280.
Bock, R. D. & Lieberman, M. (1970). Fitting a response model for n dichotomously scored items. Psychometrika, 35, 179-197.
Cai, L. (2010a). High-Dimensional exploratory item factor analysis by a Metropolis-Hastings Robbins-Monro algorithm. Psychometrika, 75, 33-57.
Cai, L. (2010b). Metropolis-Hastings Robbins-Monro algorithm for confirmatory item factor analysis. Journal of Educational and Behavioral Statistics, 35, 307-335.
Chalmers, R. P. (2012). mirt: A Multidimensional Item Response Theory Package for the R Environment. Journal of Statistical Software, 48(6), 1-29. doi:10.18637/jss.v048.i06
Chalmers, R. P. (2015). Extended Mixed-Effects Item Response Models with the MH-RM Algorithm. Journal of Educational Measurement, 52, 200-222. doi:10.1111/jedm.12072
Chalmers, R. P. (2018). Numerical Approximation of the Observed Information Matrix with Oakes' Identity. British Journal of Mathematical and Statistical Psychology. doi:10.1111/bmsp.12127
Chalmers, R. P. & Flora, D. (2014). Maximum-likelihood Estimation of Noncompensatory IRT Models with the MH-RM Algorithm. Applied Psychological Measurement, 38, 339-358. doi:10.1177/0146621614520958
Chen, W. H. & Thissen, D. (1997). Local dependence indices for item pairs using item response theory. Journal of Educational and Behavioral Statistics, 22, 265-289.
Falk, C. F. & Cai, L. (2016). Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis. Psychometrika, 81, 434-460.
Lord, F. M. & Novick, M. R. (1968). Statistical theory of mental test scores. Addison-Wesley.
Lucke, J. F. (2015). Unipolar item response models. In S. P. Reise & D. A. Revicki (Eds.), Handbook of item response theory modeling: Applications to typical performance assessment (pp. 272-284). New York, NY: Routledge/Taylor & Francis Group.
Maydeu-Olivares, A., Hernandez, A. & McDonald, R. P. (2006). A Multidimensional Ideal Point Item Response Theory Model for Binary Data. Multivariate Behavioral Research, 41, 445-471.
Muraki, E. (1990). Fitting a polytomous item response model to Likert-type data. Applied Psychological Measurement, 14, 59-71.
Muraki, E. (1992). A generalized partial credit model: Application of an EM algorithm. Applied Psychological Measurement, 16, 159-176.
Muraki, E. & Carlson, E. B. (1995). Full-information factor analysis for polytomous item responses. Applied Psychological Measurement, 19, 73-90.
Ramsay, J. O. (1975). Solving implicit equations in psychometric data analysis. Psychometrika, 40, 337-360.
Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Danish Institute for Educational Research.
Roberts, J. S., Donoghue, J. R., & Laughlin, J. E. (2000). A General Item Response Theory Model for Unfolding Unidimensional Polytomous Responses. Applied Psychological Measurement, 24, 3-32.
Samejima, F. (1969). Estimation of latent ability using a response pattern of graded scores. Psychometrika Monographs, 34.
Shim, H., Bonifay, W., & Wiedermann, W. (2022). Parsimonious asymmetric item response theory modeling with the complementary log-log link. Behavior Research Methods, 55, 200-219.
Suh, Y. & Bolt, D. (2010). Nested logit models for multiple-choice item response data. Psychometrika, 75, 454-473.
Sympson, J. B. (1977). A model for testing with multidimensional items. Proceedings of the 1977 Computerized Adaptive Testing Conference.
Thissen, D. (1982). Marginal maximum likelihood estimation for the one-parameter logistic model. Psychometrika, 47, 175-186.
Tutz, G. (1990). Sequential item response models with ordered response. British Journal of Mathematical and Statistical Psychology, 43, 39-55.
Varadhan, R. & Roland, C. (2008). Simple and Globally Convergent Methods for Accelerating the Convergence of Any EM Algorithm. Scandinavian Journal of Statistics, 35, 335-353.
Whitely, S. E. (1980). Multicomponent latent trait models for ability tests. Psychometrika, 45(4), 470-494.
Wood, R., Wilson, D. T., Gibbons, R. D., Schilling, S. G., Muraki, E., & Bock, R. D. (2003). TESTFACT 4 for Windows: Test Scoring, Item Statistics, and Full-information Item Factor Analysis [Computer software]. Lincolnwood, IL: Scientific Software International.
Woods, C. M. & Lin, N. (2009). Item Response Theory With Estimation of the Latent Density Using Davidian Curves. Applied Psychological Measurement, 33(2), 102-117.
See also: bfactor, multipleGroup, mixedmirt, expand.table, key2binary, mod2values, extract.item, iteminfo, testinfo, probtrace, simdata, averageMI, fixef, extract.mirt, itemstats
# load LSAT section 7 data and compute 1 and 2 factor models
data <- expand.table(LSAT7)
itemstats(data)
(mod1 <- mirt(data, 1))
coef(mod1)
summary(mod1)
plot(mod1)
plot(mod1, type = 'trace')
## Not run:
(mod2 <- mirt(data, 1, SE = TRUE)) #standard errors via the Oakes method
(mod2 <- mirt(data, 1, SE = TRUE, SE.type = 'SEM')) #standard errors with SEM method
coef(mod2)
(mod3 <- mirt(data, 1, SE = TRUE, SE.type = 'Richardson')) #with numerical Richardson method
residuals(mod1)
plot(mod1) #test score function
plot(mod1, type = 'trace') #trace lines
plot(mod2, type = 'info') #test information
plot(mod2, MI=200) #expected total score with 95% confidence intervals
# estimated 3PL model for item 5 only
(mod1.3PL <- mirt(data, 1, itemtype = c('2PL', '2PL', '2PL', '2PL', '3PL')))
coef(mod1.3PL)
# internally g and u pars are stored as logits, so usually a good idea to include normal prior
# to help stabilize the parameters. For a value around .182 use a mean
# of -1.5 (since 1 / (1 + exp(-(-1.5))) == .182)
model <- 'F = 1-5
PRIOR = (5, g, norm, -1.5, 3)'
mod1.3PL.norm <- mirt(data, model, itemtype = c('2PL', '2PL', '2PL', '2PL', '3PL'))
coef(mod1.3PL.norm)
#limited information fit statistics
M2(mod1.3PL.norm)
# unidimensional ideal point model
idealpt <- mirt(data, 1, itemtype = 'ideal')
plot(idealpt, type = 'trace', facet_items = TRUE)
plot(idealpt, type = 'trace', facet_items = FALSE)
# two factors (exploratory)
mod2 <- mirt(data, 2)
coef(mod2)
summary(mod2, rotate = 'oblimin') #oblimin rotation
residuals(mod2)
plot(mod2)
plot(mod2, rotate = 'oblimin')
anova(mod1, mod2) #compare the two models
scoresfull <- fscores(mod2) #factor scores for each response pattern
head(scoresfull)
scorestable <- fscores(mod2, full.scores = FALSE) #save factor score table
head(scorestable)
# confirmatory (as an example, model is not identified since you need 3 items per factor)
# Two ways to define a confirmatory model: with mirt.model, or with a string
# these model definitions are equivalent
cmodel <- mirt.model('
F1 = 1,4,5
F2 = 2,3')
cmodel2 <- 'F1 = 1,4,5
F2 = 2,3'
cmod <- mirt(data, cmodel)
# cmod <- mirt(data, cmodel2) # same as above
coef(cmod)
anova(cmod, mod2)
# check if identified by computing information matrix
(cmod <- mirt(data, cmodel, SE = TRUE))
###########
# data from the 'ltm' package in numeric format
itemstats(Science)
pmod1 <- mirt(Science, 1)
plot(pmod1)
plot(pmod1, type = 'trace')
plot(pmod1, type = 'itemscore')
summary(pmod1)
# Constrain all slopes to be equal with the constrain = list() input or mirt.model() syntax
# first obtain parameter index
values <- mirt(Science,1, pars = 'values')
values #note that slopes are numbered 1,5,9,13, or index with values$parnum[values$name == 'a1']
(pmod1_equalslopes <- mirt(Science, 1, constrain = list(c(1,5,9,13))))
coef(pmod1_equalslopes)
# using mirt.model syntax, constrain all item slopes to be equal
model <- 'F = 1-4
CONSTRAIN = (1-4, a1)'
(pmod1_equalslopes <- mirt(Science, model))
coef(pmod1_equalslopes)
anova(pmod1_equalslopes, pmod1) #significantly worse fit with almost all criteria
pmod2 <- mirt(Science, 2)
summary(pmod2)
plot(pmod2, rotate = 'oblimin')
itemplot(pmod2, 1, rotate = 'oblimin')
anova(pmod1, pmod2)
# unidimensional fit with a generalized partial credit and nominal model
(gpcmod <- mirt(Science, 1, 'gpcm'))
coef(gpcmod)
# for the nominal model the lowest and highest categories are assumed to be the
# theoretically lowest and highest categories that related to the latent trait(s)
(nomod <- mirt(Science, 1, 'nominal'))
coef(nomod) #ordering of ak values suggest that the items are indeed ordinal
anova(gpcmod, nomod)
itemplot(nomod, 3)
# generalized graded unfolding model
(ggum <- mirt(Science, 1, 'ggum'))
coef(ggum, simplify=TRUE)
plot(ggum)
plot(ggum, type = 'trace')
plot(ggum, type = 'itemscore')
# monotonic polynomial models
(monopoly <- mirt(Science, 1, 'monopoly'))
coef(monopoly, simplify=TRUE)
plot(monopoly)
plot(monopoly, type = 'trace')
plot(monopoly, type = 'itemscore')
# unipolar IRT model
unimod <- mirt(Science, itemtype = 'ULL')
coef(unimod, simplify=TRUE)
plot(unimod)
plot(unimod, type = 'trace')
itemplot(unimod, 1)
# following use the correct log-normal density for latent trait
itemfit(unimod)
M2(unimod, type = 'C2')
fs <- fscores(unimod)
hist(fs, 20)
fscores(unimod, method = 'EAPsum', full.scores = FALSE)
## example applying survey weights.
# weight the first half of the cases to be more representative of population
survey.weights <- c(rep(2, nrow(Science)/2), rep(1, nrow(Science)/2))
survey.weights <- survey.weights/sum(survey.weights) * nrow(Science)
unweighted <- mirt(Science, 1)
weighted <- mirt(Science, 1, survey.weights=survey.weights)
###########
# empirical dimensionality testing that includes 'guessing'
data(SAT12)
data <- key2binary(SAT12,
key = c(1,4,5,2,3,1,2,1,3,1,2,4,2,1,5,3,4,4,1,4,3,3,4,1,3,5,1,3,1,5,4,5))
itemstats(data)
mod1 <- mirt(data, 1)
extract.mirt(mod1, 'time') #time elapsed for each estimation component
# optionally use Newton-Raphson for (generally) faster convergence in the M-steps
mod1 <- mirt(data, 1, optimizer = 'NR')
extract.mirt(mod1, 'time')
mod2 <- mirt(data, 2, optimizer = 'NR')
# difficulty converging with reduced quadpts, reduce TOL
mod3 <- mirt(data, 3, TOL = .001, optimizer = 'NR')
anova(mod1,mod2)
anova(mod2, mod3) #negative AIC, 2 factors probably best
# same as above, but using the QMCEM method for generally better accuracy in mod3
mod3 <- mirt(data, 3, method = 'QMCEM', TOL = .001, optimizer = 'NR')
anova(mod2, mod3)
# with fixed guessing parameters
mod1g <- mirt(data, 1, guess = .1)
coef(mod1g)
###########
# graded rating scale example
# make some data
set.seed(1234)
a <- matrix(rep(1, 10))
d <- matrix(c(1,0.5,-.5,-1), 10, 4, byrow = TRUE)
c <- seq(-1, 1, length.out=10)
data <- simdata(a, d + c, 2000, itemtype = rep('graded',10))
itemstats(data)
mod1 <- mirt(data, 1)
mod2 <- mirt(data, 1, itemtype = 'grsm')
coef(mod2)
anova(mod2, mod1) #not sig, mod2 should be preferred
itemplot(mod2, 1)
itemplot(mod2, 5)
itemplot(mod2, 10)
###########
# 2PL nominal response model example (Suh and Bolt, 2010)
data(SAT12)
SAT12[SAT12 == 8] <- NA #set 8 as a missing value
head(SAT12)
# correct answer key
key <- c(1,4,5,2,3,1,2,1,3,1,2,4,2,1,5,3,4,4,1,4,3,3,4,1,3,5,1,3,1,5,4,5)
scoredSAT12 <- key2binary(SAT12, key)
mod0 <- mirt(scoredSAT12, 1)
# for first 5 items use 2PLNRM and nominal
scoredSAT12[,1:5] <- as.matrix(SAT12[,1:5])
mod1 <- mirt(scoredSAT12, 1, c(rep('nominal',5),rep('2PL', 27)))
mod2 <- mirt(scoredSAT12, 1, c(rep('2PLNRM',5),rep('2PL', 27)), key=key)
coef(mod0)$Item.1
coef(mod1)$Item.1
coef(mod2)$Item.1
itemplot(mod0, 1)
itemplot(mod1, 1)
itemplot(mod2, 1)
# compare added information from distractors
Theta <- matrix(seq(-4,4,.01))
par(mfrow = c(2,3))
for(i in 1:5){
info <- iteminfo(extract.item(mod0,i), Theta)
info2 <- iteminfo(extract.item(mod2,i), Theta)
plot(Theta, info2, type = 'l', main = paste('Information for item', i), ylab = 'Information')
lines(Theta, info, col = 'red')
}
par(mfrow = c(1,1))
# test information
plot(Theta, testinfo(mod2, Theta), type = 'l', main = 'Test information', ylab = 'Information')
lines(Theta, testinfo(mod0, Theta), col = 'red')
###########
# using the MH-RM algorithm
data(LSAT7)
fulldata <- expand.table(LSAT7)
(mod1 <- mirt(fulldata, 1, method = 'MHRM'))
# Confirmatory models
# simulate data
a <- matrix(c(
1.5,NA,
0.5,NA,
1.0,NA,
1.0,0.5,
NA,1.5,
NA,0.5,
NA,1.0,
NA,1.0),ncol=2,byrow=TRUE)
d <- matrix(c(
-1.0,NA,NA,
-1.5,NA,NA,
1.5,NA,NA,
0.0,NA,NA,
3.0,2.0,-0.5,
2.5,1.0,-1,
2.0,0.0,NA,
1.0,NA,NA),ncol=3,byrow=TRUE)
sigma <- diag(2)
sigma[1,2] <- sigma[2,1] <- .4
items <- c(rep('2PL',4), rep('graded',3), '2PL')
dataset <- simdata(a,d,2000,items,sigma)
# analyses
# CIFA for 2 factor crossed structure
model.1 <- '
F1 = 1-4
F2 = 4-8
COV = F1*F2'
# compute model, and use parallel computation of the log-likelihood
if(interactive()) mirtCluster()
mod1 <- mirt(dataset, model.1, method = 'MHRM')
coef(mod1)
summary(mod1)
residuals(mod1)
#####
# bifactor
model.3 <- '
G = 1-8
F1 = 1-4
F2 = 5-8'
mod3 <- mirt(dataset,model.3, method = 'MHRM')
coef(mod3)
summary(mod3)
residuals(mod3)
anova(mod1,mod3)
#####
# polynomial/combinations
data(SAT12)
data <- key2binary(SAT12,
key = c(1,4,5,2,3,1,2,1,3,1,2,4,2,1,5,3,4,4,1,4,3,3,4,1,3,5,1,3,1,5,4,5))
model.quad <- '
F1 = 1-32
(F1*F1) = 1-32'
model.combo <- '
F1 = 1-16
F2 = 17-32
(F1*F2) = 1-8'
(mod.quad <- mirt(data, model.quad))
summary(mod.quad)
(mod.combo <- mirt(data, model.combo))
anova(mod.combo, mod.quad)
# non-linear item and test plots
plot(mod.quad)
plot(mod.combo, type = 'SE')
itemplot(mod.quad, 1, type = 'score')
itemplot(mod.combo, 2, type = 'score')
itemplot(mod.combo, 2, type = 'infocontour')
## empirical histogram examples (normal, skew and bimodality)
# make some data
set.seed(1234)
a <- matrix(rlnorm(50, .2, .2))
d <- matrix(rnorm(50))
ThetaNormal <- matrix(rnorm(2000))
ThetaBimodal <- scale(matrix(c(rnorm(1000, -2), rnorm(1000,2)))) #bimodal
ThetaSkew <- scale(matrix(rchisq(2000, 3))) #positive skew
datNormal <- simdata(a, d, 2000, itemtype = '2PL', Theta=ThetaNormal)
datBimodal <- simdata(a, d, 2000, itemtype = '2PL', Theta=ThetaBimodal)
datSkew <- simdata(a, d, 2000, itemtype = '2PL', Theta=ThetaSkew)
normal <- mirt(datNormal, 1, dentype = "empiricalhist")
plot(normal, type = 'empiricalhist')
histogram(ThetaNormal, breaks=30)
bimodal <- mirt(datBimodal, 1, dentype = "empiricalhist")
plot(bimodal, type = 'empiricalhist')
histogram(ThetaBimodal, breaks=30)
skew <- mirt(datSkew, 1, dentype = "empiricalhist")
plot(skew, type = 'empiricalhist')
histogram(ThetaSkew, breaks=30)
#####
# non-linear parameter constraints with Rsolnp package (nloptr supported as well):
# Find Rasch model subject to the constraint that the intercepts sum to 0
dat <- expand.table(LSAT6)
itemstats(dat)
# free latent mean and variance terms
model <- 'Theta = 1-5
MEAN = Theta
COV = Theta*Theta'
# view how vector of parameters is organized internally
sv <- mirt(dat, model, itemtype = 'Rasch', pars = 'values')
sv[sv$est, ]
# constraint: create function for solnp to compute constraint, and declare value in eqB
eqfun <- function(p, optim_args) sum(p[1:5]) #could use browser() here, if it helps
LB <- c(rep(-15, 6), 1e-4) # more reasonable lower bound for variance term
mod <- mirt(dat, model, sv=sv, itemtype = 'Rasch', optimizer = 'solnp',
solnp_args=list(eqfun=eqfun, eqB=0, LB=LB))
print(mod)
coef(mod)
(ds <- sapply(coef(mod)[1:5], function(x) x[,'d']))
sum(ds)
# same likelihood location as: mirt(dat, 1, itemtype = 'Rasch')
#######
# latent regression Rasch model
# simulate data
set.seed(1234)
N <- 1000
# covariates
X1 <- rnorm(N); X2 <- rnorm(N)
covdata <- data.frame(X1, X2)
Theta <- matrix(0.5 * X1 + -1 * X2 + rnorm(N, sd = 0.5))
# items and response data
a <- matrix(1, 20); d <- matrix(rnorm(20))
dat <- simdata(a, d, 1000, itemtype = '2PL', Theta=Theta)
# unconditional Rasch model
mod0 <- mirt(dat, 1, 'Rasch')
# conditional model using X1 and X2 as predictors of Theta
mod1 <- mirt(dat, 1, 'Rasch', covdata=covdata, formula = ~ X1 + X2)
coef(mod1, simplify=TRUE)
anova(mod0, mod1)
# bootstrapped confidence intervals
boot.mirt(mod1, R=5)
# draw plausible values for secondary analyses
pv <- fscores(mod1, plausible.draws = 10)
pvmods <- lapply(pv, function(x, covdata) lm(x ~ covdata$X1 + covdata$X2),
covdata=covdata)
# population characteristics recovered well, and can be averaged over
so <- lapply(pvmods, summary)
so
# compute Rubin's multiple imputation average
par <- lapply(so, function(x) x$coefficients[, 'Estimate'])
SEpar <- lapply(so, function(x) x$coefficients[, 'Std. Error'])
averageMI(par, SEpar)
############
# Example using Gauss-Hermite quadrature with custom input functions
library(fastGHQuad)
data(SAT12)
data <- key2binary(SAT12,
key = c(1,4,5,2,3,1,2,1,3,1,2,4,2,1,5,3,4,4,1,4,3,3,4,1,3,5,1,3,1,5,4,5))
GH <- gaussHermiteData(50)
Theta <- matrix(GH$x)
# This prior works for uni- and multi-dimensional models
prior <- function(Theta, Etable){
P <- grid <- GH$w / sqrt(pi)
if(ncol(Theta) > 1)
for(i in 2:ncol(Theta))
P <- expand.grid(P, grid)
if(!is.vector(P)) P <- apply(P, 1, prod)
P
}
GHmod1 <- mirt(data, 1, optimizer = 'NR',
technical = list(customTheta = Theta, customPriorFun = prior))
coef(GHmod1, simplify=TRUE)
Theta2 <- as.matrix(expand.grid(Theta, Theta))
GHmod2 <- mirt(data, 2, optimizer = 'NR', TOL = .0002,
technical = list(customTheta = Theta2, customPriorFun = prior))
summary(GHmod2, suppress=.2)
############
# Davidian curve example
dat <- key2binary(SAT12,
key = c(1,4,5,2,3,1,2,1,3,1,2,4,2,1,5,3,4,4,1,4,3,3,4,1,3,5,1,3,1,5,4,5))
dav <- mirt(dat, 1, dentype = 'Davidian-4') # use four smoothing parameters
plot(dav, type = 'Davidian') # shape of latent trait distribution
coef(dav, simplify=TRUE)
fs <- fscores(dav) # assume normal prior
fs2 <- fscores(dav, use_dentype_estimate=TRUE) # use Davidian estimated prior shape
head(cbind(fs, fs2))
itemfit(dav) # assume normal prior
itemfit(dav, use_dentype_estimate=TRUE) # use Davidian estimated prior shape
###########
# 5PL and restricted 5PL example
dat <- expand.table(LSAT7)
mod2PL <- mirt(dat)
mod2PL
# Following does not converge without including strong priors
# mod5PL <- mirt(dat, itemtype = '5PL')
# mod5PL
# restricted version of 5PL (asymmetric 2PL)
model <- 'Theta = 1-5
FIXED = (1-5, g), (1-5, u)'
mod2PL_asym <- mirt(dat, model=model, itemtype = '5PL')
mod2PL_asym
coef(mod2PL_asym, simplify=TRUE)
coef(mod2PL_asym, simplify=TRUE, IRTpars=TRUE)
# no big difference statistically or visually
anova(mod2PL, mod2PL_asym)
plot(mod2PL, type = 'trace')
plot(mod2PL_asym, type = 'trace')
## End(Not run)