test.irt: A simple demonstration (and test) of various IRT scoring...


Description

Item Response Theory provides a number of alternative ways of estimating latent scores. Here we compare six different ways to estimate the latent variable associated with a pattern of responses. Originally developed as a test for scoreIrt, but perhaps useful for demonstration purposes. Items are simulated using sim.irt and then scored as factor scores (factor.scores) based on the statistics found by irt.fa, as well as by simple weighted models for the 1PL, 2PL, and 2PN cases. Results show almost perfect agreement with estimates from MIRT and ltm for the dichotomous case, and with MIRT for the polytomous case. (Results from ltm are unstable for the polytomous case, sometimes agreeing with scoreIrt and MIRT, sometimes being much worse.)
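
For orientation, a minimal sketch of the kind of comparison test.irt automates, built from the psych functions named above (the element and column names used here, such as sim$items, sim$theta, and theta1, are assumed from their documented output):

library(psych)
set.seed(42)                                          # any fixed seed, for reproducibility
sim <- sim.irt(nvar = 9, n = 1000, mod = "logistic")  # simulated dichotomous items plus the true theta
fit <- irt.fa(sim$items, plot = FALSE)                # item statistics from a factor analysis of tetrachorics
scores <- scoreIrt(fit, sim$items)                    # IRT-based latent trait estimates
cor(scores$theta1, sim$theta)                         # agreement of the estimates with the generating theta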

Usage

test.irt(nvar = 9, n.obs = 1000, mod = "logistic", type = "tetra", low = -3, high = 3,
 seed = NULL)

Arguments

nvar

Number of variables to create (simulate) and score

n.obs

Number of simulated subjects

mod

"logistic" or "normal" theory data are generated

type

"tetra" for dichotomous, "poly" for polytomous

low

Item difficulties range from low to high (lower bound)

high

Item difficulties range from low to high (upper bound)

seed

Set the random number seed to some non-null value for reproducible results. Otherwise, the existing sequence of random numbers is used
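
For illustration, a call that sets each of these arguments explicitly (the particular values are arbitrary):

test.irt(nvar = 12, n.obs = 500, mod = "normal", type = "poly", low = -2, high = 2, seed = 17)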

Details

n.obs observations (0/1 when type = "tetra") on nvar variables are simulated using either a logistic or a normal theory model. A number of different scoring algorithms are then applied and the results shown graphically. Requires the ltm package to be installed to compare ltm scores.
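
Because the ltm comparison depends on an optional package, one way to guard the call (a sketch, not part of test.irt itself):

if (requireNamespace("ltm", quietly = TRUE)) {
  test.irt(nvar = 9, n.obs = 1000, mod = "logistic", type = "tetra", seed = 1)
} else {
  message("Install the ltm package to include the ltm-based scores in the comparison")
}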

Value

A data frame of the scores from each method, together with the true (generating) theta. A graphic display of the correlations among them is also produced.
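
Assuming only that the returned object is a numeric data frame of scores plus the true theta (no column names are assumed here), the agreement among the methods can be summarized as:

out <- test.irt(9, 1000, seed = 1)   # seed fixed only for reproducibility
round(cor(out), 2)                   # correlations among the scoring methods and the true theta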

Author(s)

William Revelle

See Also

scoreIrt, irt.fa

Examples

#not run
#test.irt(9,1000)
