sim: Functions to simulate psychological/psychometric data.


Description

A number of functions in the psych package will generate simulated data with particular structures. The three documented here are for basic factor analysis simulations. These functions include

sim for a factor simplex
sim.simplex for a data simplex
sim.minor to simulate major and minor factors.

All three of these, and most of the remaining simulation functions, return the simulated model, the true scores that generate the data, and the observed data.

Other simulations aimed at special item structures include the following, which are listed here to help find them.

Factor based models:

sim.structural a general simulation of structural models.

sim.circ for a circumplex or simple structure for two factors
sim.congeneric for a one factor congeneric model.
sim.dichot to simulate dichotomous items
sim.hierarchical to create a hierarchical factor model
sim.omega to test various examples of omega,
sim.item a more general item simulation (including the ability to simulate dichotomous or polytomous items).

simCor to generate sample correlation matrices from a target matrix.

sim.parallel to compare the efficiency of various ways of determining the number of factors.

Item Response Theory based models

sim.rasch to create simulated Rasch data
sim.irt to create general 1 to 4 parameter IRT data by calling one of the following:
sim.npl 1 to 4 parameter logistic IRT
sim.npn 1 to 4 parameter normal IRT
sim.poly to create polytomous items by calling
sim.poly.npn 1-4 parameter polytomous normal theory items
sim.poly.npl 1-4 parameter polytomous logistic items
sim.poly.ideal which creates data following an ideal point or unfolding model by calling
sim.poly.ideal.npn 1-4 parameter polytomous normal theory ideal point model or
sim.poly.ideal.npl 1-4 parameter polytomous logistic ideal point model.

sim.anova for ANOVA and lm simulations

sim.VSS.

Most of these functions are documented separately and are listed here simply for convenience. See each function for more detailed help.

Usage

sim(fx=NULL,Phi=NULL,fy=NULL,alpha=.8,lambda = 0,n=0,mu=NULL,raw=TRUE,threshold=NULL,
       items = FALSE, low=-2,high=2,cat=5)
sim.simplex(nvar =12, alpha=.8,lambda=0,beta=1,mu=NULL, n=0,threshold=NULL)
sim.minor(nvar=12,nfact=3,n=0,g=NULL,fbig=NULL,fsmall = c(-.2,.2),n.small=NULL, 
    Phi=NULL, bipolar=TRUE, threshold=NULL, 
     items = FALSE, low=-2,high=2,cat=5) 

Arguments

fx

The measurement model for x. If NULL, a 4 factor model is generated

Phi

The structure matrix of the latent variables

fy

The measurement model for y

mu

The means structure for the fx factors; defaults to 0, 1, ..., 3. Set to 0 if using thresholds.

n

Number of cases to simulate. If n=0 or NULL, the population matrix is returned. If n > 0, the latent and observed data are returned.

raw

If raw=TRUE, raw data are returned as well.

threshold

If raw=TRUE (n > 0) and threshold is not NULL, the data are converted to binary values, cut at the threshold.

items

TRUE if simulating items, FALSE if simulating scales

low

Restrict the item difficulties to range from low to high

high

Restrict the item difficulties to range from low to high

cat

Number of categories when creating binary (2) or polytomous items

nvar

Number of variables for a simplex structure

nfact

Number of large factors to simulate in sim.minor; number of group factors in sim.general and sim.omega

g

General factor correlations in sim.general and general factor loadings in sim.omega and sim.minor

alpha

The base correlation for an autoregressive simplex

lambda

The trait component of a State Trait Autoregressive Simplex

beta

Test reliability of a STARS simplex

fbig

Factor loadings for the main factors. Default is a simple structure with loadings sampled from (.8,.6) for nvar/nfact variables and 0 for the remaining. If fbig is specified, then each factor has loadings sampled from it.

bipolar

if TRUE, then positive and negative loadings are generated from fbig

fsmall

nvar/2 small factors are generated with loadings sampled from fsmall, e.g., (-.2, 0, .2)

n.small

If NULL, then generate nvar/2 small factors, else generate n.small small factors

Details

Simulation of data structures is a very useful tool in psychometric research and teaching. By knowing “truth” it is possible to see how well various algorithms can capture it. For a much longer discussion of the use of simulation in psychometrics, see the accompanying vignettes.

The simulations documented here are the core set of functions. Others are documented in other help files.

sim simulates one (fx) or two (fx and fy) factor structures where both fx and fy represent factor loadings of variables. The use of fy is particularly appropriate for simulating SEM models. This is documented more fully in the help for sim.structural.

The code for sim has been enhanced (10/21/23) to simulate items as well as continuous variables. This matches the code in sim.structural.

Perhaps the easiest simulation to understand is just sim: a factor model (fx), and perhaps fy, with intercorrelations between the two factor sets given by Phi. This will produce a correlation matrix R = fx' Phi fy. Factors can differ in their mean values by specifying mu.

With the addition of the threshold parameter, sim will generate dichotomous items: values below the threshold become 0, values above it become 1. If threshold is a vector of length less than nvar, then cut values are sampled from threshold. If the length of threshold equals nvar, then each item is dichotomized at its own threshold.
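As a minimal sketch of this (assuming the default four factor fx, an arbitrary seed, and a single cut value of 0; the model and r elements are documented in the Value section below):

library(psych)
set.seed(42)                           #for a reproducible simulated sample
#500 cases from the default 4 factor model, dichotomized at a threshold of 0
dichot <- sim(n=500, mu=rep(0,4), threshold=0)   #mu set to 0 when using thresholds
lowerMat(dichot$model)                 #the population (model based) correlation matrix
lowerMat(dichot$r)                     #the sample correlations of the dichotomized items
str(dichot, max.level=1)               #inspect the other elements that are returned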

With the addition of the items=TRUE option, continuous observed variables are transformed into discrete categories.

The default for sim.structure is to generate a 4 factor, 12 variable data set with a simplex structure between the factors. This, and the simplex of items (sim.simplex), can also be converted into a STARS model with an autoregressive component (alpha) and a stable trait component (lambda).

Two data structures that are particular challenges to exploratory factor analysis are the simplex structure and the presence of minor factors. Simplex structures, sim.simplex, will typically occur in developmental or learning contexts and have a correlation structure of r between adjacent variables and r^n for variables n apart. Although just one latent variable (r) needs to be estimated, the structure will have nvar-1 factors.
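For instance, the r^n pattern is easy to see in the population matrix returned by sim.simplex (illustrative values; with n = 0 the population matrix itself is returned):

library(psych)
R.simp <- sim.simplex(nvar=8, alpha=.8, lambda=0)  #a pure autoregressive simplex
round(R.simp[1,],2)      #the first row decays as .8, .8^2, .8^3, ...
lowerMat(R.simp)         #the full r^n banded structure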

An alternative version of the simplex is the State-Trait-Auto Regressive Structure (STARS) which has both a simplex state structure, with autoregressive path alpha, and a trait structure with path lambda. This is simulated in sim.simplex by specifying a non-zero lambda value.

Many simulations of factor structures assume that except for the major factors, all residuals are normally distributed around 0. An alternative, and perhaps more realistic situation, is that there are a few major (big) factors and many minor (small) factors. For a nice discussion of this situation, see MacCallum, Browne and Cai (2007). The challenge is thus to identify the major factors. sim.minor generates such structures. The structures generated can be thought of as having a major factor structure with some small correlated residuals. To make these simulations complete, the possibility of a general factor is considered. For simplicity, sim.minor allows one to specify a set of loadings to be sampled from for g, fmajor and fminor. Alternatively, it is possible to specify the complete factor matrix. By default, the nvar/2 minor factors are given loadings of +/- .2. If the major factors are very big, this can lead to impossible models, in which case a warning to try again is issued.
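A brief sketch of such a call (the particular loading values are illustrative):

library(psych)
set.seed(17)
#3 major factors with loadings sampled from (.8,.6), the default nvar/2 minor
#factors with loadings sampled from (-.2,.2), and a general factor
minor <- sim.minor(nvar=12, nfact=3, g=c(.7,.6,.5), fbig=c(.8,.6), fsmall=c(-.2,.2))
lowerMat(minor$model)    #the population correlation matrix (general + major + minor)
round(minor$fload,2)     #the complete factor loading matrix that generated it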

Another structure worth considering is direct modeling of a general factor with several group factors. This is done using sim.general.

Although coefficient \omega_h is a very useful indicator of the general factor saturation of a unifactorial test (one with perhaps several sub factors), it has problems with the case of multiple, independent factors. In this situation, one of the factors is labelled as “general” and the omega estimate is too large. This situation may be explored using the sim.omega function with the general factor loadings (g) left as NULL. If there is a general factor, then results from sim.omega suggest that omega estimated either from EFA or from SEM does a pretty good job of identifying it, but that the EFA approach using the Schmid-Leiman transformation is somewhat more robust than the SEM approach.
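A hedged sketch of that comparison (only the g, nvar, nfact, and n arguments described above are used, with illustrative values; the printed output is whatever sim.omega returns for its simulated trials):

library(psych)
set.seed(101)
#three independent group factors and no general factor: omega_h should be low,
#but labelling one factor as "general" may inflate the estimate
no.g <- sim.omega(nvar=12, nfact=3, n=500, g=NULL)
#the same structure, but with general factor loadings of .6
with.g <- sim.omega(nvar=12, nfact=3, n=500, g=.6)
no.g
with.g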

The four IRT simulations, sim.rasch, sim.irt, sim.npl and sim.npn, simulate dichotomous items following the Item Response model. sim.irt just calls either sim.npl (for logistic models) or sim.npn (for normal models), depending upon the specification of the model.

The logistic model is

P(i,j) = \gamma + \frac{\zeta-\gamma}{1+ e^{\alpha(\delta-\theta)}}

where \gamma is the lower asymptote or guessing parameter, \zeta is the upper asymptote (normally 1), \alpha is item discrimination and \delta is item difficulty. For the 1 Parameter Logistic (Rasch) model, gamma=0, zeta=1, alpha=1, and item difficulty is the only free parameter to specify.

For the 2PL and 2PN models, a = \alpha and d = \delta are specified.
For the 3PL or 3PN models, items also differ in their guessing parameter c =\gamma.
For the 4PL and 4PN models, the upper asymptote, z= \zeta is also specified.
(Graphics of these may be seen in the demonstrations for the logistic function.)
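To make the notation concrete, here is a direct translation of the four parameter logistic into R (the helper name p4pl is purely illustrative and is not part of psych):

#probability of a keyed response under the 4 parameter logistic model:
#gam = lower asymptote (guessing), zeta = upper asymptote,
#alpha = discrimination, delta = difficulty, theta = latent trait
p4pl <- function(theta, delta, alpha=1, gam=0, zeta=1) {
   gam + (zeta - gam)/(1 + exp(alpha*(delta - theta)))
   }
p4pl(theta=0, delta=0)                        #a Rasch item of average difficulty: .5
p4pl(theta=1, delta=0, alpha=1.5, gam=.25)    #a 3PL item with guessing of .25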

The normal model (sim.npn) calculates the probability using pnorm instead of the logistic function used in sim.npl, but the meaning of the parameters is otherwise the same. With a = \alpha = 1.702 in the logistic model, the two models are practically identical.

In parallel to the dichotomous IRT simulations are the poly versions which simulate polytomous item models. They have the additional parameter of how many categories to simulate. In addition, the sim.poly.ideal functions will simulate an ideal point or unfolding model in which the response probability varies by the distance from each subject's ideal point. Some have claimed that this is a more appropriate model of the responses to personality questionnaires. It will lead to simplex-like structures which may be fit by a two factor model. The middle items form one factor, the extreme items a bipolar factor.

By default, the theta parameter is created in each function as normally distributed with mean mu=0 and sd=1. If you want theta to be equivalent to that of another simulation, or fixed for a particular experimental condition, either take the theta object from the output of a previous simulation, or create it with whatever properties are desired.

The previous functions all assume one latent trait. Alternatively, we can simulate dichotomous or polytomous items with a particular structure using the sim.poly.mat function. This takes as input the population correlation matrix, the population marginals, and the sample size. It returns categorical items with the specified structure.

Other simulation functions in psych that are documented separately include:

sim.structure A function to combine a measurement and structural model into one data matrix. Useful for understanding structural equation models. Combine with structure.diagram to see the proposed structure.

sim.congeneric A function to create congeneric items/tests for demonstrating classical test theory. This is just a special case of sim.structure.

sim.hierarchical A function to create data with a hierarchical (bifactor) structure. Compare this with directly simulating a bifactor model.

sim.item A function to create items that either have a simple structure or a circumplex structure.

sim.circ Create data with a circumplex structure.

sim.dichot Create dichotomous item data with a simple or circumplex structure.

sim.minor Create a factor structure for nvar variables defined by nfact major factors and nvar/2 “minor” factors for n observations.

The standard factor model assumes that K major factors (K << nvar) will account for the correlations among the variables

R = FF' + U^2

where R is of rank P, F is a P x K matrix of factor coefficients, and U is a diagonal matrix of uniquenesses. However, in many cases, particularly when working with items, there are many small factors (sometimes referred to as correlated residuals) that need to be considered as well. This leads to a data structure such that

R = FF' + MM' + U^2

where R is a P x P matrix of correlations, F is a P x K factor loading matrix, M is a P x P/2 matrix of minor factor loadings, and U is a diagonal matrix (P x P) of uniquenesses.

Such a correlation matrix will have a poor \chi^2 value in terms of goodness of fit if just the K factors are extracted, even though for all intents and purposes, it is well fit.

sim.minor will generate such data sets with big factors with loadings of .6 to .8 and small factors with loadings of -.2 to .2. These may both be adjusted.
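A short sketch of that point: factor the population matrix from sim.minor and note that although the major loadings are recovered well, the chi square (here evaluated as if n.obs = 1000, an illustrative value) objects to the omitted minor factors:

library(psych)
set.seed(20)
pop <- sim.minor(nvar=12, nfact=3)            #population matrix with 3 big and 6 minor factors
f3 <- fa(pop$model, nfactors=3, n.obs=1000)   #extract just the 3 major factors
print(f3$loadings, cutoff=.3)                 #the major loadings are recovered well
f3$STATISTIC                                  #but the chi square reflects the minor factors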

sim.parallel Create a number of simulated data sets using sim.minor to show how parallel analysis works. The general observation is that with the presence of minor factors, parallel analysis is probably best done with component eigen values rather than factor eigen values, even when using the factor model.
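A hedged sketch of that observation, applying fa.parallel directly to one simulated sample correlation matrix rather than the full sim.parallel wrapper:

library(psych)
set.seed(7)
sm <- sim.minor(nvar=12, nfact=3, n=500)       #3 major and 6 minor factors, 500 cases
#compare the principal component and the factor eigen values to those of random data
fa.parallel(sm$r, n.obs=500, fa="both")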

sim.anova Simulate a 3 way balanced ANOVA or linear model, with or without repeated measures. Useful for teaching research methods and generating teaching examples.

sim.multilevel To understand some of the basic concepts of multilevel modeling, it is useful to create multilevel structures. The correlation of aggregated data is sometimes called an 'ecological correlation'. Because group level and individual level correlations are independent, inferences from one level to the other are problematic. This simulation allows for demonstrations that correlations within groups do not imply, nor are implied by, correlations between group means.

Value

model

The model based correlation matrix (FF')

reliability

Model based reliability

r

Observed (sample based) correlation matrix

theta

True scores generating the data

N

Sample size

fload

The factor model (F + S) generating the model

Note

The theta values are "true scores" and may be compared to the factor analytic based factor score estimates of the data. This is a useful way to understand factor indeterminacy.

Author(s)

William Revelle

References

MacCallum, Robert C., Browne, Michael W. and Cai, Li (2007) Factor analysis models as approximations. In Cudeck, R. and MacCallum, R. C. (Eds.), Factor analysis at 100: Historical developments and future directions. Lawrence Erlbaum Associates.

Revelle, W. (in preparation) An Introduction to Psychometric Theory with applications in R. Springer. at https://personality-project.org/r/book/

See Also

See above

Examples

simplex <- sim.simplex() #create the default simplex structure
lowerMat(simplex) #the correlation matrix
#create a congeneric matrix
congeneric <- sim.congeneric()
lowerMat(congeneric)
R <- sim.hierarchical()
lowerMat(R)
#now simulate categorical items with the hierarchical factor structure.  
#Let the items be dichotomous with varying item difficulties.
marginals <- matrix(c(seq(.1,.9,.1),seq(.9,.1,-.1)),byrow=TRUE,nrow=2)
X <- sim.poly.mat(R=R,m=marginals,n=1000)
lowerCor(X) #show the raw correlations
#lowerMat(tetrachoric(X)$rho) # show the tetrachoric correlations (not run)
#generate a structure 
fx <- matrix(c(.9,.8,.7,rep(0,6),c(.8,.7,.6)),ncol=2)
fy <- c(.6,.5,.4)
Phi <- matrix(c(1,0,.5,0,1,.4,0,0,0),ncol=3)
R <- sim.structure(fx,Phi,fy) 
cor.plot(R$model) #show it graphically

simp <- sim.simplex()
#show the simplex structure using cor.plot
cor.plot(simp,colors=TRUE,main="A simplex structure")
#Show a STARS model 
simp <- sim.simplex(alpha=.8,lambda=.4)
#show the simplex structure using cor.plot
cor.plot(simp,colors=TRUE,main="State Trait Auto Regressive Simplex" )

dichot.sim <- sim.irt()  #simulate 5 dichotomous items
poly.sim <- sim.poly(theta=dichot.sim$theta)  #simulate 5 polytomous items that correlate 
  #with the dichotomous items
  
  #simulate a bifactor model (compare with sim.hierarchical)

fx <- matrix(c(rep(.5,9),.8,.7,.6,rep(0,9),.8,.7,.6,rep(0,9),.8,.7,.6), ncol=4)
fx   #show the structure
bi <- sim(fx=fx,n=1000)
lowerMat(bi$model)
