LCplfm: Latent class probabilistic feature analysis of three-way three-mode binary data

View source: R/plfm.R

LCplfm                                                        R Documentation

Latent class probabilistic feature analysis of three-way three-mode binary data

Description

Computation of parameter estimates, standard errors, criteria for model selection, and measures of descriptive fit for disjunctive, conjunctive and additive latent class probabilistic feature models.

Usage

  
LCplfm(data,F=2,T=2,M=5,maprule="disj",emcrit1=1e-3,emcrit2=1e-8,
       model=1,start.objectparameters=NULL,start.attributeparameters=NULL,
       start.sizeparameters=NULL,delta=0.0001,printrun=FALSE,
       update.objectparameters=NULL,update.attributeparameters=NULL,
       Nbootstrap=2000)

Arguments

data

An I X J X K data array of binary observations. Observation (i,j,k) (i=1,...,I; j=1,...,J; k=1,...,K) indicates whether object j is associated with attribute k according to rater i.

F

The number of latent features included in the model.

T

The number of latent classes included in the model.

M

The number of times a particular model is estimated using random starting points.

maprule

Disjunctive (maprule="disj"), conjunctive (maprule="conj") or additive (maprule="add") mapping rule of the probabilistic latent feature model.

emcrit1

Convergence criterion to be used for the estimation of candidate models.

emcrit2

Convergence criterion to be used for the estimation of the best model.

model

The type of dependency and heterogeneity assumption included in the model. model=1, model=2, model=3 represent models with a constant object-feature classification per person and with, respectively, class-specific object parameters, class-specific attribute parameters, and class-specific object- and attribute parameters. model=4, model=5, model=6 represent models with a constant attribute-feature classification per person and with, respectively, class-specific object parameters, class-specific attribute parameters, and class-specific object- and attribute parameters.

start.objectparameters

An array of object parameters to be used as starting values for each run. The size of the array equals J x F x T x M when model = 1, 3, 4, 6 and J x F x M when model = 2, 5. If start.objectparameters=NULL, randomly generated object parameters are used as starting values.

start.attributeparameters

An array of attribute parameters to be used as starting values for each run. The size of the array equals K x F x T x M when model = 2, 3, 5, 6 and K x F x M when model = 1, 4. If start.attributeparameters=NULL, randomly generated attribute parameters are used as starting values.

start.sizeparameters

A T x M matrix of latent class size parameters to be used as starting values for each run. If start.sizeparameters=NULL, randomly generated class size parameters are used as starting values.

delta

The precision used to compute standard errors of the parameters with the method of finite differences.

printrun

printrun=TRUE prints the analysis type (disjunctive, conjunctive, additive), the number of features (F), the number of latent classes (T) and the number of the run to the output screen, whereas printrun=FALSE suppresses the printing.

update.objectparameters

A binary-valued array that indicates for each object parameter whether it has to be estimated from the data or constrained to its starting value. A value of 1 means that the corresponding object parameter is estimated and a value of 0 means that the corresponding object parameter is constrained to the starting value provided by the user. The size of the array equals J x F x T when model = 1, 3, 4, 6 and J x F when model = 2, 5. If update.objectparameters = NULL, all object parameters are estimated from the data.

update.attributeparameters

A binary-valued array that indicates for each attribute parameter whether it has to be estimated from the data or constrained to its starting value. A value of 1 means that the corresponding attribute parameter is estimated and a value of 0 means that the corresponding attribute parameter is constrained to the starting value provided by the user. The size of the array equals K x F x T when model = 2, 3, 5, 6 and K x F when model = 1, 4. If update.attributeparameters = NULL, all attribute parameters are estimated from the data.

Nbootstrap

Number of bootstrap iterations to be used for simulating the reference distribution of odds-ratio dependency measures.

Details

Estimation

The estimation algorithm includes two steps. In a first, exploratory step an EM algorithm is used to conduct M runs from random starting points. Each exploratory run is terminated when the convergence criterion (i.e., the sum of absolute differences between parameter values in subsequent iterations) is smaller than emcrit1. In a second step, the best solution among the M runs (i.e., the run with the highest posterior density) is used as the starting point of the EM algorithm for a final analysis. The final analysis is terminated when the convergence criterion is smaller than emcrit2.
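
The following minimal sketch illustrates this two-step strategy. Note that run_em() is a hypothetical stand-in for a single EM run (it is not part of the plfm package); it is assumed to iterate until the sum of absolute parameter changes drops below tol and to return a list with the log posterior (logpost) and the parameter values (par).

# conceptual sketch of the two-step estimation; run_em() is hypothetical
two.step.fit <- function(data, M = 5, emcrit1 = 1e-3, emcrit2 = 1e-8) {
  # step 1: M exploratory runs from random starting points, loose tolerance
  runs <- lapply(seq_len(M), function(m) run_em(data, start = NULL, tol = emcrit1))
  logpost <- sapply(runs, function(r) r$logpost)
  best <- which.max(logpost)
  # step 2: refit from the best exploratory solution with the strict tolerance
  run_em(data, start = runs[[best]]$par, tol = emcrit2)
}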

Model selection criteria, goodness-of-fit and statistical dependency measures

To choose among models with different numbers of features, or with different mapping rules, one may use information criteria such as the Akaike Information Criterion (AIC, Akaike, 1973, 1974), or the Schwarz Bayesian Information Criterion (BIC, Schwarz, 1978). AIC and BIC are computed as -2*loglikelihood+k*Npar. For AIC k equals 2 and for BIC k equals log(N), with N the observed number of replications (I) for which object-attribute associations are collected. Npar represents the number of model parameters. Models with the lowest value for AIC or BIC should be selected.
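
For illustration (a sketch with placeholder numbers, not the output of a specific fit), the two criteria can be computed as:

# illustrative AIC/BIC computation with placeholder values
loglik <- -1250.7   # log-likelihood of a fitted model (placeholder)
Npar   <- 40        # number of free model parameters (placeholder)
I      <- 300       # number of raters/replications (placeholder)
AIC <- -2 * loglik + 2 * Npar
BIC <- -2 * loglik + log(I) * Npar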

The descriptive goodness-of-fit of the model is assessed with the correlation between observed and expected frequencies in the J X K table, and the proportion of the variance in the observed frequencies accounted for by the model (VAF) (i.e. the squared correlation between observed and expected frequencies).
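
As a small sketch, with obs.freq and exp.freq placeholder J x K matrices of observed and expected frequencies, these measures amount to:

# correlation and VAF between observed and expected frequencies (placeholders)
obs.freq <- matrix(c(10, 4, 7, 12, 3, 8), nrow = 2)
exp.freq <- matrix(c(9.2, 4.6, 6.8, 11.5, 3.4, 7.7), nrow = 2)
r   <- cor(c(obs.freq), c(exp.freq))
VAF <- r^2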

To assess the extent to which the model can capture observed statistical dependencies between object-attribute pairs with a common object or attribute, a parametric bootstrap procedure is used to evaluate whether the observed dependencies lie within the simulated 95 or 99 percent confidence interval. Let D(i,j,k) be equal to 1 if rater i indicates that object j is associated with attribute k. The statistical dependency between pairs (j,k) and (j*,k*) is measured with the odds ratio (OR) statistic:

OR(j,k,j*,k*)=log((N11*N00)/(N10*N01))

with

N11=∑_i D(i,j,k) D(i,j*,k*) +0.5

N00=∑_i (1-D(i,j,k)) (1-D(i,j*,k*)) +0.5

N10=∑_i D(i,j,k) (1-D(i,j*,k*)) +0.5

N01=∑_i (1-D(i,j,k)) D(i,j*,k*) +0.5
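
As a minimal sketch (not part of the package), the statistic can be computed directly from the binary data array:

# log odds-ratio dependency between pairs (j,k) and (j2,k2),
# with the 0.5 continuity correction used above; D is the I x J x K data array
log.odds.ratio <- function(D, j, k, j2, k2) {
  x <- D[, j, k]
  y <- D[, j2, k2]
  N11 <- sum(x * y) + 0.5
  N00 <- sum((1 - x) * (1 - y)) + 0.5
  N10 <- sum(x * (1 - y)) + 0.5
  N01 <- sum((1 - x) * y) + 0.5
  log((N11 * N00) / (N10 * N01))
}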

The model selection criteria AIC and BIC, the descriptive goodness-of-fit measures (the correlation between observed and expected frequencies, and VAF), and a summary of the OR dependency measures (i.e., the proportion of observed OR dependencies of a certain type that fall within the simulated 95 or 99 percent confidence interval) are stored in the fitmeasures object of the output list.

Value

call

Parameters used to call the function.

logpost.runs

A list with the logarithm of the posterior density for each of the M computed models.

best

An index indicating which of the M computed models has the highest posterior density.

objpar

Estimated object parameters for the best model.

attpar

Estimated attribute parameters for the best model.

sizepar

A vector of T class size parameters for the best model.

SE.objpar

Estimated standard errors for the object parameters of the best model.

SE.attpar

Estimated standard errors for the attribute parameters of the best model.

SE.sizepar

Estimated standard errors for the class size parameters of the best model.

gradient.objpar

Gradient of the object parameters for the best model.

gradient.attpar

Gradient of the attribute parameters for the best model.

gradient.sizepar

Gradient of the class size parameters for the best model.

fitmeasures

A list of model selection criteria, goodness-of-fit measures and OR dependency measures for the model with the highest posterior density.

postprob

An I X T matrix of posterior probabilities for the best model.

margprob.JK

A J X K matrix of marginal object-attribute association probabilities.

condprob.JKT

A J X K X T array of conditional object-attribute association probabilities (i.e., probability of object-attribute association given latent class membership).

report.OR.attpair

A matrix that contains, for all attribute pairs per object, the observed OR dependency (OR.obs), the expected OR dependency (OR.mean), and the lower and upper bounds of the corresponding simulated 95 and 99 percent confidence intervals (OR.p025, OR.p975, OR.p005, OR.p995).

report.OR.objpair

A matrix that contains, for all object pairs per attribute, the observed OR dependency (OR.obs), the expected OR dependency (OR.mean), and the lower and upper bounds of the corresponding simulated 95 and 99 percent confidence intervals (OR.p025, OR.p975, OR.p005, OR.p995).
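
For example, assuming fit is an LCplfm result and the columns are labeled as listed above, the proportion of observed attribute-pair dependencies covered by the simulated 95 percent interval could be checked as follows:

# proportion of observed ORs inside the simulated 95 percent interval
OR <- fit$report.OR.attpair
mean(OR[, "OR.obs"] >= OR[, "OR.p025"] & OR[, "OR.obs"] <= OR[, "OR.p975"])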

Author(s)

Michel Meulders and Philippe De Bruecker

References

Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In B. N. Petrov and F. Csaki (Eds.), Second international symposium on information theory (p. 271-283). Budapest: Academiai Kiado.

Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19, 716-723.

Candel, M. J. J. M., and Maris, E. (1997). Perceptual analysis of two-way two-mode frequency data: probability matrix decomposition and two alternatives. International Journal of Research in Marketing, 14, 321-339.

Louis, T. A. (1982). Finding the observed information matrix when using the EM algorithm. Journal of the Royal Statistical Society, Series B, 44, 98-130.

Maris, E. (1999). Estimating multiple classification latent class models. Psychometrika, 64, 187-212.

Maris, E., De Boeck, P., and Van Mechelen, I. (1996). Probability matrix decomposition models. Psychometrika, 61, 7-29.

Meulders, M., De Boeck, P., and Van Mechelen, I. (2001). Probability matrix decomposition models and main-effects generalized linear models for the analysis of replicated binary associations. Computational Statistics and Data Analysis, 38, 217-233.

Meulders, M., De Boeck, P., Van Mechelen, I., Gelman, A., and Maris, E. (2001). Bayesian inference with probability matrix decomposition models. Journal of Educational and Behavioral Statistics, 26, 153-179.

Meulders, M., De Boeck, P., Van Mechelen, I., and Gelman, A. (2005). Probabilistic feature analysis of facial perception of emotions. Applied Statistics, 54, 781-793.

Meulders, M. and De Bruecker, P. (2018). Latent class probabilistic latent feature analysis of three-way three-mode binary data. Journal of Statistical Software, 87(1), 1-45.

Meulders, M. (2013). An R Package for Probabilistic Latent Feature Analysis of Two-Way Two-Mode Frequencies. Journal of Statistical Software, 54(14), 1-29. URL http://www.jstatsoft.org/v54/i14/.

Meulders, M., Tuerlinckx, F., and Vanpaemel, W. (2013). Constrained multilevel latent class models for the analysis of three-way three-mode binary data. Journal of Classification, 30 (3), 306-337.

Tanner, M. A. (1996). Tools for statistical inference: Methods for the exploration of posterior distributions and likelihood functions (Third ed.). New York: Springer-Verlag.

Tatsuoka, K. (1984). Analysis of errors in fraction addition and subtraction problems. Final Report for NIE-G-81-0002, University of Illinois, Urbana-Champaign.

Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6, 461-464.

See Also

print.LCplfm

Examples


## Not run: 

# example 1: analysis on determinants of anger-related behavior

# load anger data
data(anger)

# estimate a disjunctive LCplfm model with F=2 and T=2 
# assume constant situation-feature classification
# and class-specific situation parameters (i.e. model 1)
# use 10 exploratory runs with random starting points 
anger.LCplfm.disj<-LCplfm(data=anger$data,F=2, T=2, M=10)

# print the output of the model 
print(anger.LCplfm.disj)


# estimate an additive LCplfm model with F=2 and T=2 
# assume constant situation-feature classification
# and class-specific situation parameters (i.e. model 1)
# use 10 exploratory runs with random starting points 
anger.LCplfm.add<-LCplfm(data=anger$data,F=2, T=2, M=10, maprule="add")

# print the output of the model 
print(anger.LCplfm.add)


# estimate a disjunctive LCplfm model with F=4 and T=2
# assume constant situation-feature classifications
# and class-specific situation parameters (i.e. model 1)
# use 20 exploratory runs with random starting points (M=20)
# constrain parameters of subsequent behavior pairs to "load"
# on only one feature

# specify which attribute parameters have to be estimated from the data
update.attribute<-matrix(rep(0,8*4),ncol=4)
update.attribute[1:2,1]<-c(1,1)
update.attribute[3:4,2]<-c(1,1)
update.attribute[5:6,3]<-c(1,1)
update.attribute[7:8,4]<-c(1,1)

# specify starting values for attribute parameters in each of M=20 runs
# for parameters with update.attribute==0 starting values are constrained to 1e-6
# for parameters with update.attribute==1 starting values are sampled from a unif(0,1)
start.attribute<-array(runif(8*4*20),c(8,4,20))
start.attribute[update.attribute%o%rep(1,20)==0]<-1e-6 

# estimate the constrained model
anger.LCplfm.constr<-LCplfm(data=anger$data,F=4, T=2, M=20, 
                     update.attributeparameters=update.attribute,
                     start.attributeparameters=start.attribute)

# estimate a disjunctive LCplfm model with F=4 and T=2
# assume constant situation-feature classifications
# and class-specific situation and behavior parameters (i.e. model 3)
# use 20 exploratory runs with random starting points (M=20)
# constrain parameters of subsequent behavior pairs to "load"
# on only one feature

# specify which attribute parameters have to be estimated from the data
 
update.attribute<-matrix(rep(0,8*4),ncol=4)
update.attribute[1:2,1]<-c(1,1)
update.attribute[3:4,2]<-c(1,1)
update.attribute[5:6,3]<-c(1,1)
update.attribute[7:8,4]<-c(1,1)
update.attribute<-update.attribute%o%rep(1,2)

# specify starting values for attribute parameters in each of M=20 runs
# for parameters with update.attribute==0 starting values are constrained to 1e-6
# for parameters with update.attribute==1 starting values are sampled from a unif(0,1)
start.attribute<-array(runif(8*4*2*20),c(8,4,2,20))
start.attribute[update.attribute%o%rep(1,20)==0]<-1e-6 

# estimate the constrained model
anger.LCplfm.m3.constr<-LCplfm(data=anger$data,F=4, T=2, M=20, model=3, 
                     update.attributeparameters=update.attribute,
                     start.attributeparameters=start.attribute)


## End(Not run)

## Not run: 
# example 2: analysis of car perception data

# load car data
data(car)

# estimate a disjunctive LCplfm with F=3 and T=2
# assume constant attribute-feature classification
# and class-specific car parameters (i.e. model 4)
# use 10 exploratory runs with random starting points 
car.LCplfm.disj<-LCplfm(data=car$data3w,F=3, T=2, M=10,model=4)

# print the output of the model 
print(car.LCplfm.disj)

# estimate an additive LCplfm with F=3 and T=2
# assume constant attribute-feature classification
# and class-specific car parameters (i.e. model 4)
# use 10 exploratory runs with random starting points 
car.LCplfm.add<-LCplfm(data=car$data3w,F=3, T=2, M=10, model=4, maprule="add")

# print the output of the model 
print(car.LCplfm.add)


## End(Not run)

## Not run: 

# example 3: estimation of multiple classification latent class 
# model (Maris, 1999) for cognitive diagnosis


# load subtraction data
library(CDM)
data(fraction.subtraction.data)
data(fraction.subtraction.qmatrix)


# create three-way data as input for LCplfm
I<-536
J<-1
K<-20
data3w<-array(c(as.matrix(fraction.subtraction.data)),c(I,J,K))

# add item labels

itemlabel<-c("5/3 - 3/4", 
"3/4 - 3/8", 
"5/6 - 1/9",
"3 1/2 - 2 3/2", 
"4 3/5 - 3 4/10", 
"6/7 - 4/7", 
"3 - 2 1/5", 
"2/3 - 2/3", 
"3 7/8 - 2", 
"4 4/12 - 2 7/12", 
"4 1/3 - 2 4/3", 
"1 1/8 - 1/8", 
"3 3/8 - 2 5/6", 
"3 4/5 - 3 2/5", 
"2 - 1/3", 
"4 5/7 - 1 4/7", 
"7 3/5 - 4/5", 
"4 1/10 - 2 8/10", 
"4 - 1 4/3", 
"4 1/3 - 1 5/3") 

dimnames(data3w)[[3]]<-itemlabel

# estimate multiple classification latent class model (Maris, 1999)

set.seed(537982)
subtract.m1.lst<-stepLCplfm(data3w,minF=3,maxF=5,minT=1,maxT=3,model=1,M=20,maprule="conj")


# print BIC values
sumar<-summary(subtract.m1.lst)
as.matrix(sort(sumar[,5]))

# print output best model
subtract.m1.lst[[5,2]]

# correlation between extracted skills and qmatrix
round(cor(fraction.subtraction.qmatrix,subtract.m1.lst[[5,2]]$attpar),2)

## End(Not run)
