modsem_da | R Documentation |
modsem_da() is a function for estimating interaction effects between latent variables
in structural equation models (SEMs) using distributional analytic (DA) approaches.
Methods for estimating interaction effects in SEMs can broadly be split into
two frameworks:
1. Product indicator-based approaches ("dblcent", "rca", "uca", "ca", "pind")
2. Distributionally based approaches ("lms", "qml")
modsem_da() handles the latter, and can estimate models using both QML and LMS.
NOTE: Run default_settings_da() to see the default arguments.
modsem_da(
model.syntax = NULL,
data = NULL,
method = "lms",
verbose = NULL,
optimize = NULL,
nodes = NULL,
missing = NULL,
convergence.abs = NULL,
convergence.rel = NULL,
optimizer = NULL,
center.data = NULL,
standardize.data = NULL,
standardize.out = NULL,
standardize = NULL,
mean.observed = NULL,
cov.syntax = NULL,
double = NULL,
calc.se = NULL,
FIM = NULL,
EFIM.S = NULL,
OFIM.hessian = NULL,
EFIM.parametric = NULL,
robust.se = NULL,
R.max = NULL,
max.iter = NULL,
max.step = NULL,
start = NULL,
epsilon = NULL,
quad.range = NULL,
adaptive.quad = NULL,
adaptive.frequency = NULL,
adaptive.quad.tol = NULL,
n.threads = NULL,
algorithm = NULL,
em.control = NULL,
ordered = NULL,
ordered.iter = 100L,
ordered.warmup = 25L,
cluster = NULL,
cr1s = FALSE,
rcs = FALSE,
rcs.choose = NULL,
rcs.scale.corrected = TRUE,
orthogonal.x = NULL,
orthogonal.y = NULL,
auto.fix.first = NULL,
auto.fix.single = NULL,
auto.split.syntax = NULL,
...
)
model.syntax |
lavaan syntax specifying the model. |
data |
A dataframe with observed variables used in the model. |
method |
method to use: "lms" (latent moderated structural equations) or "qml" (quasi-maximum likelihood). |
verbose |
should estimation progress be shown |
optimize |
should starting parameters be optimized |
nodes |
number of quadrature nodes (points of integration) used in the LMS approach. |
missing |
How should missing values be handled? |
convergence.abs |
Absolute convergence criterion. Lower values give better estimates but slower computation. Not relevant when using the QML approach. For the LMS approach the EM-algorithm stops whenever the relative or absolute convergence criterion is reached. |
convergence.rel |
Relative convergence criterion. Lower values give better estimates but slower computation. For the LMS approach the EM-algorithm stops whenever the relative or absolute convergence criterion is reached. |
optimizer |
optimizer to use. |
center.data |
should data be centered before fitting model |
standardize.data |
should data be scaled before fitting the model? Will be overridden by standardize if standardize = TRUE. |
standardize.out |
should the output be standardized? Note that this will alter the relationships implied by parameter constraints, since parameters are scaled unevenly even if they share the same label. This does not alter the estimation of the model, only the output. NOTE: It is recommended that you estimate the model normally, and then standardize the output using standardized_estimates(). |
standardize |
will standardize the data before fitting the model, remove the mean
structure of the observed variables, and standardize the output. NOTE: It is recommended that you estimate the model normally, and then standardize the output using standardized_estimates().
|
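As noted above, the recommended workflow is to fit the model on the raw scale and standardize afterwards. A minimal sketch using modsem's standardized_estimates() helper and the oneInt example dataset shipped with the package:

```r
library(modsem)

m1 <- "
  X =~ x1 + x2 + x3
  Z =~ z1 + z2 + z3
  Y =~ y1 + y2 + y3
  Y ~ X + Z + X:Z
"

# Fit on the original scale first ...
est <- modsem_da(m1, data = oneInt, method = "qml")

# ... then standardize the output afterwards, instead of
# passing standardize = TRUE to modsem_da() itself.
standardized_estimates(est)
```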
mean.observed |
should the mean structure of the observed variables be estimated?
This will be overridden by standardize if standardize = TRUE. NOTE: Not recommended unless you know what you are doing. |
cov.syntax |
model syntax for the implied covariance matrix of the exogenous latent variables. |
double |
try to double the number of dimensions of integration used in LMS.
This will be extremely slow, but should give results more similar to Mplus. |
calc.se |
should standard errors be computed? |
FIM |
should the Fisher information matrix be calculated using the observed or the expected values? Must be either "observed" or "expected". |
EFIM.S |
if the expected Fisher information matrix is computed, EFIM.S sets the sample size of the data simulated to compute it. |
OFIM.hessian |
Logical. Should the observed Fisher information matrix be computed using the Hessian? If FALSE, it is computed using the outer product of gradients (OPG) instead. Note that the Hessian is not always positive definite, and is more computationally expensive to calculate. The OPG should always be positive definite, and is a lot faster to compute. If the model is correctly specified and the sample size is large, the two should yield similar results, so switching to the OPG can save a lot of time. Note that the required sample size depends on the complexity of the model. A large difference between the Hessian and OPG estimates suggests misspecification.
|
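One way to apply the heuristic above is to fit the same model twice, once with the Hessian and once with the OPG, and compare the resulting standard errors; a sketch, using the oneInt example data:

```r
library(modsem)

m1 <- "
  X =~ x1 + x2 + x3
  Z =~ z1 + z2 + z3
  Y =~ y1 + y2 + y3
  Y ~ X + Z + X:Z
"

est_hessian <- modsem_da(m1, data = oneInt, method = "lms", OFIM.hessian = TRUE)
est_opg     <- modsem_da(m1, data = oneInt, method = "lms", OFIM.hessian = FALSE)

# Large discrepancies between the two sets of standard errors
# can hint at model misspecification.
summary(est_hessian)
summary(est_opg)
```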
EFIM.parametric |
should data for calculating the expected Fisher information matrix be
simulated parametrically (based on the assumptions and implied parameters
from the model), or non-parametrically (stochastically sampled)? If you believe that
the normality assumptions are violated, the non-parametric option may be preferable. |
robust.se |
should robust standard errors be computed, using the sandwich estimator? |
R.max |
Maximum population size (not sample size) used in the calculation of the expected Fisher information matrix. |
max.iter |
maximum number of iterations. |
max.step |
maximum steps for the M-step in the EM algorithm (LMS). |
start |
starting parameters. |
epsilon |
finite difference for numerical derivatives. |
quad.range |
range, in z-scores, over which numerical integration is performed in LMS
when using quasi-adaptive Gauss-Hermite quadrature. |
adaptive.quad |
should a quasi-adaptive quadrature be used? |
adaptive.frequency |
How often should the quasi-adaptive quadrature be calculated? Defaults to 3, meaning that it is recalculated every third EM-iteration. |
adaptive.quad.tol |
Relative error tolerance for the quasi-adaptive quadrature. |
n.threads |
number of threads to use for parallel processing. |
algorithm |
variant of the EM algorithm to use. |
em.control |
a list of control parameters for the EM algorithm. |
ordered |
Variables to be treated as ordered. The scale of the ordinal variables
is adjusted to correct for unequal intervals. The underlying continuous distributions
are estimated using a Monte Carlo bootstrap approach, and the ordinal values are replaced with
the expected values for each interval. |
ordered.iter |
Maximum number of sampling iterations used to sample the underlying continuous distribution of the
ordinal variables. The default is 100. |
ordered.warmup |
Number of sampling iterations in the warmup phase. |
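The three ordered.* arguments work together; a sketch of declaring ordinal indicators, where myData is a hypothetical data frame and the treatment of y1-y3 as Likert-type items is an illustrative assumption:

```r
library(modsem)

m1 <- "
  X =~ x1 + x2 + x3
  Z =~ z1 + z2 + z3
  Y =~ y1 + y2 + y3
  Y ~ X + Z + X:Z
"

# myData is a hypothetical data frame in which y1-y3 are ordinal.
est_ord <- modsem_da(
  m1, data = myData, method = "lms",
  ordered        = c("y1", "y2", "y3"),
  ordered.iter   = 200L,  # raise the sampling iterations above the default 100
  ordered.warmup = 50L    # and the warmup iterations above the default 25
)
```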
cluster |
Clusters used to compute standard errors robust to non-independence of observations. |
cr1s |
Logical; if TRUE, a CR1S small-sample correction is applied when computing cluster-robust standard errors. |
rcs |
Should latent variable indicators be replaced with reliability-corrected
single-item indicators instead? |
rcs.choose |
Which latent variables should get their indicators replaced with
reliability-corrected single items? |
rcs.scale.corrected |
Should reliability-corrected items be scale-corrected? |
orthogonal.x |
If TRUE, the covariances among exogenous latent variables are constrained to zero. |
orthogonal.y |
If TRUE, the covariances among endogenous latent variables are constrained to zero. |
auto.fix.first |
If TRUE, the factor loading of the first indicator of each latent variable is fixed to 1, setting the scale of the latent variable. |
auto.fix.single |
If TRUE, the residual variance of an observed indicator is fixed to zero when it is the only indicator of a latent variable. |
auto.split.syntax |
Should the model syntax automatically be split into a
linear and a non-linear part? This is done by moving the structural model for
linear endogenous variables (used in interaction terms) into the cov.syntax model. |
... |
additional arguments to be passed to the estimation function. |
Returns a modsem_da object.
library(modsem)
# For more examples, check README and/or GitHub.
# One interaction
m1 <- "
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3
# Inner model
Y ~ X + Z + X:Z
"
## Not run:
# QML Approach
est_qml <- modsem_da(m1, oneInt, method = "qml")
summary(est_qml)
# Theory Of Planned Behavior
tpb <- "
# Outer Model (Based on Hagger et al., 2007)
ATT =~ att1 + att2 + att3 + att4 + att5
SN =~ sn1 + sn2
PBC =~ pbc1 + pbc2 + pbc3
INT =~ int1 + int2 + int3
BEH =~ b1 + b2
# Inner Model (Based on Steinmetz et al., 2011)
INT ~ ATT + SN + PBC
BEH ~ INT + PBC
BEH ~ INT:PBC
"
# LMS Approach
est_lms <- modsem_da(tpb, data = TPB, method = "lms")
summary(est_lms)
## End(Not run)
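If more precision is needed from the LMS estimates, the number of quadrature nodes can be increased at a computational cost; a sketch using the oneInt example data (the value 32 is an arbitrary illustration):

```r
library(modsem)

m1 <- "
  X =~ x1 + x2 + x3
  Z =~ z1 + z2 + z3
  Y =~ y1 + y2 + y3
  Y ~ X + Z + X:Z
"

# More quadrature nodes -> more accurate numerical integration,
# but slower estimation.
est_lms <- modsem_da(m1, data = oneInt, method = "lms", nodes = 32)
summary(est_lms)
```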