Using the aMNLFA package

The purpose of aMNLFA is to help researchers generate Mplus input files which, when altered to suit each individual researcher's use case, instantiate the steps outlined by Gottfredson et al. (2019).

Please note that the aMNLFA package is provided as a convenient way to generate templates for pieces of code that should be edited, run, and interpreted manually in Mplus. The package can facilitate the process, but all model output must be inspected manually: several vital pieces of information can only be gleaned by actually looking at the output. For instance, aMNLFA does not read in warnings from Mplus about negative standard errors, untrustworthy parameter estimates, and the like. Users must inspect their Mplus inputs and outputs themselves and alter them according to empirical judgment and substantive theory. We offer this package to the scientific community with the aim of making high-quality measurement work easier and more convenient, but the code it generates is unlikely to be perfect. Each and every Mplus input file is meant to be checked, and potentially altered, by the user.

See the Tweet announcement and website updates from the aMNLFA developers for changes from previous versions.

Summary of bugs fixed in the most recent version

The most recent version includes bug fixes in the following functions:

  • aMNLFA_sample
  • aMNLFA_initial
  • aMNLFA_simultaneous
  • aMNLFA_prune
  • aMNLFA_final
  • aMNLFA_scores

Re-running aMNLFA projects with the new version

Notes

Suggested order of operations

  1. Re-run aMNLFA_sample.
  2. Re-run aMNLFA_initial.

  3. Make sure you've deleted all the previous item, varimpactscript, and meanimpactscript .inp and .out files from your folder.

  4. Re-run aMNLFA_simultaneous and run the resulting round2calibration.inp file. Some key things to note (the sketch after this list can help flag missing main effects):
    • If you have covariates with interactions, open round2calibration.inp before running it.
    • If intercept DIF is being estimated for any covariate interaction (e.g., ITEM ON AGE_SEX), make sure the lower-order main effects are also being estimated (e.g., ITEM ON AGE; ITEM ON SEX). If not, add them manually before running round2calibration.inp.
    • If lambda DIF is being estimated for any covariate interaction (check the lambda label that corresponds to the interaction term), make sure the lower-order main effects are also being estimated (find the lambda labels that correspond to them). If not, add them manually before running round2calibration.inp.

  5. Inspect the results from aMNLFA_simultaneous. You can experiment with the new aMNLFA_prune and aMNLFA_DIFplot functions to examine intercept and lambda DIF under various corrections for multiple comparisons.

  6. Run aMNLFA_final.

  7. aMNLFA_final now outputs 2 .csv files into your home directory that will be used in the manual checking steps below:
    • intercept_dif_from_aMNLFA_final.csv where 1s correspond to the intercept DIF estimated in round3calibration.inp
    • lambda_dif_from_aMNLFA_final.csv where 1s correspond to the lambda DIF estimated in round3calibration.inp
  8. Note: round3calibration.inp is the code that generates the final scoring model. If you run round3calibration.inp in the Mplus Diagrammer, you will get an image of a path diagram of your final scoring model.

  9. If you have covariates with interactions, open round3calibration.inp before running it (as in step 4).

    • If intercept DIF is being estimated for any covariate interaction (e.g., ITEM ON AGE_SEX), make sure the lower-order main effects are also being estimated (e.g., ITEM ON AGE; ITEM ON SEX). If not, add them manually before running round3calibration.inp.

    • If lambda DIF is being estimated for any covariate interaction (check the lambda label that corresponds to the interaction term), make sure the lower-order main effects are also being estimated (find the lambda labels that correspond to them). If not, add them manually before running round3calibration.inp.

  10. Run aMNLFA_scores. Run the resulting scoring.inp file.

  11. If you have covariates with interactions, open scoring.inp before running it (as in step 4).
    • If intercept DIF is being estimated for any covariate interaction (e.g., ITEM ON AGE_SEX), make sure the lower-order main effects are also being estimated (e.g., ITEM ON AGE; ITEM ON SEX). If not, add them manually before running scoring.inp.
    • If lambda DIF is being estimated for any covariate interaction (check the lambda label that corresponds to the interaction term), make sure the lower-order main effects are also being estimated (find the lambda labels that correspond to them). If not, add them manually before running scoring.inp.
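
The checks in steps 4, 9, and 11 can be tedious to do by eye. The helper below is a minimal sketch, not part of the aMNLFA package: it assumes that interaction covariates are named by joining their components with an underscore (as in the AGE_SEX example above), and it only scans regression ("ON") statements, so lambda DIF coded through lambda labels in MODEL CONSTRAINT still has to be checked by hand.

    # Hypothetical helper (not part of aMNLFA): scan an Mplus input file for
    # "ON" statements whose predictors include an interaction covariate --
    # assumed here to be named by joining its components with an underscore,
    # e.g., AGE_SEX -- and report any dependent variable for which the
    # lower-order main effects are not also being estimated.
    check_main_effects <- function(inp_file) {
      syntax <- readLines(inp_file)
      syntax <- sub("!.*$", "", syntax)              # drop Mplus comments
      syntax <- toupper(paste(syntax, collapse = " "))
      syntax <- gsub("\\([^)]*\\)", " ", syntax)     # drop parameter labels
      statements <- trimws(strsplit(syntax, ";")[[1]])
      for (stmt in grep("\\bON\\b", statements, value = TRUE)) {
        parts  <- strsplit(stmt, "\\s+")[[1]]
        on_pos <- which(parts == "ON")
        if (length(on_pos) != 1 || on_pos < 2 || on_pos == length(parts)) next
        dv    <- parts[on_pos - 1]
        preds <- parts[(on_pos + 1):length(parts)]
        for (term in preds[grepl("_", preds)]) {
          absent <- setdiff(strsplit(term, "_")[[1]], preds)
          if (length(absent) > 0)
            message(dv, " ON ", term, ": lower-order main effect(s) not found: ",
                    paste(absent, collapse = ", "))
        }
      }
      invisible(NULL)
    }

    # Example use before running each file in Mplus:
    # check_main_effects("round2calibration.inp")
    # check_main_effects("round3calibration.inp")
    # check_main_effects("scoring.inp")

Anything the helper flags should still be verified, and added, in the .inp file itself.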

Manual checking

  1. Conduct the new manual checking steps in Excel, with the accompanying instructional video, to make sure the code is doing what it should with your data.
  2. Rename a "PROJECT X" tab at the bottom of the Excel sheet linked above with your name/project to claim a checking template.
  3. Watch the 18-minute video guide for completing the checking worksheet and reference the EXAMPLE tab in the Excel sheet.
  4. Complete all 3 steps of checking to verify your project before using the factor scores in any way.
  5. NOTE: the checking sheet focuses on DIF, but you are also welcome to check mean/variance impact.

  6. Plot the distribution of the factor scores in R to check for outliers and distributional assumptions relating to your substantive models.

Final outputs and reporting

You will find factor scores for your entire sample as "ETA" in scores.dat (the column names are listed at the bottom of scoring.out) for use in your substantive models.
  • NOTE: there is no standard error for ETA with continuous data (one is saved only if you have some ordinal data).
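
A minimal sketch of reading the scores into R and plotting their distribution (step 6 of the manual checks above). It assumes scores.dat is the space-delimited file saved by scoring.inp with no header row; the column names below are placeholders that must be replaced with the variable order printed at the bottom of scoring.out.

    # Minimal sketch: read the Mplus-saved factor scores and plot their
    # distribution. scores.dat has no header row; replace the placeholder
    # names below with the variable order listed at the bottom of scoring.out.
    scores <- read.table("scores.dat", header = FALSE, na.strings = "*")
    names(scores) <- c("ITEM1", "ITEM2", "ITEM3", "AGE", "SEX", "ID", "ETA")  # placeholders

    hist(scores$ETA, breaks = 50,
         main = "Distribution of factor scores", xlab = "ETA")
    summary(scores$ETA)   # quick check for outliers or implausible values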

Be sure to control for any mean impact covariates in your substantive model.
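
For example (a hypothetical sketch: mydata, OUTCOME, AGE, and SEX are placeholders for your own dataset, outcome, and mean-impact covariates):

    # Hypothetical sketch: merge the factor scores onto the analysis data by ID
    # and include the mean-impact covariates alongside ETA in the substantive model.
    analysis <- merge(mydata, scores[, c("ID", "ETA")], by = "ID")  # mydata: your substantive dataset
    fit <- lm(OUTCOME ~ ETA + AGE + SEX, data = analysis)           # AGE, SEX: your mean-impact covariates
    summary(fit)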


