psychmeta-package: 'psychmeta': Psychometric meta-analysis toolkit


psychmeta: Psychometric meta-analysis toolkit

Description

Overview of the psychmeta package.

Details

The psychmeta package provides tools for computing bare-bones and psychometric meta-analyses and for generating psychometric data for use in meta-analysis simulations. Currently, psychmeta supports bare-bones, individual-correction, and artifact-distribution methods for meta-analyzing correlations and d values. Please refer to the overview tutorial vignette for an introduction to psychmeta's functions and workflows.
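For orientation, the package installs from CRAN and loads in the standard way:

  # Install and load psychmeta
  install.packages("psychmeta")
  library(psychmeta)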

Running a meta-analysis

The main functions for conducting meta-analyses in psychmeta are ma_r for correlations and ma_d for d values. These functions take meta-analytic data frames containing effect sizes and sample sizes (and, optionally, study labels, moderators, construct and measure labels, and psychometric artifact information) and return the full results of psychometric meta-analyses for all of the specified variable pairs. Examples of correctly formatted meta-analytic datasets for the ma functions are data_r_roth_2015, data_r_gonzalezmule_2014, and data_r_mcdaniel_1994. Individual parts of the meta-analysis process can also be run separately; these functions are described in detail below.
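As a sketch of this workflow (assuming the column names used in the simulated data_r_meas_multi dataset described later: rxyi, n, rxxi, ryyi, x_name, y_name, sample_id, and moderator), an individual-correction meta-analysis could be requested as follows:

  # Individual-correction meta-analysis across all construct pairs
  ma_obj <- ma_r(ma_method = "ic",
                 rxyi = rxyi, n = n,
                 rxx = rxxi, ryy = ryyi,
                 construct_x = x_name, construct_y = y_name,
                 sample_id = sample_id, moderators = moderator,
                 data = data_r_meas_multi)
  summary(ma_obj)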

Preparing a database for meta-analysis

The convert_es function can be used to convert a variety of effect sizes to either correlations or d values. Sporadic psychometric artifacts, such as artificial dichotomization or uneven splits for a truly dichotomous variable, can be individually corrected using correct_r and correct_d. These functions can also be used to compute confidence intervals for observed, converted, and corrected effect sizes. 'Wide' meta-analytic coding sheets can be reformatted to the 'long' data frames used by psychmeta with reshape_wide2long. A correlation matrix and accompanying vectors of information can be similarly reformatted using reshape_mat2dat.
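For example (a minimal sketch; the numeric values are illustrative and the "meas" correction keyword is assumed from correct_r's correction options):

  # Convert a d value (n1 = n2 = 50) to a correlation
  convert_es(es = 1, input_es = "d", output_es = "r", n1 = 50, n2 = 50)

  # Correct an observed correlation for measurement error in both variables
  correct_r(correction = "meas", rxyi = .30, rxx = .80, ryy = .80, n = 100)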

Meta-analytic models

psychmeta can compute bare-bones meta-analyses (i.e., with no corrections for psychometric artifacts), as well as models correcting for measurement error in one or both variables, univariate direct (Case II) range restriction, univariate indirect (Case IV) range restriction, bivariate direct range restriction, bivariate indirect (Case V) range restriction, and multivariate range restriction. Artifacts can be corrected individually or using artifact distributions. Artifact-distribution corrections can be applied using either Schmidt and Hunter's (2015) interactive method or Taylor series approximation models. Meta-analyses can be computed using various weights, including sample size (the default for correlations), inverse variance (computed using either sample or mean effect size; error based on the mean effect size is the default for d values), and weighting methods imported from metafor.
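As an illustration (a sketch; the ma_method, ad_type, and wt_type values shown are assumptions about ma_r's interface rather than an exhaustive list), the estimation approach and weighting scheme are selected directly in the ma_r or ma_d call:

  # Artifact-distribution model with Taylor series approximation corrections
  # and inverse-variance weights based on the mean effect size
  ma_obj <- ma_r(ma_method = "ad", ad_type = "tsa",
                 wt_type = "inv_var_mean",
                 rxyi = rxyi, n = n, rxx = rxxi, ryy = ryyi,
                 construct_x = x_name, construct_y = y_name,
                 data = data_r_meas_multi)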

Preparing artifact distributions for meta-analyses

For individual-correction meta-analyses, reliability and range restriction (u) values should be supplied in the same data frame as the effect sizes and sample sizes. Missing artifact data can be imputed using bootstrap or other imputation methods. For artifact-distribution meta-analyses, artifact distributions can be created automatically by ma_r or ma_d or manually by the create_ad family of functions.
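A minimal sketch of building an artifact distribution manually (the rxxi, n_rxxi, ux, and ni_ux argument names are assumed from the create_ad interface; the values are illustrative):

  # Artifact distribution from observed reliabilities and incumbent u ratios
  ad_x <- create_ad(ad_type = "tsa",
                    rxxi = c(.80, .85, .77), n_rxxi = c(100, 200, 150),
                    ux = c(.85, .90), ni_ux = c(100, 200))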

Moderator analyses

Subgroup moderator analyses are run by supplying a moderator matrix to the ma_r or ma_d families of functions. Both simple and fully hierarchical moderation can be computed. Subgroup moderator analysis results are shown by passing an ma_obj to print(). Meta-regression analyses can be run using metareg.
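For example (a sketch; the moderator column name follows the earlier example, and metareg is applied to an existing ma_obj):

  # Subgroup moderation is requested through the moderators argument
  ma_obj <- ma_r(rxyi = rxyi, n = n,
                 construct_x = x_name, construct_y = y_name,
                 moderators = moderator, data = data_r_meas_multi)
  print(ma_obj)

  # Meta-regression on the same moderator(s)
  ma_obj <- metareg(ma_obj)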

Reporting results and supplemental analyses

Meta-analysis results can be viewed by passing an ma_obj to summary. Bootstrap confidence intervals, leave-one-out analyses, and other sensitivity analyses are available in sensitivity. Supplemental heterogeneity statistics (e.g., Q, I^2) can be computed using heterogeneity. Meta-analytic results can be converted between the r and d metrics using convert_ma. Each ma_obj contains a metafor escalc object in ma$...$escalc that can be passed to metafor's functions for plotting, publication/availability bias analyses, and other supplemental analyses. Second-order meta-analyses of correlations can be computed using ma_r_order2. Example second-order meta-analysis datasets from Schmidt and Oh (2013) are available. Tables of meta-analytic results can be written as Markdown, Word, HTML, or PDF files using the metabulate function, which exports near publication-quality tables that will typically require only minor customization by the user.
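As a sketch (the leave1out and bootstrap arguments to sensitivity and the output_format value for metabulate are assumptions about those functions' interfaces):

  # Summaries and supplemental statistics for an existing ma_obj
  summary(ma_obj)
  ma_obj <- heterogeneity(ma_obj)
  ma_obj <- sensitivity(ma_obj, leave1out = TRUE, bootstrap = TRUE)

  # Export near publication-quality tables to a Word document
  metabulate(ma_obj, file = "meta_analysis_tables", output_format = "word")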

Simulating psychometric meta-analyses

psychmeta can be used to run Monte Carlo simulations for different meta-analytic models. simulate_r_sample and simulate_d_sample simulate samples of correlations and d values, respectively, with measurement error and/or range restriction artifacts. simulate_r_database and simulate_d_database can be used to simulate full meta-analytic databases of sample correlations and d values, respectively, with artifacts. Example datasets simulated with these functions under different meta-analytic models are available (data_r_meas, data_r_uvdrr, data_r_uvirr, data_r_bvdrr, data_r_bvirr, data_r_meas_multi, and data_d_meas_multi). Additional simulation functions are also available.
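A minimal sketch of simulating a single sample (the rho_mat, rel_vec, and sr_vec argument names are assumed from simulate_r_sample's interface; the values are illustrative):

  # One sample of n = 200 with unreliable measures and direct selection on x
  rho_mat <- matrix(c(1, .5,
                      .5, 1), nrow = 2)
  simulate_r_sample(n = 200, rho_mat = rho_mat,
                    rel_vec = c(.80, .80), sr_vec = c(.50, 1))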

Author(s)

Maintainer: Jeffrey A. Dahlke <jdahlke@humrro.org>

Authors:

  • Brenton M. Wiernik [author]

Other contributors:

  • Wesley Gardiner (Unit tests) [contributor]

  • Michael T. Brannick (Testing) [contributor]

  • Jack Kostal (Code for reshape_mat2dat function) [contributor]

  • Sean Potter (Testing; Code for cumulative and leave1out plots) [contributor]

  • John Sakaluk (Code for funnel and forest plots) [contributor]

  • Yuejia (Mandy) Teng (Testing) [contributor]

See Also

Useful links:

