Mendelian randomization vignette"

MendelianRandomization

MendelianRandomization is a package developed to carry out various Mendelian randomization analyses on summarized genetic data in R. The package provides several methods to assess whether a risk factor (also called an exposure) has a causal effect on an outcome.

library(MendelianRandomization)

The Input

The package uses a special class called MRInput to pass all necessary information into the analyses through one simple structure, rather than supplying it in parts. An MRInput object can be created in one of the following ways:

- by entering the summarized genetic associations manually (slot by slot) using the mr_input() function, or
- by extracting summarized data from PhenoScanner output using the extract.pheno.csv() function (described towards the end of this vignette).

We focus on the first method.

The MRInput object has the following "slots":

- bx: the genetic associations (beta-coefficients) with the exposure
- bxse: the standard errors of the genetic associations with the exposure
- by: the genetic associations with the outcome
- byse: the standard errors of the genetic associations with the outcome
- correlation: an optional matrix of correlations between the genetic variants, supplied via the corr argument of mr_input() in the examples below
- optionally, names for the variants, the exposure, and the outcome, which are used to label output and plots

To generate the MRInput object slot by slot, one can use the mr_input() function:

MRInputObject <- mr_input(bx = ldlc, 
                          bxse = ldlcse, 
                          by = chdlodds, 
                          byse = chdloddsse)

MRInputObject  # example with uncorrelated variants

MRInputObject.cor <- mr_input(bx = calcium, 
                             bxse = calciumse, 
                             by = fastgluc, 
                             byse = fastglucse,
                             corr = calc.rho)

MRInputObject.cor  # example with correlated variants

It is not necessary for all the slots to be filled. For example, several of the methods do not require bxse to be specified; the mr_ivw function will still run with bxse set to zeros. If the vectors bx, bxse, by, and byse are not of equal length, then an error will be reported.
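
For example (a minimal illustration of the point above, not taken from the package documentation), the IVW method with the default simple weights still runs with bxse supplied as a vector of zeros:

mr_ivw(mr_input(bx = ldlc, bxse = rep(0, length(ldlc)),
                by = chdlodds, byse = chdloddsse))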

It is also possible to run the analysis using the syntax:

MRInputObject <- mr_input(ldlc, ldlcse, chdlodds, chdloddsse)

However, care must be taken in this case to give the vectors in the correct order (that is: bx, bxse, by, byse).

The data

Two sets of summarized data are provided as part of this package:

- genetic associations with LDL-cholesterol (ldlc), HDL-cholesterol (hdlc), and triglycerides (trig), and with coronary heart disease risk (chdlodds, on the log odds ratio scale), together with their standard errors (ldlcse, hdlcse, trigse, chdloddsse), for a set of uncorrelated genetic variants; and
- genetic associations with calcium (calcium) and with fasting glucose (fastgluc), their standard errors (calciumse, fastglucse), and a matrix of correlations between the variants (calc.rho), for a set of correlated genetic variants.
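
These objects are loaded with the package, so the summarized data can be inspected directly; for example:

head(cbind(ldlc, ldlcse, chdlodds, chdloddsse))   # lipid / CHD summarized data
dim(calc.rho)                                     # correlation matrix for the calcium variants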

Methods

The MendelianRandomization package supports three main methods for causal estimation: the inverse-variance weighted method, the median-based method, and the MR-Egger method.

Inverse-variance weighted method

The inverse-variance weighted method is equivalent to the standard IV method using individual-level data (the two-stage least squares method). Either a fixed- or a random-effects analysis can be performed; the "default" option is a fixed-effect analysis when there are three variants or fewer, and a random-effects analysis otherwise. The robust option uses robust regression rather than standard regression in the analysis, and the penalized option downweights the contribution to the analysis of genetic variants with outlying (heterogeneous) causal estimates. If a correlation matrix is provided in the MRInput object, then the correlated method is used by default (correl = TRUE), and the robust and penalized arguments are ignored.

The default options for constructing confidence intervals are based on a normal distribution and a 95% confidence level; however, one can use a t-distribution (distribution = "t-dist") and an alternative significance level if desired.

ADDED IN VERSION 0.2.0 The default option for the weights is "simple", which corresponds to an inverse-variance weighted meta-analysis of the ratio estimates by/bx using standard errors byse/bx. The expression for the standard error of the ratio estimate is the first-order term of a delta expansion for the variance of a ratio. The corresponding weights in the regression analysis are byse^-2. These weights have been demonstrated to give reasonable inferences in realistic settings. However, they do not take into account uncertainty in the genetic associations with the exposure. Alternatively, the second-order term of the delta expansion gives the standard error sqrt(byse^2/bx^2 + by^2*bxse^2/bx^4 - 2*psi*by*bxse*byse/bx^3), where psi is the correlation for each variant between its association with the exposure and its association with the outcome. This correlation will be zero (the default value) in a strict two-sample setting in which there is no overlap between the individuals used to estimate associations with the exposure and with the outcome. Paradoxically, although the standard error expression implied by the second-order weights (obtained by setting weights equal to "delta") is larger, resulting in wider confidence intervals for a single genetic variant, the use of second-order weights often leads to less heterogeneity between the estimates for the individual variants, and so to narrower confidence intervals.
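
The difference between the two weighting schemes can be sketched outside the package functions as follows; this is an illustration of the formulae above (assuming the strict two-sample setting, psi = 0), not the package's internal code:

ratio     <- chdlodds / ldlc                      # variant-specific ratio estimates
se.simple <- chdloddsse / abs(ldlc)               # first-order (simple) standard errors
se.delta  <- sqrt(chdloddsse^2/ldlc^2 +
                  chdlodds^2*ldlcse^2/ldlc^4)     # second-order (delta) standard errors with psi = 0
head(cbind(ratio, se.simple, se.delta))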

IVWObject <- mr_ivw(MRInputObject,
                    model = "default",
                    robust = FALSE,
                    penalized = FALSE,
                    correl = FALSE,
                    weights = "simple",
                    psi = 0,
                    distribution = "normal",
                    alpha = 0.05)

IVWObject <- mr_ivw(mr_input(bx = ldlc, bxse = ldlcse,
   by = chdlodds, byse = chdloddsse))

IVWObject

IVWObject.correl <- mr_ivw(MRInputObject.cor,
                    model = "default",
                    correl = TRUE,
                    distribution = "normal",
                    alpha = 0.05)

IVWObject.correl <- mr_ivw(mr_input(bx = calcium, bxse = calciumse,
   by = fastgluc, byse = fastglucse, corr = calc.rho))

IVWObject.correl

Median-based method

The median-based method calculates the median of the variant-specific causal estimates obtained by applying the ratio method to each genetic variant individually. The default option is to calculate a weighted median using the inverse-variance weights. Alternatively, one can calculate a simple (unweighted) median, or a weighted median using penalization of weights for heterogeneous variants. Since the calculation of the standard error requires bootstrapping, the number of bootstrap iterations can be varied. The random seed is set automatically so that results are reproducible; however, the value of the seed can be changed if required.

The median-based method requires data on at least 3 genetic variants. Variants are assumed to be uncorrelated.
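
A rough sketch of the weighted median calculation described above, assuming inverse-variance weights bx^2/byse^2 (the package's internal implementation may differ in detail):

ratio <- chdlodds / ldlc                  # variant-specific ratio estimates
w     <- ldlc^2 / chdloddsse^2            # inverse-variance weights
ord   <- order(ratio)
ratio <- ratio[ord]; w <- w[ord]
p     <- (cumsum(w) - w/2) / sum(w)       # standardized mid-point cumulative weights
i     <- max(which(p < 0.5))
ratio[i] + (ratio[i+1] - ratio[i]) * (0.5 - p[i]) / (p[i+1] - p[i])   # weighted median estimate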

WeightedMedianObject <- mr_median(MRInputObject, 
                                  weighting = "weighted", 
                                  distribution = "normal", 
                                  alpha = 0.05, 
                                  iterations = 10000, 
                                  seed = 314159265)

WeightedMedianObject <- mr_median(mr_input(bx = ldlc, bxse = ldlcse,
  by = chdlodds, byse = chdloddsse))

WeightedMedianObject 

SimpleMedianObject <- mr_median(mr_input(bx = ldlc, bxse = ldlcse,
  by = chdlodds, byse = chdloddsse), weighting = "simple")

SimpleMedianObject

MR-Egger method

The MR-Egger method is implemented here using a random-effects model only. The robust and penalized options are the same as for the inverse-variance weighted method. The method can be used for both correlated and uncorrelated sets of variants. Confidence intervals can be constructed either using a normal distribution (distribution = "normal", the default option), or a t-distribution (distribution = "t-dist").

With a t-distribution, in case of under-dispersion (the estimated residual standard error in the regression model is less than 1), confidence intervals and p-values use either a t-distribution with no correction for under-dispersion, or a normal distribution with the residual standard error set to 1 -- whichever is wider. This means that under-dispersion is not doubly penalized by setting the residual standard error to 1 and using a t-distribution, but also that the confidence intervals are not narrower (p-values not more extreme) than those using a fixed-effect model.
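
A hypothetical numerical sketch of this "wider of the two" rule (the values of est, se, rse, and nsnps below are invented for illustration: est is the slope estimate, se its standard error, rse the estimated residual standard error, and nsnps the number of variants):

est <- 0.5; se <- 0.1; rse <- 0.8; nsnps <- 28; alpha <- 0.05          # hypothetical values
ci.t    <- est + c(-1, 1) * qt(1 - alpha/2, df = nsnps - 2) * se       # t-distribution, no correction
ci.norm <- est + c(-1, 1) * qnorm(1 - alpha/2) * se / rse              # normal, residual SE set to 1
if (diff(ci.t) >= diff(ci.norm)) ci.t else ci.norm                     # report whichever is wider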

The MR-Egger method requires data on at least 3 genetic variants. Variants are permitted to be correlated.

EggerObject <- mr_egger(MRInputObject, 
                        robust = FALSE,
                        penalized = FALSE,
                        correl = FALSE,
                        distribution = "normal",
                        alpha = 0.05)

EggerObject <- mr_egger(mr_input(bx = ldlc, bxse = ldlcse,
  by = chdlodds, byse = chdloddsse))

EggerObject

EggerObject.corr <- mr_egger(MRInputObject.cor, 
                        correl = TRUE,
                        distribution = "normal",
                        alpha = 0.05)

EggerObject.corr <- mr_egger(mr_input(bx = calcium, bxse = calciumse,
  by = fastgluc, byse = fastglucse, corr = calc.rho))

EggerObject.corr

Maximum likelihood method

ADDED IN VERSION 0.2.0 An additional method for causal estimation is the maximum likelihood method, introduced in Burgess et al (2013) "Mendelian randomization analysis with multiple genetic variants using summarized data", doi: 10.1002/gepi.21758. This method has not been used much in the literature, due to its similarity to the inverse-variance weighted method (both methods use the same underlying model), and its relative complexity: the method involves maximizing a likelihood that has one parameter for each genetic variant, plus a causal effect parameter. For large numbers of genetic variants, this entails optimization over a large parameter space, which can lead to numerical issues in the optimization algorithm.

The method naturally assumes a fixed-effect model -- the same causal effect is estimated by each of the genetic variants. However, if there is heterogeneity between the causal estimates from the different genetic variants, then confidence intervals from a fixed-effect model will be overly precise. We implement a random-effects model by multiplying the standard error from the fixed-effect model by an estimate of the residual standard error: the square root of the likelihood ratio heterogeneity test statistic divided by the number of genetic variants less one (provided this quantity is greater than 1; otherwise no modification to the standard error is made). This is similar to the residual standard error in a regression model, which can be calculated from the Cochran Q heterogeneity test statistic by the same steps. As with the IVW method, the default option is to implement a fixed-effect model when there are three variants or fewer, and a random-effects analysis otherwise.
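
A worked sketch of this adjustment with hypothetical values (se.fixed is the fixed-effect standard error, het the likelihood ratio heterogeneity test statistic, and nsnps the number of variants):

se.fixed <- 0.05; het <- 54; nsnps <- 28       # hypothetical values
phi <- max(sqrt(het / (nsnps - 1)), 1)         # residual standard error estimate, floored at 1
se.random <- phi * se.fixed                    # random-effects standard error
se.random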

The method has two main advantages over the IVW method: it allows for uncertainty in the genetic associations with the exposure (which is ignored in the IVW method using simple weights), and it allows for genetic associations with the exposure and with the outcome for each variant to be correlated. This correlation arises if the samples for the associations with the exposure and the outcome overlap. In a strict two-sample Mendelian randomization setting, the correlation parameter psi is set to zero (the default option). If the associations are estimated in the same individuals (complete sample overlap), then the value of psi should be the observed correlation between the exposure and the outcome.

MaxLikObject <- mr_maxlik(MRInputObject, 
                          model = "default",
                          correl = FALSE,
                          psi = 0,
                          distribution = "normal",
                          alpha = 0.05)

MaxLikObject <- mr_maxlik(mr_input(bx = ldlc, bxse = ldlcse,
  by = chdlodds, byse = chdloddsse))

MaxLikObject

MaxLikObject.corr <- mr_maxlik(mr_input(bx = calcium, bxse = calciumse,
  by = fastgluc, byse = fastglucse, corr = calc.rho))

MaxLikObject.corr

Multivariable Mendelian randomization

ADDED IN VERSION 0.3.0 An additional method for causal estimation when genetic variants are associated with multiple risk factors is multivariable Mendelian randomization, introduced in Burgess and Thompson (2015) "Multivariable Mendelian randomization: the use of pleiotropic genetic variants to estimate causal effects", doi: 10.1093/aje/kwu283.

There are two contexts in which we envision the method being used. The first is when there is a set of related risk factors, such as lipid fractions: it is difficult to find genetic predictors of HDL-cholesterol that do not also predict LDL-cholesterol and/or triglycerides. Multivariable Mendelian randomization allows genetic variants to be associated with all the risk factors in the model, provided that they do not influence the outcome via other pathways. The second is when there is a network of risk factors, typically a primary risk factor and a secondary risk factor or mediator. In both cases, causal estimates reflect direct causal effects - the result of intervening on the risk factor under analysis while keeping the other risk factors in the model fixed.

As the multivariable method requires genetic associations with multiple risk factors, a different input function, mr_mvinput, is needed:

MRMVInputObject <- mr_mvinput(bx = cbind(ldlc, hdlc, trig),
                              bxse = cbind(ldlcse, hdlcse, trigse),
                              by = chdlodds, 
                              byse = chdloddsse)

MRMVInputObject

The mr_mvivw command performs an extension of the inverse-variance weighted method for multivariable Mendelian randomization, performing multivariable weighted linear regression (the standard IVW method performs univariable weighted linear regression) with the intercept term set to zero. As with the standard IVW method, a correlation matrix can be specified. Multivariable Mendelian randomization requires the number of genetic variants to exceed the number of risk factors; if this is not the case, the method will return an error.
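
The underlying regression can be sketched directly with the lipid data; this is a minimal illustration of multivariable weighted linear regression with the intercept set to zero and weights byse^-2 (uncorrelated variants), not the package's internal code, and the standard errors reported by mr_mvivw additionally reflect the fixed- or random-effects model:

fit <- lm(chdlodds ~ ldlc + hdlc + trig - 1, weights = chdloddsse^-2)
summary(fit)$coefficients   # point estimates correspond to the multivariable IVW estimates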

MRMVObject <- mr_mvivw(MRMVInputObject, 
                          model = "default",
                          correl = FALSE,
                          distribution = "normal",
                          alpha = 0.05)

MRMVObject <- mr_mvivw(MRMVInputObject)

MRMVObject

Mode-based estimation method of Hartwig et al

ADDED IN VERSION 0.3.0 An additional method for robust causal estimation, which gives consistent estimates when a plurality of genetic variants is valid, is the mode-based estimation method, introduced in Hartwig, Davey Smith and Bowden (2017) "Robust inference in summary data Mendelian randomization via the zero modal pleiotropy assumption", doi: 10.1093/ije/dyx102.

While the median-based method calculates the variant-specific estimates and then obtains the median of these estimates, the mode-based method obtains the "mode" of the estimates. However, in finite samples, no two estimates will be exactly the same, so a mode does not exist. The method proceeds by constructing a kernel-weighted density of the variant-specific estimates, and taking the maximum point of this density as the point estimate. A confidence interval is obtained by bootstrapping.

Several options can be specified: whether the mode should be "weighted" or "unweighted" (default is weighted), whether the standard errors should be calculated using the "simple" or "delta" formula above (default is delta), the value of the bandwidth parameter multiplication factor (default is 1), the random seed, and the number of iterations in the bootstrap procedure.
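
A rough sketch of the modal estimate described above (a kernel-weighted density of the variant-specific ratio estimates, with the point estimate taken at its maximum); the weighting and bandwidth here are illustrative and do not reproduce mr_mbe exactly:

ratio <- chdlodds / ldlc                        # variant-specific ratio estimates
w     <- ldlc^2 / chdloddsse^2                  # inverse-variance weights
dens  <- density(ratio, weights = w / sum(w))   # kernel-weighted density of the estimates
dens$x[which.max(dens$y)]                       # modal (point) estimate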

MBEObject <- mr_mbe(MRInputObject, 
                    weighting = "weighted",
                    stderror = "delta",
                    phi = 1,
                    seed = 314159265,
                    iterations = 10000,
                    distribution = "normal",
                    alpha = 0.05)

MBEObject <- mr_mbe(mr_input(bx = ldlc, bxse = ldlcse,
  by = chdlodds, byse = chdloddsse))

MBEObject

Heterogeneity-penalized method

ADDED IN VERSION 0.3.0 An additional method for robust causal estimation that also gives consistent estimates when a plurality of genetic variants is valid is the heterogeneity-penalized model-averaging method, introduced in Burgess et al (2017) "Improving on a modal-based estimation method: model averaging for consistent and efficient estimation in Mendelian randomization when a plurality of candidate instruments are valid", doi: 10.1101/175372.

There are several aspects of the mode-based estimation method that could be improved upon. The choice of bandwidth is arbitrary and has a large effect on estimates and their confidence intervals. The bootstrap procedure is not ideal, as it makes the assumption that the estimate is normally distributed. The method is not particularly efficient.

The heterogeneity-penalized method uses the same consistency criterion as the mode-based estimation method, but evaluates the modal estimate in a different way. It proceeds by evaluating weights for all subsets of genetic variants (excluding the null set and singletons). Subsets receive greater weight if they include more variants, but are severely downweighted if the variants in the subset have heterogeneous causal estimates. As such, the method will identify the subset with the largest number (by weight) of variants having similar causal estimates.

Confidence intervals are evaluated by calculating a log-likelihood function, and finding all points within a given vertical distance of the maximum of the log-likelihood function (which is the causal estimate). As such, if the log-likelihood function is multimodal, then the confidence interval may include multiple disjoint ranges. This may indicate the presence of multiple causal mechanisms by which the exposure may influence the outcome with different magnitudes of causal effect. As the confidence interval is determined by a grid search, care must be taken when choosing the minimum (CIMin) and maximum (CIMax) values in the search, as well as the step size (CIStep). The default values will not be suitable for all applications.
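
A sketch of the grid-search construction, using a toy bimodal log-likelihood in place of the model-averaged log-likelihood (which is computed internally by mr_hetpen) and the usual likelihood-based threshold of qchisq(1 - alpha, df = 1)/2:

loglik <- function(theta) log(0.7*dnorm(theta, 0, 0.1) + 0.3*dnorm(theta, 0.6, 0.1))  # toy example only
CIMin <- -1; CIMax <- 1; CIStep <- 0.001; alpha <- 0.05
grid <- seq(CIMin, CIMax, by = CIStep)
ll   <- sapply(grid, loglik)
keep <- grid[ll > max(ll) - qchisq(1 - alpha, df = 1)/2]   # points close enough to the maximum
range(keep)                                # overall extent of the confidence set
any(diff(keep) > 1.5 * CIStep)             # TRUE indicates the set splits into disjoint ranges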

HetPenObject <- mr_hetpen(MRInputObject, 
                          prior = 0.5,
                          CIMin = -1,
                          CIMax = 1,
                          CIStep = 0.001,
                          alpha = 0.05)

As estimates and weights are evaluated for every subset of genetic variants, the method is computationally expensive to run; the complexity doubles with each additional variant. We therefore run the method on a subset of 10 genetic variants:

HetPenObject <- mr_hetpen(mr_input(bx = ldlc[1:10], bxse = ldlcse[1:10],
  by = chdlodds[1:10], byse = chdloddsse[1:10]), CIMin = -1, CIMax = 5, CIStep = 0.01)

HetPenObject

As an example of a multimodal confidence interval:

bcrp    =c(0.160, 0.236, 0.149, 0.09, 0.079, 0.072, 0.047, 0.069)
bcrpse  =c(0.006, 0.009, 0.006, 0.005, 0.005, 0.005, 0.006, 0.011)
bchd    =c(0.0237903, -0.1121942, -0.0711906, -0.030848, 0.0479207, 0.0238895,
  0.005528, 0.0214852)
bchdse  =c(0.0149064, 0.0303084, 0.0150552, 0.0148339, 0.0143077, 0.0145478,
  0.0160765, 0.0255237)

HetPenObject.multimode <- mr_hetpen(mr_input(bx = bcrp, bxse = bcrpse,
  by = bchd, byse = bchdse))

HetPenObject.multimode

Summaries of multiple methods

The mr_allmethods function is provided to easily compare results (Estimate, Standard Error, 95% CI, and p-value) from multiple methods. One can look at results from all methods (method = "all"), or obtain a more selective set of results by setting method to "egger", "ivw", or "median". The final option is "main", which gives results from the simple median, weighted median, IVW, and MR-Egger methods only.

MRInputObject <- mr_input(bx = ldlc, 
                          bxse = ldlcse, 
                          by = chdlodds, 
                          byse = chdloddsse)

MRAllObject_all <- mr_allmethods(MRInputObject, method = "all")
MRAllObject_all

MRAllObject_egger <- mr_allmethods(MRInputObject, method = "egger")
MRAllObject_egger

MRAllObject_main <- mr_allmethods(MRInputObject, method = "main")
MRAllObject_main

Graphical summaries of results

Applied to MRInput object - display of data

The mr_plot function has two different functionalities. First, if the function is applied to an MRInput object, then the output is an interactive graph that can be used to explore the associations of the different genetic variants.

The syntax is:

mr_plot(mr_input(bx = ldlc, bxse = ldlcse, by = chdlodds, byse = chdloddsse),
 error = TRUE, orientate = FALSE, line = "ivw")

An interactive graph does not reproduce well in a vignette, so we encourage readers to input this code for themselves. The interactive graph allows the user to pinpoint outliers easily; when the user mouses over one of the points, the name of the variant is shown.

The option error = TRUE plots error bars (95% confidence intervals) for the associations with the exposure and with the outcome. The option orientate = TRUE sets all the associations with the exposure to be positive, and re-orientates the associations with the outcome if needed. This option is encouraged for the MR-Egger method (as otherwise points having negative associations with the exposure can appear to be far from the regression line), although by default it is set to FALSE. The line option can be set to "ivw" (to show the inverse-variance weighted estimate) or to "egger" (to show the MR-Egger estimate).

ADDED IN VERSION 0.2.0 A static version of this graph is also available, by setting interactive = FALSE:

mr_plot(mr_input(bx = ldlc, bxse = ldlcse, by = chdlodds, byse = chdloddsse),
 error = TRUE, orientate = FALSE, line = "ivw", interactive = FALSE)

This version of the graph is less useful for detecting outliers, but easier to save as a file and share with colleagues.

Outliers can be detected using the labels = TRUE option:

mr_plot(mr_input(bx = ldlc, bxse = ldlcse, by = chdlodds, byse = chdloddsse),
 error = TRUE, orientate = FALSE, line = "ivw", interactive = FALSE, labels = TRUE)

The resulting graph is quite ugly, but it is easy to identify the individual points.

Applied to MRAll object - comparison of estimates

Secondly, if the mr_plot function is applied to the output of the mr_allmethods function, estimates from the different methods can be compared graphically.

mr_plot(MRAllObject_all)
mr_plot(MRAllObject_egger)
mr_plot(mr_allmethods(mr_input(bx = hdlc, bxse = hdlcse,
  by = chdlodds, byse = chdloddsse)))

We see that estimates from all methods are similar when LDL-cholesterol is the risk factor, but the MR-Egger estimates differ substantially when HDL-cholesterol is the risk factor.

Extracting association estimates from publicly available datasets

The PhenoScanner bioinformatic tool (http://phenoscanner.medschl.cam.ac.uk) is a curated database of publicly available results from large-scale genetic association studies. The database currently contains over 350 million association results and over 10 million unique genetic variants, mostly single nucleotide polymorphisms.

Our desire is to enable PhenoScanner to be queried directly from the MendelianRandomization package. Currently, PhenoScanner is only available via a web browser. The extract.pheno.csv() function takes the output from the web version of PhenoScanner, and converts this into an MRInput object. PhenoScanner is still under development, and so extract.pheno.csv() should be considered as an experimental function. This function is designed for output from PhenoScanner version 1.1 (Little Miss Sunshine).

The initial steps required to run the extract.pheno.csv() function are:

  1. Open http://www.phenoscanner.medschl.cam.ac.uk/ in your browser.
  2. Input the SNPs in question by uploading a file with the rsIDs or hg19 chrN:pos names (separated by newline).
  3. Choose whether to include proxies or not for SNPs that do not have association data for the given risk factor and/or outcome (and if so, the R-squared threshold value for defining a proxy variant). PhenoScanner currently allows correlation estimates to be taken from the 1000 Genomes or HapMap datasets.
  4. Run the analysis and download the resulting association .csv file.

In order to obtain the relevant SNP summary estimates, run the extract.pheno.csv() function with:

- exposure: a character vector giving the name of the risk factor.
- pmidE: the PubMed ID of the paper where the association estimates with the exposure were first published.
- ancestryE: the ancestry of the participants in whom the association estimates with the exposure were estimated. (For some traits and PubMed IDs, results are given for multiple ancestries.) Usually, ancestry is "European" or "Mixed".
- outcome: a character vector giving the name of the outcome.
- pmidO: the PubMed ID of the paper where the association estimates with the outcome were first published.
- ancestryO: the ancestry of the participants in whom the association estimates with the outcome were estimated.
- file: the file path to the PhenoScanner output .csv file.
- rsq.proxy: the threshold R-squared value for proxies to be included in the analysis. If a proxy variant is used as part of the analysis, this is reported. The default value of rsq.proxy is 1, meaning that only perfect proxies are used.
- snps: the SNPs to be included in the analysis. The default value is "all", indicating that all SNPs in the .csv file are used in the analysis. If only a limited number of SNPs are to be included, snps can be set as a character vector with the names of the SNPs to be included in the analysis.

Two example .csv files from PhenoScanner are included as part of the package: vitD_snps_PhenoScanner.csv (which does not include proxies), and vitD_snps_PhenoScanner_proxies.csv (which includes proxies at an R-squared threshold of 0.6).

path.noproxy <- system.file("extdata", "vitD_snps_PhenoScanner.csv",
  package = "MendelianRandomization")
path.proxies <- system.file("extdata", "vitD_snps_PhenoScanner_proxies.csv",
  package = "MendelianRandomization")


extract.pheno.csv(
  exposure = "log(eGFR creatinine)", pmidE = 26831199, ancestryE = "European",
  outcome = "Tanner stage", pmidO = 24770850, ancestryO = "European", 
  file = path.noproxy)

extract.pheno.csv(
  exposure = "log(eGFR creatinine)", pmidE = 26831199, ancestryE = "European",
  outcome = "Tanner stage", pmidO = 24770850, ancestryO = "European",
  rsq.proxy = 0.6, 
  file = path.proxies)

extract.pheno.csv(
  exposure = "log(eGFR creatinine)", pmidE = 26831199, ancestryE = "European",
  outcome = "Asthma", pmidO = 20860503, ancestryO = "European",
  rsq.proxy = 0.6, 
  file = path.proxies)

The output from the extract.pheno.csv function is an MRInput object: it can be plotted using mr_plot, causal estimates can be obtained from it using mr_ivw, and so on.

Final note of caution

Particularly with the development of Mendelian randomization with summarized data, two-sample Mendelian randomization (where associations with the risk factor and with the outcome are taken from separate datasets), and bioinformatic tools for obtaining and analysing summarized data (including this package), Mendelian randomization is becoming increasingly accessible as a tool for answering epidemiological questions. This is undoubtedly a Good Thing. However, it is important to remember that the difficult part of a Mendelian randomization analysis is not the computational method, but deciding what should go into the analysis: which risk factor, which outcome, and (most importantly) which genetic variants. Hopefully, the availability of these tools will enable less attention to be paid to the mechanics of the analysis, and more attention to these choices. The note of caution is that tools that make Mendelian randomization simple to perform run the risk of encouraging large numbers of speculative analyses to be performed in an unprincipled way. It is important that Mendelian randomization is not performed in a way that avoids critical thought. In releasing this package, the hope is that it will lead to more comprehensive and more reproducible causal inferences from Mendelian randomization, and not simply add more noise to the literature.


