In the introductory
mashr vignettes we assumed that the
data were small enough that it was convenient to read them all in
and do all the analyses on the same data.
In larger applications, particularly eQTL studies, it can be more convenient to do different parts of the analyses on subsets of the tests. Specifically, if you have millions of tests in dozens of conditions, it might be helpful to consider subsets of these millions of tests at any one time. Here we illustrate this idea.
Our suggested workflow is to extract (at least) two subsets of tests from your complete data set:
Results from a subset of "strong" tests corresponding to stronger effects in your study. For example, these tests might have been identified by taking the "top" eQTL in each gene based on univariate test results, or by some other approach such as a simple meta-analysis.
Results from a random subset of all tests. It is important
that these be an unbiased representation of all the tests you are considering,
including null and non-null tests, because
mashr uses these tests
to learn about the amount of signal in the data, and to "correct" estimates
for the fact that many tests are null (analogous to a kind of multiple
testing correction).
We will call the data from these two sets of tests the "strong" data and the "random" data respectively.
To give a sense of appropriate sizes for these
datasets: in our eQTL application in Urbut et al, the
strong data contained about 16k tests (the top eQTL per gene), and
for the random data we used 20k randomly-selected tests. (If you
suspect true effects are very sparse then you might want to increase
the size of the random subset, say to 200k).
The basic analysis strategy is now:
1. Learn the correlation structure among null tests using the random data.
2. Learn data-driven covariance matrices using the strong data.
3. Fit the mashr model to the random tests, to learn the mixture weights
on all the different covariance matrices and scaling coefficients.
4. Compute posterior summaries on the strong tests, using the model fit
from step 3. (At this stage you could actually compute posterior
summaries for any sets of tests you like. For example, you could read in
all your tests in small batches and compute posterior summaries in
batches. But for illustration we will just do it on the strong tests.)
First we simulate some data to illustrate the ideas. To make this
convenient to run, we simulate a small dataset, and we identify the
strong hits using mash_1by1. But in practice you may want to use
methods outside of R to extract the matrices of data corresponding
to the strong and random tests, and then read them in as you need them.
For example, see here for the scripts we use for processing fastQTL output.
library(ashr)
library(mashr)
set.seed(1)
simdata = simple_sims(10000,5,1) # simulates data on 40k tests
# identify a subset of strong tests
m.1by1 = mash_1by1(mash_set_data(simdata$Bhat,simdata$Shat))
strong.subset = get_significant_results(m.1by1,0.05)
# identify a random subset of 5000 tests
random.subset = sample(1:nrow(simdata$Bhat),5000)
We estimate the correlation structure in the null tests from the
random data (not the
strong data because
they will not necessarily contain any null tests).
To do this we set up a temporary data object
from the random tests and use
estimate_null_correlation_simple as in this vignette.
data.temp = mash_set_data(simdata$Bhat[random.subset,],simdata$Shat[random.subset,])
Vhat = estimate_null_correlation_simple(data.temp)
rm(data.temp)
Now we can set up our main data objects with this correlation structure in place:
data.random = mash_set_data(simdata$Bhat[random.subset,],simdata$Shat[random.subset,],V=Vhat)
data.strong = mash_set_data(simdata$Bhat[strong.subset,],simdata$Shat[strong.subset,],V=Vhat)
Now we use the strong tests to set up data-driven covariances.
U.pca = cov_pca(data.strong,5)
U.ed = cov_ed(data.strong, U.pca)
Now we fit mash to the random tests using both data-driven and canonical covariances. (Remember the Crucial Rule! We have to fit using a random
set of tests, and not a dataset that is enriched for strong tests.)
The outputlevel = 1 option means that it will not compute posterior summaries for these tests (which saves time).
U.c = cov_canonical(data.random)
m = mash(data.random, Ulist = c(U.ed,U.c), outputlevel = 1)
Now we can compute posterior summaries etc. for any subset of tests
using the mash fit obtained above. Here we do this for the strong tests.
We use the same mash function as above, but tell it to re-use the fit
from the previous run by specifying g = get_fitted_g(m), fixg = TRUE.
(In mash the parameter g denotes the mixture model that we learned
above.)
m2 = mash(data.strong, g=get_fitted_g(m), fixg=TRUE)
head(get_lfsr(m2))
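As noted earlier, the same fitted model can be applied to any other subset of tests, for example by reading in all tests in small batches. A minimal sketch of that batch-wise workflow is below; the batch size and the names data.batch and lfsr.all are illustrative, and we assume simdata, Vhat and the fit m from above are in the workspace.

```r
# Sketch: compute posterior summaries for all tests in batches,
# re-using the mixture model learned on the random subset.
batch.size = 1000                      # illustrative batch size
n          = nrow(simdata$Bhat)
lfsr.all   = NULL
for (i in seq(1, n, by = batch.size)) {
  idx        = i:min(i + batch.size - 1, n)
  data.batch = mash_set_data(simdata$Bhat[idx,], simdata$Shat[idx,], V = Vhat)
  m.batch    = mash(data.batch, g = get_fitted_g(m), fixg = TRUE)
  lfsr.all   = rbind(lfsr.all, get_lfsr(m.batch))  # accumulate lfsr values
}
```

In a real application each batch would typically be read from disk inside the loop rather than subset from a matrix already in memory.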