Getting Started with LOLA

# These settings make the vignette prettier
knitr::opts_chunk$set(results="hold", message=FALSE)

LOLA Bioconductor package

Biological data is often compared to reference databases in search of interesting patterns of enrichment and depletion. For example, gene set analysis has been pivotal for making connections between diverse types of genomic data. However, it suffers from one major limitation: it requires gene-centric data. This is becoming increasingly limiting as our understanding of gene regulation advances. It has become evident that gene expression and chromatin organization are controlled by hundreds of thousands of enhancers and other functional elements, which are often difficult to map to gene symbols. The increasing emphasis on genomic region sets has been propelled by next-generation sequencing technology, which produces data most naturally analyzed in the context of genomic regions -- as peaks and segmentations. The research community has now established large catalogs of regulatory elements and other genomic features across many cell types. LOLA makes use of these catalogs to perform enrichment analysis of genomic ranges.

Preparing the analysis

In this vignette, you'll use small example datasets that come with the LOLA package to get a first look at the most common functions in a LOLA workflow.

You need 3 things to run a LOLA analysis:

  1. A region database.
  2. userSets: Regions of interest (a single set of regions as a GRanges object, or multiple sets of regions as a GRangesList object)
  3. userUniverse: The set of regions tested for inclusion in your sets of regions of interest (a single GRanges object). A sketch of constructing these inputs follows this list.
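
If you were starting from your own coordinates instead of the sample data used below, a minimal sketch of building these inputs with the GenomicRanges package might look like this (the object names and coordinates here are invented purely for illustration):

library("GenomicRanges")

# One hypothetical set of regions of interest
mySet = GRanges(seqnames="chr1",
    ranges=IRanges(start=c(100, 500, 900), end=c(200, 600, 1000)))

# Multiple sets of interest can be bundled into a GRangesList
myUserSets = GRangesList(setA=mySet)

# The universe: every region that was tested, which should cover the user sets
myUniverse = GRanges(seqnames="chr1",
    ranges=IRanges(start=seq(1, 2000, by=100), width=100))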

Let's load an example regionDB with loadRegionDB(). Here's a small example that comes with LOLA. The database location should point to a folder that contains collection subfolders:

library("LOLA")
dbPath = system.file("extdata", "hg19", package="LOLA")
regionDB = loadRegionDB(dbPath)
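
If you later want to assemble your own region database, the folder pointed to by dbPath is laid out roughly like the sketch below (shown as comments; treat the file names as an approximation and see the LOLA documentation on building a custom database for the details):

# hg19/                    <- dbPath points here (one folder per genome)
#   ucsc_example/          <- one collection subfolder
#     collection.txt       <- optional collection-level annotation
#     index.txt            <- optional per-file annotation
#     regions/             <- one file per region set
#       vistaEnhancers.bed
#       ...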

The regionDB is an R list object with a few elements:

names(regionDB)

Now with the database loaded, let's load up some sample data (the regions of interest, and the tested universe):

data("sample_input", package="LOLA") # load userSets
data("sample_universe", package="LOLA") # load userUniverse

Now we have a GRanges object called userSets and a GRanges object called userUniverse. This is all we need to run the enrichment calculation.

Run the analysis

runLOLA() will test the overlap between your userSets and each region set in the regionDB.

locResults = runLOLA(userSets, userUniverse, regionDB, cores=1)

runLOLA tests for pairwise overlap between each user set and each region set in regionDB. It then uses Fisher's exact test to assess the significance of each overlap. The results are a data.table with several columns:

colnames(locResults)
head(locResults)

If you're not familiar with how data.table works in R, it's worth reading some of the documentation of this powerful package. The userSet and dbSet columns are indexes into the respective GRangesList objects, identifying each pairwise comparison. A series of columns describe the results of the statistical test, such as pValueLog, logOdds, and the actual values from the contingency table (support is the size of the overlap, and b, c, and d complete the 2x2 table). The rank columns simply rank the tests by pValueLog, logOdds, or support; following these are a series of columns annotating the database regions, depending on how you populated the index table in the regionDB folder.
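
To make the contingency table concrete, here is a small standalone illustration (not LOLA's internal code) of the kind of test behind each row; the counts are invented, and in the real results support, b, c, and d fill the corresponding cells:

# Hypothetical 2x2 table for one userSet / dbSet comparison:
#                          in dbSet   not in dbSet
#   userSet regions            40           60       # 40 corresponds to "support"
#   rest of the universe      500         9400
contingency = matrix(c(40, 60, 500, 9400), nrow=2, byrow=TRUE)
fisher.test(contingency)

# pValueLog in the results is a minus-log-transformed p-value, so larger
# values mean stronger enrichment; for example, with a -log10 transformation:
-log10(fisher.test(contingency)$p.value)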

You can explore these results in R by, for example, ranking with different orders:

locResults[order(support, decreasing=TRUE),]

You can also order by one of the rank columns. Lower ranks indicate stronger associations, so sorting in increasing order puts the top-ranked region sets first:

locResults[order(maxRnk, decreasing=FALSE),]
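
Because the results are a data.table, you can also filter rows directly before ordering. For example (the cutoff of 10 below is arbitrary, chosen only for illustration):

# Keep only strongly significant overlaps, most significant first
locResults[pValueLog > 10][order(pValueLog, decreasing=TRUE)]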

And finally, you can write the results to file like this:

writeCombinedEnrichment(locResults, outFolder="lolaResults")

By default, this function will write the entire table to a TSV file. I recommend using the includeSplits parameter, which tells the function to also write out additional tables subsetted by userSet, so that each user set you test gets its own results table. This just makes it a little easier to explore the results.

writeCombinedEnrichment(locResults, outFolder="lolaResults", includeSplits=TRUE)

Exploring LOLA Results

Say you'd like to know which regions are responsible for an enrichment you see; in other words, you'd like to extract the regions from a user set that actually overlap a particular database region set. For this, you can use the function extractEnrichmentOverlaps():

oneResult = locResults[2,]
extractEnrichmentOverlaps(oneResult, userSets, regionDB)
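
If you want to save those overlapping regions for use outside of R, one option is to export them to a BED file; this is just a sketch, assuming you have the rtracklayer package installed, and the output file name is made up:

library("rtracklayer")
overlappingRegions = extractEnrichmentOverlaps(oneResult, userSets, regionDB)
# If the returned object is list-like (e.g. a GRangesList), take one element,
# e.g. overlappingRegions = overlappingRegions[[1]], before exporting
export(overlappingRegions, "enrichedOverlaps.bed")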

Extracting certain region sets from a database

If you have a large database, you may want to use the LOLA database format for other projects or for additional follow-up analysis. In that case, you may need just a single region set within a database, or perhaps just a few of them. LOLA provides a function to extract particular region sets from either a loaded or an unloaded database.

Say you just want an object with regions from the "vistaEnhancers" region set. You can grab it from a loaded database like this:

getRegionSet(regionDB, collections="ucsc_example", filenames="vistaEnhancers.bed")

Or, if you haven't already loaded the database, you can just give the path to the database and LOLA will only load the specific region set(s) you are interested in. This can take more than one filename or collection:

getRegionSet(dbPath, collections="ucsc_example", filenames="vistaEnhancers.bed")
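
For example, to grab two region sets at once you can pass a vector of filenames (the second file name below is just an assumed example; substitute one that exists in your collection):

getRegionSet(dbPath, collections="ucsc_example",
    filenames=c("vistaEnhancers.bed", "cpgIslandExt.bed"))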

Now that you have a basic idea of what the functions are, you can follow some other vignettes, such as Using LOLA Core, to see how this works on a realistic dataset.


