Welcome to this introduction to data management with ORFik experiments. This vignette will walk you through how to work effectively with large amounts of sequencing data in ORFik.
ORFik
is an R package containing various functions for analysis of Ribo-seq, RNA-seq, RCP-seq, TCP-seq, ChIP-seq and CAGE data. We advise you to read the ORFikOverview vignette before starting this one.
NGS libraries are becoming more and more numerous. As a bioinformatician or biologist you often work on multi-library experiments, for example 6 RNA-seq and 6 Ribo-seq libraries, split over 3 conditions with 2 replicates each, from which you then make plots or statistics. A lot of things can go wrong when you scale up from just 1 library to many, or even to multiple experiments.
Another problem is that annotation files, like gff and fasta files, must be loaded separately from the NGS data, making it possible to combine the wrong annotation with the NGS data.
To summarize, the ORFik experiment API abstracts what can be done with 1 NGS library and a corresponding organism annotation to the level of multiple libraries and comparisons between them, standardizing plotting, comparisons, library loading and much more.
It is an object that simplifies and error-corrects your NGS workflow, creating a single R object that stores and controls all results relevant to a specific experiment. It contains the following important parts:
Let's say we have a human experiment, consisting of annotation files (a .gtf and a .fasta genome) plus next-generation sequencing (NGS) libraries: RNA-seq, Ribo-seq and CAGE.
Here is an example of how to make the experiment:
First load ORFik
library(ORFik)
In a normal experiment you would usually start with only the bam files from aligning your experiment (and you would split this into 3 experiments: 1 for RNA-seq, 1 for Ribo-seq and 1 for CAGE), but to make this easy for you to replicate we use the ORFik example data.
The minimal amount of information you need to make an ORFik experiment is:
# 1. Pick directory (normally a folder with your aligned bam files)
NGS.dir <- system.file("extdata/Homo_sapiens_sample", "", package = "ORFik")
# 2. .gff/.gtf location
txdb <- system.file("extdata/references/homo_sapiens", "Homo_sapiens_dummy.gtf.db", package = "ORFik")
# 3. fasta genome location
fasta <- system.file("extdata/references/homo_sapiens", "Homo_sapiens_dummy.fasta", package = "ORFik")
# 4. Pick an experiment name
exper.name <- "ORFik_example_human"
list.files(NGS.dir)
An experiment is created from all accepted files in a folder (file extensions given by the type argument; default: bam, bed, wig, ofst), so remember to keep your experiment folder free of other NGS libraries of these types that are not part of the experiment.
# This experiment is intentionally malformed, so we first make only a template:
template <- create.experiment(dir = NGS.dir,  # Directory of the NGS files for the experiment
                              exper.name,     # Experiment name
                              txdb = txdb,    # gtf / gff / gff.db annotation
                              fa = fasta,     # Fasta genome
                              organism = "Homo sapiens", # Scientific name
                              saveDir = NULL) # Create a template instead of a ready experiment
# The experiment contains 3 main parts:
# 1. Annotation, organism, general info:
data.frame(template)[1:3, ]
# 2. NGS data set-up info:
data.frame(template)[4:8, 1:5]
# 3. NGS file paths:
data.frame(template)[4:8, 6]
You can see from the template that it excludes files like .bai, .fai and .rdata, only using the NGS library files defined by the type argument.
You can also see that it tries to guess library types, stages, replicates, conditions etc. It will also try to auto-detect paired-end bam files.
Since every NGS file in an experiment must have a unique set of information columns (there can not be 2 RNA-seq libraries that are both wild type replicate 1, etc.), the create.experiment function will intentionally abort if it can not distinguish all the libraries in some way. (Example: it might find 2 files categorized as RNA-seq replicate 1 where the condition, wild type vs CRISPR mutant, was not auto-detected, so the files would be non-unique.)
To fix the things it did not find (a condition not specified, etc), there are 3 ways:
Let's update the template to have correct tissue-fraction in one of the samples.
template$X5[5:6] <- "heart_valve" # <- Fix the non-unique rows (the tissue fraction is heart valve)
df <- read.experiment(template)   # Read the experiment from the template
Normally you read experiments saved to disk; if you made only a template, save it by doing:
save.experiment(df, file = "path/to/save/experiment.csv")
You can then load the experiment whenever you need it.
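For example, a saved experiment can be read back from its csv file with read.experiment (the path here is the illustrative one used in the save step above, not a real file):

```r
# Read a previously saved experiment back from disk (illustrative path)
df <- read.experiment("path/to/save/experiment.csv")
```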
ORFik comes with an example experiment, which you can load with:
ORFik.template.experiment()
To see the object, just show it like this:
df
When you print the experiment object, you will see that the file paths are hidden. You can access them like this:
filepath(df, type = "default")
ORFik has an extensive syntax for file type variants of your libraries: for example, you may have bam, bigwig and ofst files of the same library, used for different purposes.
If you have varying versions of libraries, like p-shifted, bam, or simplified wig files, you can get the file paths to a specific version with this function, like this:
# RFP = Ribo-seq; the default location for p-shifted reads
filepath(df[df$libtype == "RFP", ], type = "pshifted")[2]
The filepath function uses a reductive search, so if you specify type = "bigwig" and you do not have those files, it will point you to the lower-level "ofst" files. If you do not have those either, it falls back to the default files (usually bam format). This ensures you will always load something; it only affects how fast those files load. It also makes it easy to scale up and generalize your scripts to new experiments.
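As a small sketch of the fallback (assuming the example experiment has no bigwig files, and that "bigwig" is an accepted type value as the text above implies), asking for bigwig simply returns the best available lower-level format:

```r
# No bigwig files exist for this experiment, so filepath falls back
# (ofst if present, otherwise the default bam files)
filepath(df, type = "bigwig")
```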
There are 3 ways to load NGS data. The first is to load the data into an environment. By default all libraries are loaded into .GlobalEnv (the global environment); you can check which environment the output goes to by running:
envExp(df) #This will be the environment
The library names are decided by the columns of the experiment; to see what the names will be, do:
bamVarName(df) #This will be the names:
Now let's auto-load the libraries to the global environment
outputLibs(df) # With default output.mode = "envir".
To remove the outputted libraries:
# remove.experiments(df)
The second way gives you a list, where the elements are the NGS libraries. There are also two ways of loading the list:
outputLibs(df, output.mode = "envirlist")[1:2] # Save NGS to envir, then return as list
# Check the environment; if the libraries exist there, list and return them, if not, only return the list
outputLibs(df, output.mode = "list")[1:2]
The third way is to load manually; this is more secure, but also more cumbersome.
files <- filepath(df, type = "default")
CAGE_loaded_manually <- fimport(files[1])
If you use auto-loading to an environment and you have multiple experiments, there is a chance of non-unique naming: 2 experiments might both have a library called cage. To make sure names are unique, we can add the experiment name to the variable names:
df@expInVarName <- TRUE
bamVarName(df) # This will be the names:
You see here that the experiment name, "ORFik", is part of the variable names. If you are only working on one experiment, you do not need to include the name, since there is no possibility of duplicate naming (the experiment class validates that all names are unique).
Since we want NGS data names without "ORFik", let's remove the loaded libraries and load them again.
df@expInVarName <- FALSE
remove.experiments(df)
outputLibs(df)
There are also many functions to load specific parts of the annotation:
txdb <- loadTxdb(df) # transcript annotation
Let's say we want to load all leaders, CDSs and 3' UTRs that are at least 30 nucleotides long. With an ORFik experiment this is easy:
txNames <- filterTranscripts(txdb, minFiveUTR = 30, minCDS = 30, minThreeUTR = 30)
loadRegions(txdb, parts = c("leaders", "cds", "trailers"), names.keep = txNames)
The regions are now loaded into .GlobalEnv, only keeping transcripts from txNames.
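As a quick sanity check (assuming the chunk above was run), the assigned objects can be inspected directly:

```r
# leaders, cds and trailers now exist in .GlobalEnv, named by transcript
length(cds)     # Number of transcripts kept after filtering
names(cds)[1]   # First kept transcript name
```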
ORFik supports a myriad of plots for experiments. Let's make a plot of coverage over mRNAs, separated into 5' UTR, CDS and 3' UTR, for one of the Ribo-seq libraries in the experiment:
transcriptWindow(leaders, cds, trailers, df[9,], BPPARAM = BiocParallel::SerialParam())
If your experiment contains Ribo-seq, you will want to do P-site shifting:
shiftFootprintsByExperiment(df[df$libtype == "RFP",])
P-shifted Ribo-seq will automatically be stored as .ofst (ORFik-serialized for R) and .wig (track files for IGV/UCSC) files in a ./pshifted folder, relative to the original libraries.
To validate the p-shifting, use shiftPlots. Here is an example made from the data of Bazzini et al. 2014:
df.baz <- read.experiment("zf_bazzini14_RFP") # <- This experiment does not exist for you
shiftPlots(df.baz, title = "Ribo-seq, zebrafish, Bazzini et al. 2014", type = "heatmap")
To see the shifts per library do:
shifts.load(df)
To see the location of pshifted files:
filepath(df[df$libtype == "RFP",], type = "pshifted")
To load p-shifted libraries, you can do:
outputLibs(df[df$libtype == "RFP",], type = "pshifted")
There are also more validation options, shown in the Ribo-seq pipeline vignette.
Bam files are slow to load, and usually you do not need all the information contained in a bam file.
Usually you convert to bed or bigWig files, but ORFik also supports 3 formats for much faster loading and use of the data.
From the bam file, these columns are stored as a serialized file (using Facebook's zstandard compression): seqname, start, cigar, strand and score (the number of identical replicates of that read).
This is the fastest format to use: the loading time of a 10GB Ribo-seq bam file is reduced from ~5 minutes to ~1 second, at ~15MB file size.
convertLibs(df, type = "ofst") # Collapsed
From the bam file, these columns are stored as a text file: seqname, start, end (if not all widths are 1), strand, score (the number of identical replicates of that read) and size (the size of the cigar Ms according to the reference).
The R object loaded from these files is a GRanges object, since the cigar is not needed.
The loading time of a 10GB Ribo-seq bam file is reduced to ~10 seconds, at ~100MB file size.
From the bam file, these columns are stored as a text file: seqname, cigar, start, strand and score (the number of identical replicates of that read).
The R object loaded from these files is a GAlignments or GAlignmentPairs object, since the cigar is needed.
The loading time of a 10GB Ribo-seq bam file is reduced to ~15 seconds, at ~200MB file size.
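By analogy with the ofst conversion shown above, the two text-based formats can presumably be created with convertLibs as well. The type names "bedo" and "bedoc" below are assumptions matching the file descriptions above, not a confirmed API; check ?convertLibs for the exact values supported:

```r
# Assumed type names for the text-based formats (verify against ?convertLibs)
convertLibs(df, type = "bedo")  # GRanges-backed text format (no cigar)
convertLibs(df, type = "bedoc") # GAlignments-backed text format (keeps cigar)
```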
ORFik also supports a full QC report with post-alignment statistics, correlation plots, simplified libraries for plotting, meta coverage, and more.
To optimize the experiment for use in ORFik, always run QCreport; you will then get:
The default QC report:
QCreport(df)
Load count tables for the CDS (FPKM normalized):
countTable(df, region = "cds", type = "fpkm")
Load count tables for all mRNAs (as a DESeq object):
countTable(df, region = "mrna", type = "deseq")
The statistics are saved as csv files in a /QC_STATS/ folder relative to the bam files. To see the statistics, you can do:
QCstats(df)
The plots are also saved in the /QC_STATS/ folder relative to the bam files; this folder will contain all plots from the QC, either as pdf or png files depending on what you specify in the QC call.
QCfolder(df)
To p-shift all Ribo-seq files in an experiment, do:
shiftFootprintsByExperiment(df)
In addition there is a QC report for Ribo-seq, with additional analysis of read lengths and frames. This should only be run after you have p-shifted the reads.
RiboQC.plot(df)
Usually you want to do some operation on multiple datasets. If ORFik does not include a premade function for what you want, you can make it yourself. If your data is in the format of an ORFik experiment, this is simple.
Multicore support on Windows is weak: starting a multithreaded process usually has a high overhead (data is usually copied, not referenced). To make sure everything runs smoothly, it is best to set the default multicore setting to single core.
To do this do:
BiocParallel::register(BiocParallel::SerialParam(), default = TRUE)
The rule is: the more data that needs to be copied, the slower Windows is compared to Unix systems.
Not all functions in ORFik support abstraction from a single library to the experiment syntax. Here are 4 ways to run loops over the data for these cases:
library(BiocParallel) # For parallel computation
outputLibs(df, type = "pshifted") # Output all libraries, fastest way
libs <- bamVarName(df) # <- The names of the libraries that were output
cds <- loadRegion(df, "cds")
# Parallel loop
bplapply(libs, FUN = function(lib, cds) {
    return(entropy(cds, get(lib)))
  }, cds = cds)
# Output all libraries as a list, fastest way
cds <- loadRegion(df, "cds")
# Parallel loop
bplapply(outputLibs(df, type = "pshifted", output.mode = "list"),
         FUN = function(lib, cds) {
    return(entropy(cds, lib))
  }, cds = cds)
files <- filepath(df, type = "pshifted")
cds <- loadRegion(df, "cds")
# Parallel loop
res <- bplapply(files, FUN = function(file, cds) {
    return(entropy(cds, fimport(file)))
  }, cds = cds)
files <- filepath(df, type = "pshifted")
cds <- loadRegion(df, "cds")
# Single-thread loop
lapply(files, FUN = function(file, cds) {
    return(entropy(cds, fimport(file)))
  }, cds = cds)
Since the loops above output lists, a very fast conversion to data.table can be done with:
library(data.table)
outputLibs(df, type = "pshifted")
libs <- bamVarName(df) # <- The names of the libraries that were output
cds <- loadRegion(df, "cds")
# Parallel loop
res <- bplapply(libs, FUN = function(lib, cds) {
    return(entropy(cds, get(lib)))
  }, cds = cds)
res.by.columns <- copy(res) # data.table copies by reference by default
# Add names and convert
names(res.by.columns) <- libs
data.table::setDT(res.by.columns) # Will give 1 column per library
res.by.columns # Now by columns
To merge row-wise do:
res.by.rows <- copy(res)
# Add names and convert
names(res.by.rows) <- libs
res.by.rows <- rbindlist(res.by.rows) # Will bind the rows per library
res.by.rows # Now melted row-wise
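The two shapes can be illustrated with a self-contained mock (this is toy data standing in for the loop results above, not ORFik output; the library names are invented):

```r
library(data.table)
# Mock results: a named list with one table of per-transcript scores per library
res <- list(RFP_WT_r1 = data.table(score = c(0.1, 0.5)),
            RFP_WT_r2 = data.table(score = c(0.2, 0.4)))
# Column-wise: one column per library, rows aligned by transcript
res.by.columns <- as.data.table(lapply(res, `[[`, "score"))
# Row-wise: one long table; idcol records which library each row came from
res.by.rows <- rbindlist(res, idcol = "library")
```

The idcol argument of rbindlist saves you from adding the library name column by hand before binding.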
ORFik contains a whole API for using the ORFik.experiment S4 class to simplify coding over experiments. More examples of use are shown in the documentation and in the Annotation_Alignment and Ribo-seq pipeline vignettes.