BiocStyle::markdown()
options(width=100, max.print=1000)
knitr::opts_chunk$set(
    eval=as.logical(Sys.getenv("KNITR_EVAL", "TRUE")),
    cache=as.logical(Sys.getenv("KNITR_CACHE", "TRUE")))
suppressPackageStartupMessages({
    library(systemPipeR)
    library(systemPipeRdata)
    library(BiocParallel)
    library(Biostrings)
    library(Rsamtools)
    library(GenomicRanges)
    library(ggplot2)
    library(GenomicAlignments)
    library(ShortRead)
})
Note: the most recent version of this tutorial can be found here.
`systemPipeR` provides utilities for building *end-to-end* analysis workflows with automated report generation for next-generation sequencing (NGS) applications such as RNA-Seq, ChIP-Seq, VAR-Seq and many others [@Girke2014-oy]. An important feature is support for running command-line software, such as NGS aligners, on both single machines and compute clusters. This includes both interactive job submissions and batch submissions to the queuing systems of clusters. For instance, `systemPipeR` can be used with most command-line aligners such as `BWA` [@Li2013-oy; @Li2009-oc], `TopHat2` [@Kim2013-vg] and `Bowtie2` [@Langmead2012-bs], as well as the R-based NGS aligners `Rsubread` [@Liao2013-bn] and `gsnap` (`gmapR`) [@Wu2010-iq]. Efficient handling of complex sample sets and experimental designs is facilitated by a well-defined sample annotation infrastructure, which improves the reproducibility and user-friendliness of many typical analysis workflows in the NGS area [@Lawrence2013-kt].
Motivation and advantages of the `systemPipeR` environment:
A central concept for designing workflows within the `systemPipeR` environment is the use of sample management containers called `SYSargs` (see Figure 1). Instances of this S4 object class are constructed by the `systemArgs` function from two simple tabular files: a `targets` file and a `param` file. The latter is optional for workflow steps lacking command-line software. Typically, a `SYSargs` instance stores all sample-level inputs as well as the paths to the corresponding outputs generated by command-line or R-based software producing sample-level output files, such as read preprocessors (trimmed/filtered FASTQ files), aligners (SAM/BAM files), variant callers (VCF/BCF files) or peak callers (BED/WIG files). Each sample-level input/output file operation uses its own `SYSargs` instance. The outpaths of one `SYSargs` instance usually define the sample inputs for the next `SYSargs` instance. This connectivity is established by writing the outpaths with the `writeTargetsout` function to a new `targets` file that serves as input to the next `systemArgs` call. Typically, the user has to provide only the initial `targets` file. All downstream `targets` files are generated automatically. By chaining several `SYSargs` steps together one can construct complex workflows involving many sample-level input/output file operations with any combination of command-line or R-based software.
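The sketch below illustrates this chaining mechanism with two steps used later in this vignette (trimming followed by alignment); the preprocessing call itself is omitted here.
args <- systemArgs(sysma="param/trim.param", mytargets="targets.txt")   # Step 1: read trimming
## ... run the preprocessing step here (see below) ...
writeTargetsout(x=args, file="targets_trim.txt")   # Outpaths of step 1 become inputs of step 2
args2 <- systemArgs(sysma="param/tophat.param", mytargets="targets_trim.txt")   # Step 2: alignment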
The intended way of running `systemPipeR` workflows is via `*.Rnw` or `*.Rmd` files, which can be executed either line-wise in interactive mode or with a single command from R or the command-line using a `Makefile`. This way comprehensive and reproducible analysis reports in PDF or HTML format can be generated in a fully automated manner by making use of the highly functional reporting utilities available for R. Templates for setting up custom project reports are provided as `*.Rnw` files in the `vignettes` subdirectory of this package. The corresponding PDFs of these report templates are linked here: `systemPipeRNAseq`, `systemPipeChIPseq` and `systemPipeVARseq`. To work with `*.Rnw` or `*.Rmd` files efficiently, basic knowledge of `Sweave` or `knitr` and LaTeX or R Markdown v2 is required.
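For instance, a report template can be compiled with a single command from within R. The sketch below assumes the `systemPipeRNAseq.Rnw` template has been copied into the current working directory.
library(knitr)
knit2pdf("systemPipeRNAseq.Rnw")  # Compiles the Rnw template into a PDF report
# For *.Rmd templates, rmarkdown::render("systemPipeRNAseq.Rmd") produces the HTML report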
Relevant workflow parameter files:

* `targets.txt`: initial one provided by the user; downstream `targets_*.txt` files are generated automatically
* `*.param`: defines parameters for input/output file operations, e.g. `trim.param`, `bwa.param`, `vartools.param`, ...
* `*_run.sh`: optional bash script, e.g. `gatk_run.sh`
* `.BatchJobs`: defines the type of scheduler for `BatchJobs`
* `*.tmpl`: specifies the parameters of the scheduler used by a system, e.g. Torque, SGE, StarCluster, Slurm, etc.

The R software for running `systemPipeR` and `systemPipeRdata` can be downloaded from CRAN. The `systemPipeR` environment can be installed from R using the install command:
if (!requireNamespace("BiocManager", quietly=TRUE)) install.packages("BiocManager")
BiocManager::install("systemPipeR")  # Installs systemPipeR from Bioconductor
BiocManager::install("tgirke/systemPipeRdata", build_vignettes=TRUE, dependencies=TRUE)  # Installs systemPipeRdata from GitHub
library("systemPipeR") # Loads the package library(help="systemPipeR") # Lists package info vignette("systemPipeR") # Opens vignette
The mini sample FASTQ files used by this overview vignette as well as the associated workflow reporting vignettes can be downloaded from here. Preferentially, they should be loaded from the `systemPipeRdata` package as shown below. The chosen data set `SRP010938` contains 18 paired-end (PE) read sets from *Arabidopsis thaliana* [@Howard2013-fq]. To minimize processing time during testing, each FASTQ file has been subsetted to 90,000-100,000 randomly sampled PE reads that map to the first 100,000 nucleotides of each chromosome of the *A. thaliana* genome. The corresponding reference genome sequence (FASTA) and its GFF annotation files (provided in the same download) have been truncated accordingly. This way the entire test sample data set requires less than 200MB of disk storage space. A PE read set has been chosen for this test data set for flexibility, because it can be used for testing analysis routines requiring either SE (single-end) reads or PE reads.
Load one of the available NGS workflow templates into your current working directory (here for rnaseq).
library(systemPipeRdata)
genWorkenvir(workflow="rnaseq")
setwd("rnaseq")
The working environment of the sample data contains the following preconfigured directory structure:
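A sketch of this layout, inferred from the paths used throughout this vignette, is shown below; the exact contents may vary by workflow template.
list.files(".")  # Inspect the workflow directory after genWorkenvir()/setwd()
# Expected layout (a sketch):
#   targets.txt, *.Rnw/*.Rmd, Makefile   workflow and report files
#   param/                               parameter files (*.param, *.tmpl, .BatchJobs.R)
#   data/                                input files, e.g. FASTQ and reference genome
#   results/                             analysis outputs, e.g. alignments and QC reports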
The sample workflows provided by the package are based on the above directory structure, where directory names are indicated in grey. Users can change this structure as needed, but need to adjust the code in their workflows accordingly.
targets file
The `targets` file defines all input files (e.g. FASTQ, BAM, BCF) and sample comparisons of an analysis workflow. The following shows the format of a sample `targets` file provided by this package. In a targets file with a single type of input files, here FASTQ files of single-end (SE) reads, the first three columns are mandatory, including their column names, while four mandatory columns are required for FASTQ files of PE reads. All subsequent columns are optional and any number of additional columns can be added as needed.
targets file for single-end (SE) samples
library(systemPipeR)
read.delim("targets.txt", comment.char = "#")
targets file for paired-end (PE) samples
read.delim("targetsPE.txt", comment.char = "#")[1:2,1:6]
Sample comparisons are defined in the header lines of the `targets` file starting with `# <CMP>`.
readLines("targets.txt")[1:4]
The function `readComp` imports the comparisons and stores them in a `list`. Alternatively, `readComp` can obtain the comparison information from the corresponding `SYSargs` object (see below). Note, the header lines are optional. They are mainly useful for controlling comparative analyses according to certain biological expectations, such as simple pairwise comparisons in RNA-Seq experiments.
readComp(file="targets.txt", format="vector", delim="-")
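The same comparison information can alternatively be extracted from a `SYSargs` object; in this sketch, `args` is assumed to be an object constructed with `systemArgs` as shown in the next section.
readComp(args, format="vector", delim="-")  # Reads comparisons from a SYSargs object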
param file and SYSargs container
The `param` file defines the parameters of the command-line software. The following shows the format of a sample `param` file provided by this package.
read.delim("param/tophat.param", comment.char = "#")
The `systemArgs` function imports the definitions of both the `param` file and the `targets` file, and stores all relevant information in a `SYSargs` object. To run the pipeline without command-line software, one can assign `NULL` to `sysma` instead of a `param` file. In addition, one can start the `systemPipeR` workflow with pre-generated BAM files by providing a targets file where the `FileName` column gives the paths to the BAM files and `sysma` is assigned `NULL`.
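The sketch below illustrates both options; `targets_bam.txt` is a hypothetical targets file whose `FileName` column points to pre-generated BAM files.
args_r <- systemArgs(sysma=NULL, mytargets="targets.txt")        # R-based step without command-line software
args_bam <- systemArgs(sysma=NULL, mytargets="targets_bam.txt")  # Start the workflow from pre-generated BAM files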
args <- suppressWarnings(systemArgs(sysma="param/tophat.param", mytargets="targets.txt"))
args
Several accessor functions are available that are named after the slot names of the `SYSargs` object class.
names(args)
modules(args)
cores(args)
outpaths(args)[1]
sysargs(args)[1]
Load package
library(systemPipeR)
Construct `SYSargs` object from `param` and `targets` files.
args <- systemArgs(sysma="param/trim.param", mytargets="targets.txt")
The function `preprocessReads` makes it possible to apply predefined or custom read preprocessing functions to all FASTQ files referenced in a `SYSargs` container, such as quality filtering or adaptor trimming routines. The paths to the resulting output FASTQ files are stored in the `outpaths` slot of the `SYSargs` object. Internally, `preprocessReads` uses the `FastqStreamer` function from the `ShortRead` package to stream through large FASTQ files in a memory-efficient manner. The following example performs adaptor trimming with the `trimLRPatterns` function from the `Biostrings` package. After the trimming step a new targets file is generated (here `targets_trim.txt`) containing the paths to the trimmed FASTQ files. The new targets file can be used for the next workflow step with an updated `SYSargs` instance, e.g. running the NGS alignments using the trimmed FASTQ files.
preprocessReads(args=args, Fct="trimLRPatterns(Rpattern='GCCCGGGTAA', subject=fq)",
                batchsize=100000, overwrite=TRUE, compress=TRUE)
writeTargetsout(x=args, file="targets_trim.txt")
The following example shows how one can design a custom read preprocessing function using utilities provided by the `ShortRead` package, and then run it in batch mode with the `preprocessReads` function (here on paired-end reads).
args <- systemArgs(sysma="param/trimPE.param", mytargets="targetsPE.txt") filterFct <- function(fq, cutoff=20, Nexceptions=0) { qcount <- rowSums(as(quality(fq), "matrix") <= cutoff) fq[qcount <= Nexceptions] # Retains reads where Phred scores are >= cutoff with N exceptions } preprocessReads(args=args, Fct="filterFct(fq, cutoff=20, Nexceptions=0)", batchsize=100000) writeTargetsout(x=args, file="targets_PEtrim.txt")
The following `seeFastq` and `seeFastqPlot` functions generate and plot a series of useful quality statistics for a set of FASTQ files, including per-cycle quality box plots, base proportions, base-level quality trends, relative k-mer diversity, length and occurrence distribution of reads, number of reads above quality cutoffs and mean quality distribution.
fqlist <- seeFastq(fastq=infile1(args), batchsize=10000, klength=8)
pdf("./results/fastqReport.pdf", height=18, width=4*length(fqlist))
seeFastqPlot(fqlist)
dev.off()
Parallelization of the QC report on a single machine with multiple cores
args <- systemArgs(sysma="param/tophat.param", mytargets="targets.txt") f <- function(x) seeFastq(fastq=infile1(args)[x], batchsize=100000, klength=8) fqlist <- bplapply(seq(along=args), f, BPPARAM = MulticoreParam(workers=8)) seeFastqPlot(unlist(fqlist, recursive=FALSE))
Parallelization of the QC report via a scheduler (e.g. Torque) across several compute nodes
library(BiocParallel); library(BatchJobs)
f <- function(x) {
    library(systemPipeR)
    args <- systemArgs(sysma="param/tophat.param", mytargets="targets.txt")
    seeFastq(fastq=infile1(args)[x], batchsize=100000, klength=8)
}
funs <- makeClusterFunctionsTorque("torque.tmpl")
param <- BatchJobsParam(length(args), resources=list(walltime="20:00:00", nodes="1:ppn=1", memory="6gb"),
                        cluster.functions=funs)
register(param)
fqlist <- bplapply(seq(along=args), f)
seeFastqPlot(unlist(fqlist, recursive=FALSE))
Tophat2
Build Bowtie2 index.
args <- systemArgs(sysma="param/tophat.param", mytargets="targets.txt") moduleload(modules(args)) # Skip if module system is not available system("bowtie2-build ./data/tair10.fasta ./data/tair10.fasta")
Execute `SYSargs` on a single machine without submitting to the queuing system of a compute cluster. This way the input FASTQ files will be processed sequentially. If available, multiple CPU cores can be used for processing each file. The number of CPU cores (here 4) to use for each process is defined in the `*.param` file. With `cores(args)` one can return this value from the `SYSargs` object. Note, if a module system is not installed or used, then the corresponding `*.param` file needs to be edited accordingly, by either providing an empty field in the line(s) starting with `module` or by deleting these lines.
bampaths <- runCommandline(args=args)
Alternatively, the computation can be greatly accelerated by processing many files in parallel using several compute nodes of a cluster, where a scheduling/queuing system is used for load balancing. To avoid over-subscription of CPU cores on the compute nodes, the value from `cores(args)` is passed on to the submission command, here `nodes` in the `resources` list object. The number of independent parallel cluster processes is defined under the `Njobs` argument. The following example will run 18 processes in parallel, each using 4 CPU cores. If the resources available on a cluster allow running all 18 processes at the same time, then the shown sample submission will utilize in total 72 CPU cores. Note, `clusterRun` can be used with most queuing systems, as it is based on utilities from the `BatchJobs` package, which supports the use of template files (`*.tmpl`) for defining the run parameters of different schedulers. To run the following code, one needs to have both a conf file (see `.BatchJobs` samples here) and a template file (see `*.tmpl` samples here) for the queuing system available on a system. The following example uses the sample conf and template files for the Torque scheduler provided by this package.
resources <- list(walltime="20:00:00", nodes=paste0("1:ppn=", cores(args)), memory="10gb")
reg <- clusterRun(args, conffile=".BatchJobs.R", template="torque.tmpl", Njobs=18, runid="01",
                  resourceList=resources)
waitForJobs(reg)
Useful commands for monitoring progress of submitted jobs
showStatus(reg)
file.exists(outpaths(args))
sapply(1:length(args), function(x) loadResult(reg, x))  # Works after job completion
Generate table of read and alignment counts for all samples.
read_statsDF <- alignStats(args)
write.table(read_statsDF, "results/alignStats.xls", row.names=FALSE, quote=FALSE, sep="\t")
The following shows the first four lines of the sample alignment stats file provided by the `systemPipeR` package. For simplicity, the number of PE reads is multiplied here by 2 to approximate proper alignment frequencies where each read in a pair is counted.
read.table(system.file("extdata", "alignStats.xls", package="systemPipeR"), header=TRUE)[1:4,]
Parallelization of read/alignment stats on a single machine with multiple cores
f <- function(x) alignStats(args[x])
read_statsList <- bplapply(seq(along=args), f, BPPARAM = MulticoreParam(workers=8))
read_statsDF <- do.call("rbind", read_statsList)
Parallelization of read/alignment stats via a scheduler (e.g. Torque) across several compute nodes
library(BiocParallel); library(BatchJobs)
f <- function(x) {
    library(systemPipeR)
    args <- systemArgs(sysma="tophat.param", mytargets="targets.txt")
    alignStats(args[x])
}
funs <- makeClusterFunctionsTorque("torque.tmpl")
param <- BatchJobsParam(length(args), resources=list(walltime="20:00:00", nodes="1:ppn=1", memory="6gb"),
                        cluster.functions=funs)
register(param)
read_statsList <- bplapply(seq(along=args), f)
read_statsDF <- do.call("rbind", read_statsList)
The genome browser IGV supports reading of indexed/sorted BAM files via web URLs. This way, creating unnecessary copies of these large files can be avoided. To enable this approach, an HTML directory with HTTP access needs to be available in the user account (e.g. `home/publichtml`) of a system. If this is not the case, then the BAM files need to be moved or copied to the system where IGV runs. In the following, `htmldir` defines the path to the HTML directory with HTTP access where the symbolic links to the BAM files will be stored. The corresponding URLs will be written to a text file specified under the `urlfile` argument.
symLink2bam(sysargs=args, htmldir=c("~/.html/", "somedir/"),
            urlbase="http://myserver.edu/~username/", urlfile="IGVurl.txt")
Bowtie2 (e.g. for miRNA profiling)
The following example runs Bowtie2 as a single process without submitting it to a cluster.
args <- systemArgs(sysma="bowtieSE.param", mytargets="targets.txt") moduleload(modules(args)) # Skip if module system is not available bampaths <- runCommandline(args=args)
Alternatively, submit the job to compute nodes.
resources <- list(walltime="20:00:00", nodes=paste0("1:ppn=", cores(args)), memory="10gb")
reg <- clusterRun(args, conffile=".BatchJobs.R", template="torque.tmpl", Njobs=18, runid="01",
                  resourceList=resources)
waitForJobs(reg)
The following example runs BWA-MEM as a single process without submitting it to a cluster.
args <- systemArgs(sysma="param/bwa.param", mytargets="targets.txt") moduleload(modules(args)) # Skip if module system is not available system("bwa index -a bwtsw ./data/tair10.fasta") # Indexes reference genome bampaths <- runCommandline(args=args[1:2])
The following example shows how one can use the R-based aligner `Rsubread` within the `systemPipeR` environment, as well as other R-based functions that read from input files and write to output files.
library(Rsubread)
args <- systemArgs(sysma="param/rsubread.param", mytargets="targets.txt")
buildindex(basename=reference(args), reference=reference(args))  # Build indexed reference genome
align(index=reference(args), readfile1=infile1(args)[1:4], input_format="FASTQ",
      output_file=outfile1(args)[1:4], output_format="SAM", nthreads=8, indels=1, TH1=2)
for(i in seq(along=outfile1(args)))
    asBam(file=outfile1(args)[i], destination=gsub(".sam", "", outfile1(args)[i]),
          overwrite=TRUE, indexDestination=TRUE)
gsnap
Another R-based short read aligner is `gsnap` from the `gmapR` package [@Wu2010-iq]. The code sample below shows how to run this aligner on multiple nodes of a compute cluster.
library(gmapR); library(BiocParallel); library(BatchJobs)
args <- systemArgs(sysma="param/gsnap.param", mytargets="targetsPE.txt")
gmapGenome <- GmapGenome(reference(args), directory="data", name="gmap_tair10chr/", create=TRUE)
f <- function(x) {
    library(gmapR); library(systemPipeR)
    args <- systemArgs(sysma="gsnap.param", mytargets="targetsPE.txt")
    gmapGenome <- GmapGenome(reference(args), directory="data", name="gmap_tair10chr/", create=FALSE)
    p <- GsnapParam(genome=gmapGenome, unique_only=TRUE, molecule="DNA", max_mismatches=3)
    o <- gsnap(input_a=infile1(args)[x], input_b=infile2(args)[x], params=p, output=outfile1(args)[x])
}
funs <- makeClusterFunctionsTorque("torque.tmpl")
param <- BatchJobsParam(length(args), resources=list(walltime="20:00:00", nodes="1:ppn=1", memory="6gb"),
                        cluster.functions=funs)
register(param)
d <- bplapply(seq(along=args), f)
Load one of the available NGS workflows into your current working directory (here for varseq).
setwd("../") genWorkenvir(workflow="varseq") setwd("varseq")
Next, run the chosen sample workflow `systemPipeVARseq_single` (PDF, Rnw) by executing `make -B` from the command-line within the `varseq` directory. Alternatively, one can run the code from the provided `*.Rnw` template file from within R interactively. Much more detailed information is available in `systemPipeR`'s overview and workflow vignettes available here.
The workflow includes the following steps:

* Read preprocessing and quality reporting
* Read alignments
* Variant calling, e.g. with GATK, BCFtools
* Variant filtering and annotation

The variant calling steps are not evaluated because the required software is not available on all systems.
This demonstration will run the above VAR-Seq workflow in parallel on multiple compute nodes of an HPC system using Torque and Maui as the scheduler and load balancing system. The workflow template provided for this step is called `systemPipeVARseq.Rnw` (PDF, Rnw).
sessionInfo()