Using scran to analyze single-cell RNA-seq data

require(knitr)
opts_chunk$set(error=FALSE, message=FALSE, warning=FALSE)
library(scran)
library(BiocParallel)
register(SerialParam()) # avoid problems with fastMNN parallelization.
set.seed(100)

Introduction

Single-cell RNA sequencing (scRNA-seq) is a widely used technique for profiling gene expression in individual cells. This allows molecular biology to be studied at a resolution that cannot be matched by bulk sequencing of cell populations. The r Biocpkg("scran") package implements methods to perform low-level processing of scRNA-seq data, including cell cycle phase assignment, scaling normalization, variance modelling and testing for correlated genes. This vignette provides brief descriptions of these methods and some toy examples to demonstrate their use.

Note: A more comprehensive description of the use of r Biocpkg("scran") (along with other packages) in a scRNA-seq analysis workflow is available at https://osca.bioconductor.org.

Setting up the data

We start off with a count matrix where each row is a gene and each column is a cell. These can be obtained by mapping read sequences to a reference genome, and then counting the number of reads mapped to the exons of each gene. (See, for example, the r Biocpkg("Rsubread") package to do both of these tasks.) Alternatively, pseudo-alignment methods can be used to quantify the abundance of each transcript in each cell. For simplicity, we will pull out an existing dataset from the r Biocpkg("scRNAseq") package.

library(scRNAseq)
sce <- GrunPancreasData()
sce

This particular dataset is taken from a study of the human pancreas with the CEL-seq protocol [@grun2016denovo]. It is provided as a SingleCellExperiment object (from the r Biocpkg("SingleCellExperiment") package), which contains the raw data and various annotations. We perform some cursory quality control to remove cells with low total counts or high spike-in percentages:

library(scuttle)
qcstats <- perCellQCMetrics(sce)
qcfilter <- quickPerCellQC(qcstats, percent_subsets="altexps_ERCC_percent")
sce <- sce[,!qcfilter$discard]
summary(qcfilter$discard)

Normalizing cell-specific biases

Cell-specific biases are normalized using the computeSumFactors method, which implements the deconvolution strategy for scaling normalization [@lun2016pooling]. This computes size factors that are used to scale the counts in each cell. The assumption is that most genes are not differentially expressed (DE) between cells, such that any differences in expression across the majority of genes represent some technical bias that should be removed.

library(scran)
clusters <- quickCluster(sce)
sce <- computeSumFactors(sce, clusters=clusters)
summary(sizeFactors(sce))

For larger data sets, clustering should be performed with the quickCluster function before normalization. Briefly, cells are grouped into clusters of similar expression; normalization is applied within each cluster to compute size factors for each cell; and the factors are rescaled by normalization between clusters. This reduces the risk of violating the above assumption when many genes are DE between clusters in a heterogeneous population. We also assume that quality control on the cells has already been performed, as low-quality cells with few expressed genes can often have negative size factor estimates.
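As a quick sanity check, we can inspect the sizes of the clusters used for pooling, as very small clusters may yield less stable size factor estimates:

table(clusters)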

An alternative approach is to normalize based on the spike-in counts [@lun2017assessing]. The idea is that the same quantity of spike-in RNA was added to each cell prior to library preparation. Size factors are computed to scale the counts such that the total coverage of the spike-in transcripts is equal across cells. The main practical difference is that spike-in normalization preserves differences in total RNA content between cells, whereas computeSumFactors and other non-DE methods do not.

sce2 <- computeSpikeFactors(sce, "ERCC")
summary(sizeFactors(sce2))
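As a simple diagnostic, we can compare the two sets of size factors directly. Systematic deviations from the diagonal reflect differences in total RNA content between cells:

plot(sizeFactors(sce), sizeFactors(sce2), log="xy",
    xlab="Deconvolution size factors", ylab="Spike-in size factors")
abline(0, 1, col="red")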

Regardless of which size factor calculation we choose, we will use the size factors to compute normalized expression values using logNormCounts() from r Biocpkg("scuttle"). Each expression value can be interpreted as a log-transformed "normalized count", and can be used in downstream applications like clustering or dimensionality reduction.

sce <- logNormCounts(sce)
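This adds a "logcounts" assay to the object alongside the original counts:

assayNames(sce)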

Variance modelling

We identify genes that drive biological heterogeneity in the data set by modelling the per-gene variance. By only using a subset of highly variable genes in downstream analyses like clustering, we improve resolution of biological structure by removing uninteresting genes driven by technical noise. We decompose the total variance of each gene into its biological and technical components by fitting a trend to the endogenous variances [@lun2016step]. The fitted value of the trend is used as an estimate of the technical component, and we subtract the fitted value from the total variance to obtain the biological component for each gene.

dec <- modelGeneVar(sce)
plot(dec$mean, dec$total, xlab="Mean log-expression", ylab="Variance")
curve(metadata(dec)$trend(x), col="blue", add=TRUE)
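We can also order the output by the biological component to inspect the most interesting genes directly:

# Ordering by most interesting genes for inspection.
dec[order(dec$bio, decreasing=TRUE),]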

If we have spike-ins, we can use them to fit the trend instead. This provides a more direct estimate of the technical variance and avoids assuming that most genes do not exhibit biological variability.

dec2 <- modelGeneVarWithSpikes(sce, 'ERCC')
plot(dec2$mean, dec2$total, xlab="Mean log-expression", ylab="Variance")
points(metadata(dec2)$mean, metadata(dec2)$var, col="red")
curve(metadata(dec2)$trend(x), col="blue", add=TRUE)

If we have some uninteresting factors of variation, we can block on these using block=. This will perform the trend fitting and decomposition within each block before combining the statistics across blocks for output. Statistics for each individual block can also be extracted for further inspection.

dec3 <- modelGeneVar(sce, block=sce$donor)
per.block <- dec3$per.block
par(mfrow=c(3, 2))
for (i in seq_along(per.block)) {
    decX <- per.block[[i]]
    plot(decX$mean, decX$total, xlab="Mean log-expression", 
        ylab="Variance", main=names(per.block)[i])
    curve(metadata(decX)$trend(x), col="blue", add=TRUE)
}

We can then extract some top genes for use in downstream procedures using the getTopHVGs() function. A variety of different strategies can be used to define a subset of interesting genes:

# Get the top 10% of genes.
top.hvgs <- getTopHVGs(dec, prop=0.1)

# Get the top 2000 genes.
top.hvgs2 <- getTopHVGs(dec, n=2000)

# Get all genes with positive biological components.
top.hvgs3 <- getTopHVGs(dec, var.threshold=0)

# Get all genes with FDR below 5%.
top.hvgs4 <- getTopHVGs(dec, fdr.threshold=0.05)
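As an illustration of how these strategies differ, we can compare the number of genes selected by each one:

# Comparing the size of each selected subset.
lengths(list(prop=top.hvgs, n=top.hvgs2, positive=top.hvgs3, fdr=top.hvgs4))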

The selected subset of genes can then be passed to the subset.row argument (or equivalent) in downstream steps. This process is demonstrated below for the PCA step, using the runPCA() function from the r Biocpkg("scater") package.

# Running the PCA with the 10% of HVGs.
library(scater)
sce <- runPCA(sce, subset_row=top.hvgs)
reducedDimNames(sce)

Automated PC choice

Principal components analysis is commonly performed to denoise and compact the data prior to downstream analysis. A common question is how many PCs to retain; more PCs will capture more biological signal at the cost of retaining more noise and requiring more computational work. One approach to choosing the number of PCs is to use the technical component estimates to determine the proportion of variance that should be retained. This is implemented in denoisePCA(), which takes the estimates returned by modelGeneVar() or friends. (For greater accuracy, we use the fit with the spikes; we also subset to only the top HVGs to remove noise.)

sced <- denoisePCA(sce, dec2, subset.row=getTopHVGs(dec2, prop=0.1))
ncol(reducedDim(sced, "PCA"))

Another approach is based on the assumption that each subpopulation should be separated from the others on a different axis of variation. Thus, we choose a number of PCs that is not less than the number of subpopulations (which is unknown, of course, so we use the number of clusters as a proxy). It is then a simple matter to subset the dimensionality reduction result to the desired number of PCs.

output <- getClusteredPCs(reducedDim(sce))
npcs <- metadata(output)$chosen
reducedDim(sce, "PCAsub") <- reducedDim(sce, "PCA")[,1:npcs,drop=FALSE]
npcs

Graph-based clustering

Clustering of scRNA-seq data is commonly performed with graph-based methods due to their relative scalability and robustness. r Biocpkg('scran') provides several graph construction methods based on shared nearest neighbors [@xu2015identification] through the buildSNNGraph() function. This is most commonly generated from the selected PCs, after which community detection methods from the r CRANpkg("igraph") package can be used to explicitly identify clusters.

# In this case, using the PCs that we chose from getClusteredPCs().
g <- buildSNNGraph(sce, use.dimred="PCAsub")
cluster <- igraph::cluster_walktrap(g)$membership

# Assigning to the 'colLabels' of the 'sce'.
colLabels(sce) <- factor(cluster)
table(colLabels(sce))

By default, buildSNNGraph() uses the mode of shared neighbor weighting described by @xu2015identification, but other weighting methods (e.g., the Jaccard index) are also available by setting type=. An unweighted $k$-nearest neighbor graph can also be constructed with buildKNNGraph().
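For example, a brief sketch of these alternatives might look like this, reusing the subsetted PCs from before:

# SNN graph with Jaccard-based weights.
g.jaccard <- buildSNNGraph(sce, use.dimred="PCAsub", type="jaccard")

# Unweighted k-nearest neighbor graph.
g.knn <- buildKNNGraph(sce, use.dimred="PCAsub")

# Other igraph community detection methods can be applied to either graph,
# e.g., Louvain clustering instead of Walktrap.
cluster.louvain <- igraph::cluster_louvain(g.jaccard)$membership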

We can then use methods from r Biocpkg("scater") to visualize this clustering on a $t$-SNE plot. Note that colLabels()<- will just add the cluster assignments to the "label" field of the colData; however, any name can be used as long as downstream functions are adjusted appropriately.

sce <- runTSNE(sce, dimred="PCAsub")
plotTSNE(sce, colour_by="label", text_by="label")

For graph-based methods, another diagnostic is to examine the ratio of observed to expected edge weights for each pair of clusters (closely related to the modularity score used in many cluster_* functions). We would usually expect to see high observed weights between cells in the same cluster with minimal weights between clusters, indicating that the clusters are well-separated. Off-diagonal entries indicate that some clusters are closely related, which is useful to know for checking that they are annotated consistently.

ratio <- clusterModularity(g, cluster, as.ratio=TRUE)

library(pheatmap)
pheatmap(log10(ratio+1), cluster_cols=FALSE, cluster_rows=FALSE,
    col=rev(heat.colors(100)))

A more general diagnostic involves bootstrapping to determine the stability of the partitions between clusters. Given a clustering function, the bootstrapCluster() function uses bootstrapping to compute the co-assignment probability for each pair of original clusters, i.e., the probability that one randomly chosen cell from each cluster is assigned to the same cluster in the bootstrap replicate. Larger probabilities indicate that the separation between those clusters is unstable, to the extent that it is sensitive to sampling noise, and thus should not be used for downstream inferences.

ass.prob <- bootstrapCluster(sce, FUN=function(x) {
    g <- buildSNNGraph(x, use.dimred="PCAsub")
    igraph::cluster_walktrap(g)$membership
}, clusters=colLabels(sce))

pheatmap(ass.prob, cluster_cols=FALSE, cluster_rows=FALSE,
    col=colorRampPalette(c("white", "blue"))(100))

If necessary, further subclustering can be performed conveniently using the quickSubCluster() wrapper function. This splits the input SingleCellExperiment into several smaller objects containing cells from each cluster, and performs another round of clustering within each cluster using a freshly identified set of HVGs to improve resolution of internal structure.

subout <- quickSubCluster(sce, groups=colLabels(sce))
table(metadata(subout)$subcluster) # formatted as '<parent>.<subcluster>'
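Each element of the returned list is itself a SingleCellExperiment containing the cells of the corresponding parent cluster:

names(subout)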

Identifying marker genes

The findMarkers() wrapper function will perform some simple differential expression tests between pairs of clusters to identify potential marker genes for each cluster. For each cluster, we perform $t$-tests to identify genes that are DE in each cluster compared to at least one other cluster. All pairwise tests are combined into a single ranking by simply taking the top genes from each pairwise comparison. For example, if we take all genes with Top <= 5, this is equivalent to the union of the top 5 genes from each pairwise comparison. This aims to provide a set of genes that is guaranteed to be able to distinguish the chosen cluster from all others.

# Uses clustering information from 'colLabels(sce)' by default:
markers <- findMarkers(sce)
markers[[1]][,1:3]
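For example, the union of the top 5 genes from each pairwise comparison involving the first cluster can be extracted with:

chosen <- rownames(markers[[1]])[markers[[1]]$Top <= 5]
head(chosen)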

We can modify the tests by passing a variety of arguments to findMarkers(). For example, the code below will perform Wilcoxon tests instead of $t$-tests; only identify genes that are upregulated in the target cluster compared to each other cluster; and require a minimum log~2~-fold change of 1 to be considered significant.

wmarkers <- findMarkers(sce, test.type="wilcox", direction="up", lfc=1)
wmarkers[[1]][,1:3]

We can also modify how the statistics are combined across pairwise comparisons. Setting pval.type="all" requires a gene to be DE between each cluster and every other cluster (rather than any other cluster, as is the default with pval.type="any"). This is a more stringent definition that can yield a more focused set of markers but may also fail to detect any markers in the presence of overclustering.

markers <- findMarkers(sce, pval.type="all")
markers[[1]][,1:2]

Detecting correlated genes

Another useful procedure is to identify significant correlations between pairs of HVGs. The idea is to distinguish between HVGs caused by random stochasticity and those that are driving systematic heterogeneity, e.g., between subpopulations. Correlations are computed by the correlatePairs method using a slightly modified version of Spearman's rho. Testing is performed against the null hypothesis of independent genes, using a permutation method in correlateNull to construct the null distribution.

# Using the first 200 HVGs, which are the most interesting anyway.
# Also turning down the number of iterations for speed.
of.interest <- top.hvgs[1:200]
cor.pairs <- correlatePairs(sce, subset.row=of.interest, iters=1e5)
cor.pairs

As with variance estimation, if uninteresting substructure is present, this should be blocked on using the block= argument in both correlateNull and correlatePairs. This avoids strong correlations due to the blocking factor.

cor.pairs2 <- correlatePairs(sce, subset.row=of.interest, 
    block=sce$donor, iters=1e5)

The pairs can be used for choosing marker genes in experimental validation, or to construct gene-gene association networks. In other situations, the pairs may not be of direct interest -- rather, we just want to know whether a gene is correlated with any other gene. This is often the case when we want to select a set of correlated HVGs for use in downstream steps like clustering or dimensionality reduction. To do so, we use correlateGenes() to compute a single set of statistics for each gene, rather than for each pair.

cor.genes <- correlateGenes(cor.pairs)
cor.genes

Significant correlations are defined at a false discovery rate (FDR) threshold of, e.g., 5%. Note that the p-values are calculated by permutation and will have a lower bound. If there were insufficient permutation iterations, a warning will be issued suggesting that more iterations be performed.
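For instance, a simple filter on the reported FDR might look like:

# Keeping all pairs detected at a FDR of 5%.
sig.pairs <- cor.pairs[cor.pairs$FDR <= 0.05,]
nrow(sig.pairs)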

Converting to other formats

The SingleCellExperiment object can be easily converted into other formats using the convertTo method. This allows analyses to be performed using other pipelines and packages. For example, if DE analyses were to be performed using r Biocpkg("edgeR"), the count data in sce could be used to construct a DGEList.

y <- convertTo(sce, type="edgeR")

Rows corresponding to spike-in transcripts are dropped by default (i.e., with get.spikes=FALSE). As such, the rows of y may not correspond directly to the rows of sce -- users should match by row name to ensure correct cross-referencing between objects. Normalization factors are also automatically computed from the size factors.
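For example, a simple way to cross-reference the two objects is to match on the row names:

m <- match(rownames(y), rownames(sce))
summary(is.na(m))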

The same conversion strategy roughly applies to the other supported formats. DE analyses can be performed using r Biocpkg("DESeq2") by converting the object to a DESeqDataSet. Cells can be ordered on pseudotime with r Biocpkg("monocle") by converting the object to a CellDataSet (in this case, normalized unlogged expression values are stored).
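For example:

# Converting to a DESeqDataSet for use with DESeq2.
dds <- convertTo(sce, type="DESeq2")

# Converting to a CellDataSet for use with monocle.
cds <- convertTo(sce, type="monocle")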

Getting help

Further information can be obtained by examining the documentation for each function (e.g., ?convertTo); reading the Orchestrating Single Cell Analysis book; or asking for help on the Bioconductor support site (please read the posting guide beforehand).

Session information

sessionInfo()

References


