require(knitr)
opts_chunk$set(error=FALSE, message=FALSE, warning=FALSE)
library(scran)
library(BiocParallel)
register(SerialParam()) # avoid problems with fastMNN parallelization.
set.seed(100)

Introduction

Single-cell RNA sequencing (scRNA-seq) is a widely used technique for profiling gene expression in individual cells. This allows molecular biology to be studied at a resolution that cannot be matched by bulk sequencing of cell populations. The r Biocpkg("scran") package implements methods to perform low-level processing of scRNA-seq data, including cell cycle phase assignment, variance modelling and testing for marker genes and gene-gene correlations. This vignette provides brief descriptions of these methods and some toy examples to demonstrate their use.

Note: A more comprehensive description of the use of r Biocpkg("scran") (along with other packages) in a scRNA-seq analysis workflow is available at https://osca.bioconductor.org.

Setting up the data

We start off with a count matrix where each row is a gene and each column is a cell. These can be obtained by mapping read sequences to a reference genome, and then counting the number of reads mapped to the exons of each gene. (See, for example, the r Biocpkg("Rsubread") package to do both of these tasks.) Alternatively, pseudo-alignment methods can be used to quantify the abundance of each transcript in each cell. For simplicity, we will pull out an existing dataset from the r Biocpkg("scRNAseq") package.

library(scRNAseq)
sce <- GrunPancreasData()
sce

This particular dataset is taken from a study of the human pancreas with the CEL-seq protocol [@grun2016denovo]. It is provided as a SingleCellExperiment object (from the r Biocpkg("SingleCellExperiment") package), which contains the raw data and various annotations. We perform some cursory quality control to remove cells with low total counts or high spike-in percentages:

library(scuttle)
qcstats <- perCellQCMetrics(sce)
qcfilter <- quickPerCellQC(qcstats, percent_subsets="altexps_ERCC_percent")
sce <- sce[,!qcfilter$discard]
summary(qcfilter$discard)

Cell-specific biases are normalized using the computeSumFactors method, which implements the deconvolution strategy for scaling normalization [@lun2016pooling].

library(scran)
clusters <- quickCluster(sce)
sce <- computeSumFactors(sce, clusters=clusters)
summary(sizeFactors(sce))
sce <- logNormCounts(sce)

Variance modelling

We identify genes that drive biological heterogeneity in the data set by modelling the per-gene variance. By only using a subset of highly variable genes in downstream analyses like clustering, we improve the resolution of biological structure by removing uninteresting genes whose variation is driven by technical noise. We decompose the total variance of each gene into its biological and technical components by fitting a trend to the endogenous variances [@lun2016step]. The fitted value of the trend is used as an estimate of the technical component, and we subtract the fitted value from the total variance to obtain the biological component for each gene.

dec <- modelGeneVar(sce)
plot(dec$mean, dec$total, xlab="Mean log-expression", ylab="Variance")
curve(metadata(dec)$trend(x), col="blue", add=TRUE)

If we have spike-ins, we can use them to fit the trend instead. This provides a more direct estimate of the technical variance and avoids assuming that most genes do not exhibit biological variability.

dec2 <- modelGeneVarWithSpikes(sce, 'ERCC')
plot(dec2$mean, dec2$total, xlab="Mean log-expression", ylab="Variance")
points(metadata(dec2)$mean, metadata(dec2)$var, col="red")
curve(metadata(dec2)$trend(x), col="blue", add=TRUE)

If we have some uninteresting factors of variation, we can block on these using block=. This will perform the trend fitting and decomposition within each block before combining the statistics across blocks for output. Statistics for each individual block can also be extracted for further inspection.

# Turned off weighting to avoid overfitting for each donor.
dec3 <- modelGeneVar(sce, block=sce$donor, density.weights=FALSE)
per.block <- dec3$per.block
par(mfrow=c(3, 2))
for (i in seq_along(per.block)) {
    decX <- per.block[[i]]
    plot(decX$mean, decX$total, xlab="Mean log-expression", 
        ylab="Variance", main=names(per.block)[i])
    curve(metadata(decX)$trend(x), col="blue", add=TRUE)
}

We can then extract some top genes for use in downstream procedures using the getTopHVGs() function. A variety of different strategies can be used to define a subset of interesting genes:

# Get the top 10% of genes.
top.hvgs <- getTopHVGs(dec, prop=0.1)

# Get the top 2000 genes.
top.hvgs2 <- getTopHVGs(dec, n=2000)

# Get all genes with positive biological components.
top.hvgs3 <- getTopHVGs(dec, var.threshold=0)

# Get all genes with FDR below 5%.
top.hvgs4 <- getTopHVGs(dec, fdr.threshold=0.05)

The selected subset of genes can then be passed to the subset.row argument (or equivalent) in downstream steps. This process is demonstrated below for the PCA step:

sce <- fixedPCA(sce, subset.row=top.hvgs)
reducedDimNames(sce)

Automated PC choice

Principal components analysis is commonly performed to denoise and compact the data prior to downstream analysis. A common question is how many PCs to retain; more PCs will capture more biological signal at the cost of retaining more noise and requiring more computational work. One approach to choosing the number of PCs is to use the technical component estimates to determine the proportion of variance that should be retained. This is implemented in denoisePCA(), which takes the estimates returned by modelGeneVar() or friends. (For greater accuracy, we use the fit with the spikes; we also subset to only the top HVGs to remove noise.)

sced <- denoisePCA(sce, dec2, subset.row=getTopHVGs(dec2, prop=0.1))
ncol(reducedDim(sced, "PCA"))

Another approach is based on the assumption that each subpopulation should be separated from the others along a different axis of variation. Thus, we choose a number of PCs that is not less than the number of subpopulations (which is unknown, of course, so we use the number of clusters as a proxy). It is then a simple matter to subset the dimensionality reduction result to the desired number of PCs.

output <- getClusteredPCs(reducedDim(sce))
npcs <- metadata(output)$chosen
reducedDim(sce, "PCAsub") <- reducedDim(sce, "PCA")[,1:npcs,drop=FALSE]
npcs

Graph-based clustering

Clustering of scRNA-seq data is commonly performed with graph-based methods due to their relative scalability and robustness. r Biocpkg('scran') provides several graph construction methods based on shared nearest neighbors [@xu2015identification] through the buildSNNGraph() function. This is most commonly generated from the selected PCs, after which community detection methods from the r CRANpkg("igraph") package can be used to explicitly identify clusters.

# In this case, using the PCs that we chose from getClusteredPCs().
g <- buildSNNGraph(sce, use.dimred="PCAsub")
cluster <- igraph::cluster_walktrap(g)$membership

# Assigning to the 'colLabels' of the 'sce'.
colLabels(sce) <- factor(cluster)
table(colLabels(sce))

By default, buildSNNGraph() uses the mode of shared neighbor weighting described by @xu2015identification, but other weighting methods (e.g., the Jaccard index) are also available by setting type=. An unweighted $k$-nearest neighbor graph can also be constructed with buildKNNGraph().
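As a brief sketch of these alternatives (reusing the same PCA subspace as above; the type= value and the exact arguments shown here are assumptions based on the description above, so check ?buildSNNGraph if they differ):

# Shared nearest-neighbor graph with Jaccard-index weights instead of the default weighting.
g.jaccard <- buildSNNGraph(sce, use.dimred="PCAsub", type="jaccard")

# Unweighted k-nearest-neighbor graph on the same subspace.
g.knn <- buildKNNGraph(sce, use.dimred="PCAsub")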

We can then use methods from r Biocpkg("scater") to visualize this clustering on a $t$-SNE plot. Note that colLabels()<- will just add the cluster assignments to the "label" field of the colData; however, any name can be used as long as downstream functions are adjusted appropriately.

library(scater)
sce <- runTSNE(sce, dimred="PCAsub")
plotTSNE(sce, colour_by="label", text_by="label")

For graph-based methods, another diagnostic is to examine the ratio of observed to expected edge weights for each pair of clusters (closely related to the modularity score used in many cluster_* functions). We would usually expect to see high observed weights between cells in the same cluster with minimal weights between clusters, indicating that the clusters are well-separated. Off-diagonal entries indicate that some clusters are closely related, which is useful to know for checking that they are annotated consistently.

library(bluster)
ratio <- pairwiseModularity(g, cluster, as.ratio=TRUE)

library(pheatmap)
pheatmap(log10(ratio+1), cluster_cols=FALSE, cluster_rows=FALSE,
    col=rev(heat.colors(100)))

A more general diagnostic involves bootstrapping to determine the stability of the partitions between clusters. Given a clustering function, the bootstrapStability() function uses bootstrapping to compute the co-assignment probability for each pair of original clusters, i.e., the probability that one randomly chosen cell from each cluster is assigned to the same cluster in the bootstrap replicate. Larger probabilities indicate that the separation between those clusters is unstable to the extent that it is sensitive to sampling noise, and thus should not be used for downstream inferences.

ass.prob <- bootstrapStability(sce, FUN=function(x) {
    g <- buildSNNGraph(x, use.dimred="PCAsub")
    igraph::cluster_walktrap(g)$membership
}, clusters=colLabels(sce))

pheatmap(ass.prob, cluster_cols=FALSE, cluster_rows=FALSE,
    col=colorRampPalette(c("white", "blue"))(100))

If necessary, further subclustering can be performed conveniently using the quickSubCluster() wrapper function. This splits the input SingleCellExperiment into several smaller objects containing the cells from each cluster, and performs another round of clustering within each cluster using a freshly identified set of HVGs to improve resolution of internal structure.

subout <- quickSubCluster(sce, groups=colLabels(sce))
table(metadata(subout)$subcluster) # formatted as '<parent>.<subcluster>'

Identifying marker genes

The scoreMarkers() wrapper function will perform differential expression comparisons between pairs of clusters to identify potential marker genes. For each pairwise comparison, we compute a variety of effect sizes to quantify the differences between those clusters.

For each cluster, we then summarize the effect sizes from all comparisons to other clusters. Specifically, we compute the mean, median, minimum and maximum effect size across all comparisons. This yields a series of statistics for each effect size and summary type, e.g., mean.AUC.

# Uses clustering information from 'colLabels(sce)' by default:
markers <- scoreMarkers(sce)
markers
colnames(markers[[1]])

The idea is to use this information to sort the DataFrame for each cluster based on one of these metrics to obtain a ranking of the strongest candidate markers. A good default approach is to sort by either the mean AUC or Cohen's $d$ in decreasing order. This focuses on marker genes that are upregulated in the cluster of interest compared to most other clusters (hopefully). For example, we can get a quick summary of the best markers for cluster 1 using the code below:

# Just showing the first few columns for brevity.
markers[[1]][order(markers[[1]]$mean.AUC, decreasing=TRUE),1:4]

The other summaries provide additional information about the comparisons to other clusters without requiring examination of all pairwise changes. For example, a positive median AUC means that the gene is upregulated in this cluster against most (i.e., at least 50%) of the other clusters. A positive minimum AUC means that the gene is upregulated against all other clusters, while a positive maximum AUC indicates upregulation against at least one other cluster. It can be helpful to sort on the median instead, if the mean is too sensitive to outliers; or to sort on the minimum, if we wish to identify genes that are truly unique to a particular cluster.
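For instance, a quick sketch of ranking on the minimum AUC instead (assuming the summary columns are named min.AUC, median.AUC and so on, as listed by colnames() above) could be:

# Ranking on the minimum AUC favours genes that are upregulated
# against every other cluster; just showing the first few columns.
markers[[1]][order(markers[[1]]$min.AUC, decreasing=TRUE),1:4]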

That said, if the full set of pairwise effects is desired, we can obtain them with full.stats=TRUE. This yields a nested DataFrame containing the effects from each pairwise comparison to another cluster.

markers <- scoreMarkers(sce, full.stats=TRUE)
markers[[1]]$full.logFC.cohen

In addition, we report some basic descriptive statistics such as the mean log-expression in each cluster and the proportion of cells with detectable expression. The grand mean of these values across all other clusters is also reported. Note that, because we use a grand mean, we are unaffected by differences in the number of cells between clusters; the same applies for our effect sizes as they are computed by summarizing pairwise comparisons rather than by aggregating cells from all other clusters into a single group.
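A minimal look at these descriptive columns is shown below; the column names (self.average, other.average, self.detected, other.detected) are assumed here and should be checked against colnames(markers[[1]]).

# Mean log-expression and detected proportions for cluster 1,
# plus the grand means across all other clusters.
markers[[1]][,c("self.average", "other.average", "self.detected", "other.detected")]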

Detecting correlated genes

Another useful procedure is to identify significant correlations between pairs of HVGs. The idea is to distinguish between HVGs caused by random stochasticity, and those that are driving systematic heterogeneity, e.g., between subpopulations. Correlations are computed by the correlatePairs method using a slightly modified version of Spearman's rho, tested against the null hypothesis of zero correlation using the same method as cor.test().

# Using the first 200 HVGs, which are the most interesting anyway.
of.interest <- top.hvgs[1:200]
cor.pairs <- correlatePairs(sce, subset.row=of.interest)
cor.pairs

As with variance estimation, if uninteresting substructure is present, this should be blocked on using the block= argument. This avoids strong correlations due to the blocking factor.

cor.pairs2 <- correlatePairs(sce, subset.row=of.interest, block=sce$donor)

The pairs can be used for choosing marker genes in experimental validation, and to construct gene-gene association networks. In other situations, the pairs may not be of direct interest - rather, we just want to know whether a gene is correlated with any other gene. This is often the case if we want to select a set of correlated HVGs for use in downstream steps like clustering or dimensionality reduction. To do so, we use correlateGenes() to compute a single set of statistics for each gene, rather than for each pair.

cor.genes <- correlateGenes(cor.pairs)
cor.genes

Significant correlations are defined at a false discovery rate (FDR) threshold of, e.g., 5%. Note that the p-values are calculated by permutation and will have a lower bound. If there were insufficient permutation iterations, a warning will be issued suggesting that more iterations be performed.
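For example, a simple filter on the FDR column of the correlateGenes() output might look like this:

# Genes correlated with at least one other gene at an FDR of 5%.
sig.genes <- cor.genes$FDR <= 0.05
summary(sig.genes)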

Converting to other formats

The SingleCellExperiment object can be easily converted into other formats using the convertTo method. This allows analyses to be performed using other pipelines and packages. For example, if DE analyses were to be performed using r Biocpkg("edgeR"), the count data in sce could be used to construct a DGEList.

y <- convertTo(sce, type="edgeR")

By default (i.e., with get.spikes=FALSE), rows corresponding to spike-in transcripts are dropped. As such, the rows of y may not correspond directly to the rows of sce -- users should match by row name to ensure correct cross-referencing between objects. Normalization factors are also automatically computed from the size factors.
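A minimal sketch of such row matching (a hypothetical downstream step, not part of the conversion itself) would be:

# Align rows of the DGEList back to the original object by name.
m <- match(rownames(y), rownames(sce))
summary(is.na(m))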

The same conversion strategy roughly applies to the other supported formats. DE analyses can be performed using r Biocpkg("DESeq2") by converting the object to a DESeqDataSet. Cells can be ordered on pseudotime with r Biocpkg("monocle") by converting the object to a CellDataSet (in this case, normalized unlogged expression values are stored).
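For example, the corresponding calls would be along these lines (assuming the DESeq2 and monocle packages are installed):

# Conversion for DE analysis with DESeq2 and pseudotime analysis with monocle.
dds <- convertTo(sce, type="DESeq2")
cds <- convertTo(sce, type="monocle")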

Getting help

Further information can be obtained by examining the documentation for each function (e.g., ?convertTo); reading the Orchestrating Single Cell Analysis book; or asking for help on the Bioconductor support site (please read the posting guide beforehand).

Session information

sessionInfo()

References


