View source: R/combineRecomputedResults.R
combineRecomputedResults | R Documentation

Combine results from multiple runs of classifySingleR (usually against different references) into a single DataFrame. For each cell, the label from the run with the highest score is retained. Unlike combineCommonResults, this function does not assume that each run of classifySingleR was performed using the same set of common genes; instead, it recomputes the scores for comparison across references.
combineRecomputedResults(
  results,
  test,
  trained,
  quantile = 0.8,
  fine.tune = TRUE,
  tune.thresh = 0.05,
  assay.type.test = "logcounts",
  check.missing = FALSE,
  warn.lost = TRUE,
  allow.lost = FALSE,
  num.threads = bpnworkers(BPPARAM),
  BPPARAM = SerialParam()
)
results: A list of DataFrame prediction results, as returned by classifySingleR when run on each reference separately.

test: A numeric matrix of single-cell expression values where rows are genes and columns are cells. Alternatively, a SummarizedExperiment object containing such a matrix.

trained: A list of Lists containing the trained outputs of multiple references, equivalent to the output of trainSingleR on each reference.

quantile: Numeric scalar specifying the quantile of the correlation distribution to use for computing the score; see classifySingleR for details.

fine.tune: A logical scalar indicating whether fine-tuning should be performed.

tune.thresh: A numeric scalar specifying the maximum difference from the maximum correlation to use in fine-tuning.

assay.type.test: An integer scalar or string specifying the assay of test containing the log-expression matrix, if test is a SummarizedExperiment object.

check.missing: Deprecated and ignored, as any row filtering will cause mismatches with the genes used during training.

warn.lost: Logical scalar indicating whether to emit a warning if markers from one reference in trained are not present in all other references.

allow.lost: Deprecated.

num.threads: Integer scalar specifying the number of threads to use for index building and classification.

BPPARAM: A BiocParallelParam object specifying how parallelization should be performed in other steps.
Here, the strategy is to perform classification separately within each reference, then collate the results to choose the label with the highest score across references. For a given cell in test, we extract its assigned label from each reference in results, along with the marker genes associated with that label. We take the union of these markers across all references; this defines a common feature space in which the score for each reference's assigned label is recomputed from that reference's training data. The label from the reference with the top recomputed score is then reported as the combined annotation for that cell.

A key aspect of this approach is that each entry of results is generated separately for each reference. This avoids problems with uninteresting technical or biological differences between references that could otherwise introduce noise by forcing irrelevant genes into the marker list. Similarly, the common feature space for each cell is defined from the most relevant markers across all references, analogous to one iteration of fine-tuning using only the best labels from each reference. Indeed, if fine-tuning is enabled, the common feature space is iteratively refined from the labels with the highest scores, using the same process described in classifySingleR. This allows us to distinguish between closely related labels from different references.
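The per-cell recomputation described above can be sketched in base R. This is a simplified illustration, not SingleR's actual implementation: all names here (score_label, refA, markersA, etc.) are hypothetical, and the score is taken against all reference samples rather than per-label profiles.

```r
set.seed(42)
genes <- paste0("gene", 1:50)

# Two mock references: log-expression matrices with named rows.
refA <- matrix(rnorm(50 * 20), nrow = 50, dimnames = list(genes, NULL))
refB <- matrix(rnorm(50 * 20), nrow = 50, dimnames = list(genes, NULL))

# Markers associated with the label each reference assigned to this cell.
markersA <- genes[1:10]
markersB <- genes[6:15]
common <- union(markersA, markersB)  # common feature space for this cell

cell <- rnorm(50)
names(cell) <- genes

# Score = quantile of the Spearman correlations against reference samples.
score_label <- function(ref, cell, features, q = 0.8) {
  rho <- apply(ref[features, , drop = FALSE], 2, cor,
               y = cell[features], method = "spearman")
  unname(quantile(rho, probs = q))
}

scores <- c(A = score_label(refA, cell, common),
            B = score_label(refB, cell, common))
names(scores)[which.max(scores)]  # reference supplying the combined label
```

The key point mirrored here is that both references are rescored on the same union of markers, so their quantile scores are directly comparable.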
A DataFrame is returned containing the annotation statistics for each cell or cluster (row). This mimics the output of classifySingleR and contains the following fields:

- scores, a numeric matrix of correlations containing the recomputed scores. For any given cell, entries of this matrix are only non-NA for the assigned label in each reference; scores are not recomputed for the other labels.
- labels, a character vector containing the per-cell combined label across references.
- reference, an integer vector specifying the reference from which the combined label was derived.
- orig.results, a DataFrame containing results. It may also contain pruned.labels if these were also present in results.

The metadata contains label.origin, a DataFrame specifying the reference of origin for each label in scores.
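To illustrate, the fields above can be inspected on a `combined` DataFrame produced by combineRecomputedResults() (as in the Examples; output depends on your data):

```r
# Per-cell combined labels and the index of the winning reference.
head(combined$labels)
head(combined$reference)

# One row of the score matrix: NA everywhere except the assigned label
# from each reference, since only those scores are recomputed.
combined$scores[1, ]

# Reference of origin for each label (column) in the score matrix.
S4Vectors::metadata(combined)$label.origin
```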
It is recommended that the universe of genes be the same across all references in trained (or, at the very least, that markers used in one reference are available in the others). This ensures that a common feature space can be generated when comparing correlations across references. Differences in the availability of markers between references will have unpredictable effects on the comparability of correlation scores, so a warning is emitted by default when warn.lost=TRUE. Callers can protect against this by subsetting each reference to the intersection of features present across all references; this is done by default in SingleR.

That said, this requirement may be too strict when dealing with many references with diverse feature annotations. In such cases, the recomputation for each reference will automatically use all markers that are available in that reference. The idea here is to avoid penalizing all references by removing an informative marker when it is only absent from a single reference. We hope that the recomputed scores are still roughly comparable if the number of lost markers is relatively low, especially as the use of ranks in the Spearman-based scores reduces the influence of individual markers. This is perhaps as reliable as one might imagine.
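A minimal way to enforce a shared gene universe before training, as recommended above (the object names are illustrative and match the Examples):

```r
# Subset each reference and the test set to the genes present everywhere,
# so that every marker used in one reference is available in the others.
universe <- Reduce(intersect,
                   list(rownames(ref1), rownames(ref2), rownames(test)))
ref1 <- ref1[universe, ]
ref2 <- ref2[universe, ]
test <- test[universe, ]
```

This should be done before calling trainSingleR on each reference, so that marker detection is restricted to the common universe.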
Aaron Lun
Lun A, Bunis D, Andrews J (2020). Thoughts on a more scalable algorithm for multiple references. https://github.com/SingleR-inc/SingleR/issues/94
SingleR and classifySingleR, for generating predictions to use in results.

combineCommonResults, for another approach to combining predictions.
# Making up data.
ref <- .mockRefData(nreps=8)
ref1 <- ref[,seq_len(ncol(ref)) %% 2 == 0]  # even columns
ref2 <- ref[,seq_len(ncol(ref)) %% 2 == 1]  # odd columns
ref2$label <- tolower(ref2$label)  # simulate different label conventions
test <- .mockTestData(ref)
# Performing classification within each reference.
test <- scuttle::logNormCounts(test)
ref1 <- scuttle::logNormCounts(ref1)
train1 <- trainSingleR(ref1, labels=ref1$label)
pred1 <- classifySingleR(test, train1)
ref2 <- scuttle::logNormCounts(ref2)
train2 <- trainSingleR(ref2, labels=ref2$label)
pred2 <- classifySingleR(test, train2)
# Combining results with recomputation of scores.
combined <- combineRecomputedResults(
results=list(pred1, pred2),
test=test,
trained=list(train1, train2))
combined[,1:5]
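Continuing the example, a couple of quick follow-up checks on the combined object (illustrative only; the exact counts depend on the simulated data):

```r
# How often each reference supplied the combined label.
table(combined$reference)

# The original per-reference results are kept alongside for auditing.
combined$orig.results
```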