weitrix is a jack of all trades. This vignette demonstrates the use of weitrix with proportion data. One difficulty is that when a proportion is exactly zero its variance should be exactly zero as well, leading to an infinite weight. To get around this, we slightly inflate the estimate of the variance for proportions near zero. This is not perfect, but calibration plots allow us to do this with our eyes open, and provide reassurance that it will not greatly interfere with downstream analysis.
We look at GSE99970, a SLAM-Seq experiment. In SLAM-Seq a portion of uracils are replaced with 4-thiouridine (s4U) during transcription, which by some clever chemistry leads to "T"s becoming "C"s in resultant RNA-Seq reads. The proportion of converted "T"s is an indication of the proportion of new transcripts produced while s4U was applied. In this experiment mouse embryonic stem cells were exposed to s4U for 24 hours, then washed out and sampled at different time points. The experiment lets us track the decay rates of transcripts.
```r
library(tidyverse)
library(ComplexHeatmap)
library(weitrix)

# BiocParallel supports multiple backends.
# If the default hangs or errors, try others.
# The most reliable backend is serial processing.
BiocParallel::register( BiocParallel::SerialParam() )
```
The quantity of interest here is the proportion of "T"s converted to "C"s. We load the coverage and conversions, and calculate this ratio.
As an initial weighting, we use the coverage. Notionally, each proportion is an average of this many 1s and 0s. The more values averaged, the more accurate this is.
```r
coverage <- system.file("GSE99970", "GSE99970_T_coverage.csv.gz", package="weitrix") %>%
    read_csv() %>%
    column_to_rownames("gene") %>%
    as.matrix()

conversions <- system.file("GSE99970", "GSE99970_T_C_conversions.csv.gz", package="weitrix") %>%
    read_csv() %>%
    column_to_rownames("gene") %>%
    as.matrix()

# Calculate proportions, create weitrix
wei <- as_weitrix( conversions/coverage, coverage )
dim(wei)

# We will only use genes where at least 30 conversions were observed
good <- rowSums(conversions) >= 30
wei <- wei[good,]

# Add some column data from the names
parts <- str_match(colnames(wei), "(.*)_(Rep_.*)")
colData(wei)$group <- fct_inorder(parts[,2])
colData(wei)$rep <- fct_inorder(parts[,3])

rowData(wei)$mean_coverage <- rowMeans(weitrix_weights(wei))

wei

colMeans(weitrix_x(wei), na.rm=TRUE)
```
We want to estimate the variance of each observation. We could model this exactly: each observed "T" is encoded as 0 if unconverted and 1 if converted, a Bernoulli variable with variance $\mu(1-\mu)$ for mean $\mu$. Each observed proportion is then an average of such values. For $n$ such values, the variance of this average would be
$$ \sigma^2 = \frac{\mu(1-\mu)}{n} $$
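As a throwaway numeric illustration (not part of the analysis), the weight implied by this variance for a few example proportions:

```r
# Weight = 1/variance for a proportion that averages n Bernoulli observations
mu <- c(0.05, 0.01, 0)      # example proportions
n  <- 100                   # example coverage
variance <- mu * (1 - mu) / n
1 / variance                # becomes Inf when mu is exactly 0
```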
However, if our estimate of $\mu$ is exactly zero the variance becomes zero and the weight becomes infinite. To avoid infinities, $\mu$ is clipped away from 0 and 1 before the variance is evaluated. This is achieved using the mu_min argument to weitrix_calibrate_all (and mu_max at the upper end). A natural value to clip at is 0.001, the background rate of apparent T to C conversions seen due to sequencing errors.
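Continuing the toy numbers above, the clipping amounts to something like the following sketch (weitrix_calibrate_all handles this internally via its mu_min and mu_max arguments):

```r
# Clip mu away from 0 and 1 before evaluating the Bernoulli variance
mu_clipped <- pmin(pmax(mu, 0.001), 0.999)
1 / (mu_clipped * (1 - mu_clipped) / n)   # weights are now finite everywhere
```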
A further possible problem is that biological variation does not disappear with greater and greater $n$, so simply dividing by $n$ may be over-optimistic. Rather than assuming the $1/n$ scaling, we supply $n$ (stored in the weights) as a predictor to a gamma GLM with log link fitted to the squared residuals, using the Bernoulli variance $\mu(1-\mu)$ as an offset. This GLM is then used to assign calibrated weights.
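To make the trend formula below concrete, here is a toy, simulated sketch of the kind of model being fitted: a gamma GLM with log link on squared residuals, with log(weight) as a predictor and the Bernoulli variance as an offset. The numbers are made up; weitrix_calibrate_all fits the real model internally from the weitrix and the initial fit.

```r
# Toy simulation only: squared residuals whose variance is 1.5x the Bernoulli/n value
set.seed(1)
nobs   <- 500
weight <- round(runif(nobs, 10, 1000))                   # coverage, i.e. n per observation
mu     <- pmin(pmax(rbeta(nobs, 1, 20), 0.001), 0.999)   # clipped fitted proportions
true_var <- 1.5 * mu * (1 - mu) / weight
resid2   <- true_var * rchisq(nobs, df=1)                # squared residuals, mean true_var

trend <- glm(resid2 ~ log(weight) + offset(log(mu*(1-mu))),
             family=Gamma(link="log"))
coef(trend)   # roughly log(1.5) and -1, i.e. the 1/n scaling is recovered
# Calibrated weights would then be 1/fitted(trend)
```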
```r
# Compute an initial fit to provide residuals
fit <- weitrix_components(wei, design=~group)

cal <- weitrix_calibrate_all(wei,
    design = fit,
    trend_formula = ~ log(weight) + offset(log(mu*(1-mu))),
    mu_min=0.001, mu_max=0.999)

metadata(cal)$weitrix$all_coef
```
This trend formula was validated as adequate (although not perfect) by examining calibration plots, as demonstrated below.
The amount of conversion differs a great deal between timepoints, so we examine them individually.
```r
weitrix_calplot(wei, fit, cat=group, covar=mu, guides=FALSE) +
    coord_cartesian(xlim=c(0,0.1)) +
    labs(title="Before calibration")

weitrix_calplot(cal, fit, cat=group, covar=mu) +
    coord_cartesian(xlim=c(0,0.1)) +
    labs(title="After calibration")
```
Ideally the red lines would all be horizontal. This isn't possible for very small proportions, since this becomes a matter of multiplying zero by infinity.
We can also examine the weighted residuals vs the original weights (the coverage of "T"s).
```r
weitrix_calplot(wei, fit, cat=group, covar=log(weitrix_weights(wei)), guides=FALSE) +
    labs(title="Before calibration")

weitrix_calplot(cal, fit, cat=group, covar=log(weitrix_weights(wei))) +
    labs(title="After calibration")
```
As a quick way to examine the data, we look for two components of variation. This reveals fast-decaying and slow-decaying genes.
```r
comp <- weitrix_components(cal, 2, n_restarts=1)
```
These are the scores for the two components:
```r
matrix_long(comp$col[,-1], row_info=colData(cal)) %>%
    ggplot(aes(x=group, y=value)) +
    geom_jitter(width=0.2, height=0) +
    facet_grid(col~.)
```
Component C1 highlights fast-decaying genes:
```r
fast <- weitrix_confects(cal, comp$col, "C1")
fast

Heatmap(
    weitrix_x(cal)[head(fast$table$name, 10), ],
    name="Proportion converted",
    cluster_columns=FALSE, cluster_rows=FALSE)
```
Component C2 highlights slow-decaying genes:
```r
slow <- weitrix_confects(cal, comp$col, "C2")
slow

Heatmap(
    weitrix_x(cal)[head(slow$table$name, 10), ],
    name="Proportion converted",
    cluster_columns=FALSE, cluster_rows=FALSE)
```
Further examination might be based on explicitly modelling the decay process.
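As an indication of what that might look like, here is a minimal sketch of fitting an exponential decay $p(t) = p(0) e^{-kt}$ by weighted regression on the log scale. The numbers are placeholders for a single gene; in a real analysis the proportions and weights would be a row of weitrix_x(cal) and weitrix_weights(cal), the times would come from the sample annotation, and a more careful treatment would adjust the weights for the log transformation or fit the decay curve directly with nls.

```r
# Made-up measurements for one gene across a chase time course (placeholders)
hours <- c(0, 0.5, 1, 3, 6, 12, 24)                 # hypothetical chase times
p     <- 0.6 * exp(-0.2 * hours)                    # proportion converted (noise-free toy)
w     <- c(800, 750, 900, 600, 700, 650, 720)       # coverage-style weights

fit_decay  <- lm(log(p) ~ hours, weights=w)
decay_rate <- -coef(fit_decay)[2]                   # per-hour decay rate k (here 0.2)
log(2) / decay_rate                                 # half-life in hours
```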
Data was extracted and totalled by gene with:
```r
library(tidyverse)

download.file(
    "https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE99970&format=file",
    "GSE99970_RAW.tar")
untar("GSE99970_RAW.tar", exdir="GSE99970_RAW")

filenames <- list.files("GSE99970_RAW", full.names=TRUE)
samples <- str_match(filenames, "mESC_(.*)\\.tsv\\.gz")[,2]

dfs <- map(filenames, read_tsv, comment="#")

coverage <- do.call(cbind, map(dfs, "CoverageOnTs")) %>%
    rowsum(dfs[[1]]$Name)
conversions <- do.call(cbind, map(dfs, "ConversionsOnTs")) %>%
    rowsum(dfs[[1]]$Name)
colnames(conversions) <- colnames(coverage) <- samples

reorder <- c(1:3, 25:27, 4:24)

coverage[,reorder] %>%
    as.data.frame() %>%
    rownames_to_column("gene") %>%
    write_csv("GSE99970_T_coverage.csv.gz")

conversions[,reorder] %>%
    as.data.frame() %>%
    rownames_to_column("gene") %>%
    write_csv("GSE99970_T_C_conversions.csv.gz")
```