knitr::opts_chunk$set(echo = TRUE)
Here we download and analyze the Papa inflammatory bowel disease (IBD) dataset
(Papa et al., 2012).
The dataset includes stool samples from 24 nonIBD controls (i.e. patients
with GI symptoms but without IBD), 23 patients with Crohn's disease (CD), and 43 with
ulcerative colitis (UC); the latter two groups are categorized together as inflammatory bowel disease (IBD).
We'll download the processed OTU tables from
Zenodo and unzip them in the data/ibd_alm_results
folder.
For inflammatory bowel disease, we expect some truly differentially abundant OTUs, but not as many as in diarrhea (e.g. Schubert et al., 2014). This dataset should therefore provide an intermediate, non-extreme case study.
library(dplyr)
library(tidyr)
library(ggplot2)
library(magrittr)
library(SummarizedBenchmark)

## load helper functions
for (f in list.files("../R", "\\.(r|R)$", full.names = TRUE)) {
    source(f)
}

## project data/results folders
datdir <- "data"
resdir <- "results"
sbdir <- "../../results/microbiome"
dir.create(datdir, showWarnings = FALSE)
dir.create(resdir, showWarnings = FALSE)
dir.create(sbdir, showWarnings = FALSE)

ibd_data <- file.path(datdir, "ibd_alm_results.tar.gz")
ibd_dir <- file.path(datdir, "ibd_alm_results")

otu_result_file <- file.path(resdir, "papa-otus-results.rds")
otu_bench_file <- file.path(sbdir, "papa-otus-benchmark.rds")
gns_result_file <- file.path(resdir, "papa-genus-results.rds")
gns_bench_file <- file.path(sbdir, "papa-genus-benchmark.rds")
gns_bench_file_log_ubi <- file.path(sbdir, "papa-genus-log-ubiquity-benchmark.rds")
gns_bench_file_abun <- file.path(sbdir, "papa-genus-abun-benchmark.rds")
gns_bench_file_uninf <- file.path(sbdir, "papa-genus-uninf-benchmark.rds")
if (!file.exists(ibd_data)) {
    download.file("https://zenodo.org/record/840333/files/ibd_alm_results.tar.gz",
                  destfile = ibd_data)
}
if (!file.exists(file.path(ibd_dir, "ibd_alm.metadata.txt"))) {
    untar(ibd_data, exdir = datdir)
}
Next, we'll read the unzipped OTU table and metadata files into R.
## load OTU table and metadata
otu <- read.table(file.path(ibd_dir, "RDP", "ibd_alm.otu_table.100.denovo.rdp_assigned"))
meta <- read.csv(file.path(ibd_dir, "ibd_alm.metadata.txt"), sep='\t')

## keep only samples with the right DiseaseState metadata
meta <- dplyr::filter(meta, DiseaseState %in% c("nonIBD", "UC", "CD"))

## add "X" in front of sample IDs because of how R read in the OTU table
meta$X <- paste0("X", meta$X)

## keep only samples with both metadata and 16S data
keep_samples <- intersect(colnames(otu), meta$X)
otu <- otu[, keep_samples]
meta <- dplyr::filter(meta, X %in% keep_samples)
A brief aside: what does the distribution of total read counts per sample look like?
hist(colSums(otu))
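For a quick numeric complement to the histogram (not part of the original analysis), we can also summarize the per-sample totals:

## five-number summary (plus mean) of total reads per sample
summary(colSums(otu))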
Even though we'll be using OTU-wise covariates, we still need to do some
filtering/cleaning of the OTUs so that the differential abundance test doesn't get wonky
(this can happen, for instance, once shallow samples are removed).
We apply a minimum threshold of 10 reads per OTU across all samples, plus a minimum
proportion of samples in which each OTU must be detected.
After removing these low-count OTUs, we also get rid of any samples with too few reads.
The minimum number of reads per OTU is defined in min_otu, the minimum detection
proportion in minp_otu, and the minimum number of reads per sample in min_sample.
Once shallow OTUs and samples have been removed, we'll convert the OTU table to relative abundances.
min_otu <- 10
minp_otu <- 0.01
min_sample <- 100

## remove OTUs w/ <= min_otu reads, OTUs detected in <= minp_otu of samples,
## and samples w/ <= min_sample reads
otu <- otu[rowSums(otu) > min_otu, ]
otu <- otu[rowSums(otu > 0) / ncol(otu) > minp_otu, ]
otu <- otu[, colSums(otu) > min_sample]

## update metadata with the remaining samples
meta <- dplyr::filter(meta, X %in% colnames(otu))

## remove empty OTUs
otu <- otu[rowSums(otu) > 0, ]

## convert to relative abundance
abun_otu <- t(t(otu) / rowSums(t(otu)))

## add pseudo counts
zeroabun <- 0
abun_otu <- abun_otu + zeroabun
Next, we need to calculate the p-value, effect size, and standard error for each OTU.
Here, we'll compare UC/CD vs. nonIBD controls.
We'll put these results into a data frame and label the columns with the
standardized names used downstream (pval, SE, effect_size, test_statistic).
The test statistic is the one returned by wilcox.test().
Note that the effect here is calculated as the log fold change of mean abundance in controls
relative to cases (i.e. log(mean_abun[controls]/mean_abun[cases])).
While we're at it, we'll also calculate the mean abundance and ubiquity (detection rate)
of each OTU. Later, we can assign one of these to a new column called ind_covariate
for use in downstream steps.
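The test_microbiome() helper itself is sourced from ../R above. As a rough sketch of the computations just described (illustrative only; the real helper may differ in its details, e.g. in how the SE column is computed, which is omitted here), it might look like:

## Illustrative sketch of a test_microbiome()-style helper. The real function
## is sourced from ../R and may differ; the SE column is omitted here.
test_microbiome_sketch <- function(abundance, shift, is_case) {
    stats <- apply(abundance, 1, function(x) {
        wt <- wilcox.test(x[is_case], x[!is_case])
        c(pval = wt$p.value,
          test_statistic = unname(wt$statistic),
          ## log fold change of mean abundance in controls relative to cases
          effect_size = log(mean(x[!is_case]) / mean(x[is_case])))
    })
    res <- as.data.frame(t(stats))
    res$otu <- rownames(abundance)
    ## covariates: ubiquity (detection rate) and mean abundance when detected
    res$ubiquity <- rowMeans(abundance > shift)
    res$mean_abun_present <- apply(abundance, 1, function(x) mean(x[x > shift]))
    res
}

For the comparison above, is_case would be meta$DiseaseState %in% c("CD", "UC") and shift would be zeroabun.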
if (!file.exists(otu_result_file)) {
    res <- test_microbiome(abundance = abun_otu, shift = zeroabun,
                           is_case = meta$DiseaseState %in% c("CD", "UC"))
    saveRDS(res, file = otu_result_file)
} else {
    res <- readRDS(otu_result_file)
}
Finally, let's add phylogeny as covariates, with a separate column for each taxonomic level.
res <- tidyr::separate(res, otu,
                       c("kingdom", "phylum", "class", "order", "family",
                         "genus", "species", "denovo"),
                       sep = ";", remove = FALSE)
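As a quick sanity check (a hypothetical addition, not in the original analysis), we can count how many OTUs fall into each phylum after the split:

## tabulate OTUs per phylum to confirm the taxonomy columns parsed correctly
table(res$phylum)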
Here we check whether the covariates indeed look informative.
strat_hist(res, pvalue="pval", covariate="ubiquity", maxy=10, binwidth=0.05)
rank_scatter(res, pvalue="pval", covariate="ubiquity")
strat_hist(res, pvalue="pval", covariate="mean_abun_present", maxy=8, binwidth=0.05)
rank_scatter(res, pvalue="pval", covariate="mean_abun_present")
Let's look at phylum-level stratification first. A priori, we might expect Proteobacteria to be enriched for low p-values, but that expectation is speculative, and phylogeny may well not be informative at all.
ggplot(res, aes(x=pval)) + geom_histogram() + facet_wrap(~phylum, scales = "free")
Phylum doesn't look particularly informative either.
Let's use ubiquity as our ind_covariate.
res <- dplyr::mutate(res, ind_covariate = ubiquity)
First, we'll create an object of class BenchDesign to hold the data and
add the benchmark methods to the BenchDesign object. We remove ASH from
the default comparison.
Then, we'll construct the SummarizedBenchmark object, which will run
the functions specified in each method (these are actually sourced in from the
helper scripts).
if (!file.exists(otu_bench_file)) {
    bd <- initializeBenchDesign()
    bd <- dropBMethod(bd, "ashq")
    sb <- buildBench(bd, data = res, ftCols = "ind_covariate")
    metadata(sb)$data_download_link <-
        "https://zenodo.org/record/840333/files/ibd_alm_results.tar.gz"
    saveRDS(sb, file = otu_bench_file)
} else {
    sb <- readRDS(otu_bench_file)
}
Next, we'll add the default performance metrics for q-value assays. First, we have to rename the assay to 'qvalue'.
assayNames(sb) <- "qvalue"
sb <- addDefaultMetrics(sb)
Now, we'll plot the results.
rejections_scatter(sb, as_fraction=FALSE, supplementary=FALSE)
rejection_scatter_bins(sb, covariate="ind_covariate", supplementary=FALSE)
## sanity checks on the q-values: ASH was dropped from this comparison,
## so its column is only inspected if present
if ("ashq" %in% colnames(assays(sb)[["qvalue"]])) {
    max(assays(sb)[["qvalue"]][, "ashq"])
}
## count missing q-values for methods that can return NAs
sum(is.na(assays(sb)[["qvalue"]][, "scott-empirical"]))
sum(is.na(assays(sb)[["qvalue"]][, "lfdr"]))
covariateLinePlot(sb, alpha = 0.05, covname = "ind_covariate")
Next, we'll aggregate the OTU abundances to the genus level and repeat the analysis.

## add column with otu names
genus_df <- as.data.frame(abun_otu)
genus_df <- dplyr::as_tibble(genus_df, rownames = "otu")

## extract the genus information (6th field of the taxonomy string)
genus_df <- dplyr::mutate(genus_df, genus = sapply(strsplit(otu, ";"), `[`, 6))

## gather into tidy format
genus_df <- tidyr::gather(genus_df, key = "sample", value = "abun", -otu, -genus)

## get rid of unannotated genera, and sum abundances within each genus
genus_df <- genus_df %>%
    dplyr::filter(genus != "g__") %>%
    dplyr::group_by(genus, sample) %>%
    dplyr::summarise(total_abun = sum(abun)) %>%
    ungroup()

## convert back to wide format
genus_df <- tidyr::spread(genus_df, sample, total_abun)
genus_df <- as.data.frame(as.list(dplyr::select(genus_df, -genus)),
                          row.names = genus_df$genus)

## re-order columns to match metadata
genus_df <- genus_df[, match(meta$X, colnames(genus_df))]

## use a matrix
genus <- as.matrix(genus_df)
if (!file.exists(gns_result_file)) {
    res <- test_microbiome(abundance = genus, shift = zeroabun,
                           is_case = meta$DiseaseState %in% c("CD", "UC"))
    saveRDS(res, file = gns_result_file)
} else {
    res <- readRDS(gns_result_file)
}
We also add a random (uninformative) covariate to test at the genus level.
set.seed(33128)
res$rand_covar <- rnorm(nrow(res))
Here we check whether the covariates indeed look informative.
strat_hist(res, pvalue="pval", covariate="ubiquity", maxy=10, binwidth=0.05)
rank_scatter(res, pvalue="pval", covariate="ubiquity")
Something odd is happening at p = 0.20 and p = 0.40; this may have something to do with ties in the Wilcoxon test. Either way, ubiquity looks somewhat informative in the scatter plot, though not really in the histograms.
strat_hist(res, pvalue="pval", covariate="mean_abun_present", maxy=8, binwidth=0.05)
rank_scatter(res, pvalue="pval", covariate="mean_abun_present")
strat_hist(res, pvalue="pval", covariate="rand_covar", maxy=8, binwidth=0.05)
rank_scatter(res, pvalue="pval", covariate="rand_covar")
Let's use ubiquity as our ind_covariate.
res <- dplyr::mutate(res, ind_covariate = ubiquity)
First, we'll create an object of class BenchDesign to hold the data and
add the benchmark methods to the BenchDesign object. We remove ASH from
the comparison.
Then, we'll construct the SummarizedBenchmark object, which will run
the functions specified in each method (these are actually sourced in from the
helper scripts).
if (!file.exists(gns_bench_file)) {
    bd <- initializeBenchDesign()
    bd <- dropBMethod(bd, "ashq")
    sb <- buildBench(bd, data = res, ftCols = "ind_covariate")
    metadata(sb)$data_download_link <-
        "https://zenodo.org/record/840333/files/ibd_alm_results.tar.gz"
    saveRDS(sb, file = gns_bench_file)
} else {
    sb <- readRDS(gns_bench_file)
}
Next, we'll add the default performance metrics for q-value assays. First, we have to rename the assay to 'qvalue'.
assayNames(sb) <- "qvalue"
sb <- addDefaultMetrics(sb)
Now, we'll plot the results.
rejections_scatter(sb, as_fraction=FALSE, supplementary=FALSE)
rejection_scatter_bins(sb, covariate="ind_covariate", supplementary=FALSE)

plotFDRMethodsOverlap(sb, alpha=0.1, supplementary=FALSE, order.by="freq", nsets=100)

methods <- c("lfdr", "ihw-a10", "bl-df03", "qvalue", "bh", "bonf")
plotCovariateBoxplots(sb, alpha=0.1, nsets=6, methods=methods)
covariateLinePlot(sb, alpha = 0.05, covname = "ind_covariate")
Let's use log10(ubiquity) as our ind_covariate.
res <- dplyr::mutate(res, ind_covariate = log10(ubiquity))
First, we'll create an object of class BenchDesign to hold the data and
add the benchmark methods to the BenchDesign object. We remove ASH from
the comparison.
Then, we'll construct the SummarizedBenchmark object, which will run
the functions specified in each method (these are actually sourced in from the
helper scripts).
if (!file.exists(gns_bench_file_log_ubi)) {
    bd <- initializeBenchDesign()
    bd <- dropBMethod(bd, "ashq")
    sb <- buildBench(bd, data = res, ftCols = "ind_covariate")
    metadata(sb)$data_download_link <-
        "https://zenodo.org/record/840333/files/ibd_alm_results.tar.gz"
    saveRDS(sb, file = gns_bench_file_log_ubi)
} else {
    sb <- readRDS(gns_bench_file_log_ubi)
}
Next, we'll add the default performance metrics for q-value assays. First, we have to rename the assay to 'qvalue'.
assayNames(sb) <- "qvalue"
sb <- addDefaultMetrics(sb)
Now, we'll plot the results.
rejections_scatter(sb, as_fraction=FALSE, supplementary=FALSE)
rejection_scatter_bins(sb, covariate="ind_covariate", supplementary=FALSE)

plotFDRMethodsOverlap(sb, alpha=0.1, supplementary=FALSE, order.by="freq", nsets=100)

methods <- c("lfdr", "ihw-a10", "bl-df03", "qvalue", "bh", "bonf")
plotCovariateBoxplots(sb, alpha=0.1, nsets=6, methods=methods)
covariateLinePlot(sb, alpha = 0.05, covname = "ind_covariate")
Let's use mean_abun_present as our ind_covariate.
res <- dplyr::mutate(res, ind_covariate = mean_abun_present)
First, we'll create an object of class BenchDesign to hold the data and
add the benchmark methods to the BenchDesign object. We remove ASH from
the comparison.
Then, we'll construct the SummarizedBenchmark object, which will run
the functions specified in each method (these are actually sourced in from the
helper scripts).
if (!file.exists(gns_bench_file_abun)) {
    bd <- initializeBenchDesign()
    bd <- dropBMethod(bd, "ashq")
    sb <- buildBench(bd, data = res, ftCols = "ind_covariate")
    metadata(sb)$data_download_link <-
        "https://zenodo.org/record/840333/files/ibd_alm_results.tar.gz"
    saveRDS(sb, file = gns_bench_file_abun)
} else {
    sb <- readRDS(gns_bench_file_abun)
}
Next, we'll add the default performance metrics for q-value assays. First, we have to rename the assay to 'qvalue'.
assayNames(sb) <- "qvalue"
sb <- addDefaultMetrics(sb)
Now, we'll plot the results.
rejections_scatter(sb, as_fraction=FALSE, supplementary=FALSE)
rejection_scatter_bins(sb, covariate="ind_covariate", supplementary=FALSE)

plotFDRMethodsOverlap(sb, alpha=0.1, supplementary=FALSE, order.by="freq", nsets=100)

methods <- c("lfdr", "ihw-a10", "bl-df03", "qvalue", "bh", "bonf")
plotCovariateBoxplots(sb, alpha=0.1, nsets=6, methods=methods)
covariateLinePlot(sb, alpha = 0.05, covname = "ind_covariate")
Let's use the random (uninformative) covariate as our ind_covariate.
res <- dplyr::mutate(res, ind_covariate = rand_covar)
First, we'll create an object of class BenchDesign to hold the data and
add the benchmark methods to the BenchDesign object. We remove ASH from
the comparison.
Then, we'll construct the SummarizedBenchmark object, which will run
the functions specified in each method (these are actually sourced in from the
helper scripts).
if (!file.exists(gns_bench_file_uninf)) {
    bd <- initializeBenchDesign()
    bd <- dropBMethod(bd, "ashq")
    sb_rand <- buildBench(bd, data = res, ftCols = "ind_covariate")
    metadata(sb_rand)$data_download_link <-
        "https://zenodo.org/record/840333/files/ibd_alm_results.tar.gz"
    saveRDS(sb_rand, file = gns_bench_file_uninf)
} else {
    sb_rand <- readRDS(gns_bench_file_uninf)
}
Next, we'll add the default performance metrics for q-value assays. First, we have to rename the assay to 'qvalue'.
assayNames(sb_rand) <- "qvalue"
sb_rand <- addDefaultMetrics(sb_rand)
Now, we'll plot the results.
rejections_scatter(sb_rand, as_fraction=FALSE, supplementary=FALSE)
rejection_scatter_bins(sb_rand, covariate="ind_covariate", supplementary=FALSE)

plotFDRMethodsOverlap(sb_rand, alpha=0.1, supplementary=FALSE, order.by="freq", nsets=100)

methods <- c("lfdr", "ihw-a10", "bl-df03", "qvalue", "bh", "bonf")
plotCovariateBoxplots(sb_rand, alpha=0.1, nsets=6, methods=methods)
covariateLinePlot(sb_rand, alpha = 0.05, covname = "ind_covariate")
Here we compare the method ranks for the different comparisons at alpha = 0.10.
plotMethodRanks(c(otu_bench_file, gns_bench_file, gns_bench_file_log_ubi,
                  gns_bench_file_abun, gns_bench_file_uninf),
                colLabels = c("OTU-ubiquity", "Genus-ubiquity", "Genus-log",
                              "Genus-abun", "Genus-uninf"),
                alpha = 0.10, xlab = "Comparison")
sessionInfo()