```{r}
library(OmicsEV)
library(knitr)
library(png)
library(kableExtra)
library(formattable)
#library(DT)
knitr::opts_chunk$set(echo = FALSE)
```
```{r}
final_res <- readRDS(params$input)
basic_metrics <- final_res$basic_metrics
network_table <- final_res$network_table
protein_rna <- final_res$protein_rna
```
In this evaluation, there are a total of `r final_res$input_parameters$n_datasets` data tables. Evaluation metrics from the OmicsEV package for these data tables are included in this report, beginning with a summary of the data. The sample distribution by class for each data table is shown in the table below.
```{r}
class_samples_table <- get_sample_data(final_res$input_parameters$datasets)
kable(class_samples_table, "html") %>%
  kable_styling(bootstrap_options = "striped", full_width = F)
```
Detailed information for each sample included in all data tables is shown below.
```{r}
kable(final_res$input_parameters$sample_list, "html") %>%
  kable_styling(bootstrap_options = "striped", full_width = F) %>%
  scroll_box(height = "400px", width = "100%")
```
```{r}
n_batch <- final_res$input_parameters$sample_list$batch %>% unique() %>% length()
dataset_names <- names(basic_metrics$datasets)
```
The table below provides an overview of all the quantitative metrics generated in the evaluation. For each metric, the value for the best data table is highlighted in bold red. Details for each metric can be found in the corresponding sections below.
```{r}
ov_table_list <- generate_overview_table(final_res)
ov_table <- ov_table_list$dat
kable(format_overview_table(ov_table_list), "html", digits = 4, escape = FALSE) %>%
  kable_styling(bootstrap_options = "striped", full_width = F) %>%
  scroll_box(height = "400px", width = "100%")
```
The radar plot below summarizes the results from the overview table above. To generate the radar plot, each metric is scaled to the range 0 to 1 and, where necessary, inverted so that higher values indicate better data quality. Scaled values are shown in parentheses in the table.
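As a rough illustration of this scaling, the sketch below min-max scales a metric vector to [0, 1] and inverts metrics for which lower raw values are better. The function name `scale_metric` and the example values are hypothetical; the actual `generate_overview_table()`/`plot_radar()` internals may differ.

```{r, eval=FALSE}
# Minimal sketch of the 0-1 scaling used for the radar plot (assumed logic).
scale_metric <- function(x, lower_is_better = FALSE) {
  rng <- range(x, na.rm = TRUE)
  s <- (x - rng[1]) / (rng[2] - rng[1])   # min-max scale to [0, 1]
  if (lower_is_better) s <- 1 - s         # invert "lower is better" metrics
  s
}

# Example: a "higher is better" and a "lower is better" metric for three data tables
scale_metric(c(0.62, 0.75, 0.81))
scale_metric(c(0.30, 0.12, 0.05), lower_is_better = TRUE)
```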
```{r}
if (ncol(ov_table) > 3) {
  if ("#identified features" %in% names(ov_table) &&
      "#quantifiable features" %in% names(ov_table)) {
    # no process
    ov_data <- ov_table
  } else {
    ov_data <- ov_table[, -c(2, 3)]
  }
  plot_radar(ov_data)
}
```
The table below shows the number of identified and quantified proteins or genes for each data table. Identified proteins or genes are those with a measurement in any sample in a data table, whereas quantified proteins or genes are those that remain after filtering out those with missing values in more than 50% of the samples in a data table. The values in parentheses are the percentages of proteins or genes identified or quantified relative to the total number of proteins or genes (`r sprintf("%d", final_res$input_parameters$total_features)`) in the study species.
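The sketch below illustrates these two definitions, assuming a hypothetical expression matrix `expr_mat` with proteins/genes in rows and samples in columns; it is not the exact OmicsEV implementation.

```{r, eval=FALSE}
# Sketch of the identified/quantified definitions described above.
count_features <- function(expr_mat, total_features) {
  identified <- sum(rowSums(!is.na(expr_mat)) >= 1)    # measured in at least one sample
  quantified <- sum(rowMeans(is.na(expr_mat)) <= 0.5)  # missing in no more than 50% of samples
  data.frame(
    identified = sprintf("%d (%.1f%%)", identified, 100 * identified / total_features),
    quantified = sprintf("%d (%.1f%%)", quantified, 100 * quantified / total_features)
  )
}
```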
```{r}
id_tables <- get_formated_id_table(final_res)
kable(id_tables$show_id_table, "html", escape = FALSE) %>%
  kable_styling(bootstrap_options = "striped", full_width = F)
# %>% row_spec(which.max(id_tables), bold = T, color = "red")
```
```{r, results='asis'}
if (!is.null(final_res$basic_metrics$datasets_id_overlap)) {
  cat("The upset chart below shows the overlap between proteins or genes identified in each data table. The numbers of proteins or genes commonly identified in different combinations of data tables are indicated in the top bar chart, and the specific combinations of data tables containing those proteins or genes are indicated with solid points below the bar chart. Total identifications for each data table are indicated on the right as 'Set size'.\n")
  knitr::include_graphics(final_res$basic_metrics$datasets_id_overlap %>% normalizePath)
}
```
The figures below show the number of proteins or genes identified/quantified (non-missing values) in each sample. Samples from different batches are coded with different shapes, and samples from different classes are coded with different colors. A separate figure is shown for each data table.
```{r, results='asis'}
figs <- get_metrics(final_res$basic_metrics$datasets,
                    metric = "features_number_distribution")
figs <- get_full_path(figs)
for (i in 1:length(figs)) {
  knitr::include_graphics(figs[i] %>% normalizePath)
  cat(names(figs)[i])
  cat("![](", figs[i], ")")
}
```
The missing value distribution provides an overview of the completeness of the data. The table below shows the proportion of non-missing values across all samples in each data table.
```{r}
missing_value_table <- lapply(final_res$basic_metrics$datasets,
                              function(y) y[["non_missing_value_ratio"]]) %>%
  dplyr::bind_rows() %>%
  dplyr::rename(non_missing_value_ratio = ratio) %>%
  dplyr::rename(`data table` = dataSet)
kable(missing_value_table, "html", escape = FALSE, digits = 4) %>%
  kable_styling(bootstrap_options = "striped", full_width = F) %>%
  row_spec(which.max(missing_value_table$non_missing_value_ratio), bold = T, color = "red")
```
The following barplots show the missing value distribution for each data table as the number (Y axis) and percentage (number above each bar) of proteins or genes in each bin. Genes are binned by the proportion of samples with missing values, from 0.1 to 1 in increments of 0.1, where 0.1 indicates missing values in no more than 10% of the samples and 1 indicates missing values in all samples.
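A minimal sketch of this binning is shown below, assuming a hypothetical expression matrix `expr_mat` with genes in rows and samples in columns; whether fully observed genes are excluded is an assumption here, and the actual OmicsEV code may differ.

```{r, eval=FALSE}
# Sketch of the missing value binning behind the barplots (assumed logic).
missing_value_bins <- function(expr_mat) {
  miss_prop <- rowMeans(is.na(expr_mat))   # per-gene fraction of samples with NA
  miss_prop <- miss_prop[miss_prop > 0]    # assumption: fully observed genes are excluded
  bins <- cut(miss_prop, breaks = seq(0, 1, by = 0.1))  # (0,0.1], (0.1,0.2], ..., (0.9,1]
  counts <- table(bins)
  data.frame(bin = names(counts),
             n = as.integer(counts),
             percent = round(100 * as.integer(counts) / nrow(expr_mat), 1))
}
```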
```{r, results='asis'}
figs <- get_metrics(final_res$basic_metrics$datasets,
                    metric = "missing_value_distribution")
figs <- get_full_path(figs)
for (i in 1:length(figs)) {
  knitr::include_graphics(figs[i] %>% normalizePath)
  cat(names(figs)[i])
  cat("![](", figs[i], ")")
}
```
Normalized data are expected to be centered around a similar value and to show similar distributions across all samples. The boxplots below show the distribution of protein or gene expression measurements across samples in each data table, allowing a qualitative assessment of the normalization. Samples are shown on the X axis in input order. The Y axis shows log2-transformed protein or gene values. Samples from different classes are coded with different colors.
```{r, results='asis'}
figs <- get_metrics(final_res$basic_metrics$datasets,
                    metric = "features_quant_distribution")
figs <- get_full_path(figs)
for (i in 1:length(figs)) {
  knitr::include_graphics(figs[i] %>% normalizePath)
  cat(names(figs)[i])
  cat("![](", figs[i], ")")
}
```
To quantify the normalization effect, we tested how well the data can distinguish between each pair of samples. If the distributions are similar for the two samples in a given pair, overall feature abundance (the levels of all features in one sample versus the other) should not be sufficient to predict which sample is which. Therefore, for each pair of samples, an AUROC test was performed to quantify the ability of feature abundance to distinguish the two samples, and a data_dist_similarity score was then calculated as 1 - 2*abs(AUROC - 0.5). This score ranges from 0 to 1, and a higher score indicates better normalized data quality (no systematic difference between the two samples). The final metric for each data table is the median of the scores from all sample pairs. The column 'n' shows the total number of sample pairs in the analysis.
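The sketch below illustrates the per-pair calculation, assuming two log-scale abundance vectors `x` and `y` for the same features in two samples, with the AUROC obtained from the Wilcoxon rank-sum statistic; this is not the exact OmicsEV implementation.

```{r, eval=FALSE}
# Sketch of the data_dist_similarity score for one pair of samples (assumed logic).
pair_dist_similarity <- function(x, y) {
  ok <- complete.cases(x, y)
  x <- x[ok]; y <- y[ok]
  # AUROC for separating the two samples by feature abundance, via the Wilcoxon statistic
  w <- wilcox.test(x, y, exact = FALSE)$statistic
  auroc <- as.numeric(w) / (length(x) * length(y))
  1 - 2 * abs(auroc - 0.5)   # 1 = indistinguishable distributions (good normalization)
}
# The data-table level metric is the median of this score over all sample pairs.
```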
if("quant_median_metric" %in% names(final_res$basic_metrics)){ kable(final_res$basic_metrics$quant_median_metric %>% dplyr::rename(data_dist_similarity=quant_median_metric) %>% dplyr::rename(`data table`=dataSet), "html",escape = FALSE, digits = 4) %>% kable_styling(bootstrap_options = "striped", full_width = F) %>% row_spec(which.max(final_res$basic_metrics$quant_median_metric$quant_median_metric), bold = T, color = "red") }
The density plots below show the expression distribution for each sample (one line per sample) in each data table. The Y axis shows the density over the range of log2-transformed protein or gene expression values (X axis).
```{r}
knitr::include_graphics(final_res$basic_metrics$density_plot %>% normalizePath)
```
```{r, results='asis'}
if (n_batch >= 2) {
  if (!is.null(final_res$batch_effect_metrics$kbet)) {
    cat("## Batch effect evaluation using kBET\n\n")
    cat("<p>In this section, we used the k-nearest neighbour batch effect test (kBET) to quantify batch effects. First, the algorithm creates a k-nearest neighbour matrix and chooses 10% of the samples to check the batch label distribution in their neighbourhood. If the local batch label distribution is sufficiently similar to the global batch label distribution, the $\\chi^2$-test does not reject the null hypothesis (that is, 'all batches are well mixed'). The result of kBET is the average test rejection rate; the lower this rate, the less bias is introduced by the batch effect.</p>\n")
    kbet_res <- final_res$batch_effect_metrics$kbet
    kable(kbet_res$table, "html", digits = 4, escape = FALSE) %>%
      kable_styling(bootstrap_options = "striped", full_width = F)
  }
}
```
if(n_batch >= 2){ cat("## Silhouette width\n\n") cat("<p>The silhouette width s(i) ranges from –1 to 1, with s(i) -> 1 if two clusters are separate and s(i) -> −1 if two clusters overlap but have dissimilar variance. If s(i) -> 0, both clusters have roughly the same structure. Thus, we use the absolute value |s| as an indicator for the presence or absence of batch effects (the greater |s| is, the higher the batch effect is). This analysis is done using the function <b>batch_sil</b> from the R package <a href='https://github.com/theislab/kBET' target='_blank'><b>kBET</b></a>.</p>\n") sil_res <- final_res$batch_effect_metrics$sil show_sil <- data.frame(dataSet=names(sil_res),silhouette_width=sil_res) %>% dplyr::rename(`data table`=dataSet) rownames(show_sil) <- NULL kable(show_sil, "html",digits = 4,escape = FALSE) %>% kable_styling(bootstrap_options = "striped", full_width = F) %>% row_spec(which.min(abs(show_sil$silhouette_width)), bold = T, color = "red") }
if(n_batch >= 2){ cat("## PCA with batch annotation\n\n") cat("<p>For each principal component (PC) from PCA, we calculate the Pearson's correlation coefficient for that PC with batch covariate b:</p>\n") cat("<p style='text-align: center;'>r<sub>i</sub> =corr(PC<sub>i</sub>,b)</p>\n") cat("<p>In a linear model with a single dependent, as is the case here for correlation of a given PC to a batch covariate, the coefficient of determination for batch b on PC<sub>i</sub>, R<sup>2</sup>, is the squared Pearson's correlation coefficient:</p>\n") cat("<p style='text-align: center;'>R<sup>2</sup>(PC<sub>i</sub>,b) = r<sub>i</sub><sup>2</sup></p>\n") cat("<p>The table below shows correlation coefficients for each PC for the first 10 PCs in each data table. The significance of the correlation coefficient was estimated either with a t-test or a one-way ANOVA. R<sup>2</sup> values highlighted with red indicate significant correlation (p-value <= 0.05) between batch and the corresponding PC. This analysis is done using the function <b>pcRegression</b> from the R package <a href='https://github.com/theislab/kBET' target='_blank'><b>kBET</b></a>.</p>\n") pcr_res <- final_res$batch_effect_metrics$pcr kable(pcr_res$table, "html",digits = 4,escape = FALSE) %>% kable_styling(bootstrap_options = "striped", full_width = F) }
if(n_batch >= 2){ cat("<p>The percentage of variance explained for each PC is shown in the table below:</p>\n") kable(pcr_res$explained_var, "html",escape = FALSE) %>% kable_styling(bootstrap_options = "striped", full_width = F) }
if(n_batch >= 2){ cat("<p>Greater batch effect is more likely to be present when a PC that explains a higher percentage of variance shows significant correlation with the batch covariate. Therefore, we use the 'Scaled PC regression' metric (pcRegscale), i.e. the total variance of PCs which correlate significantly with batch covariate (FDR<0.05) scaled by the total variance of",pcr_res$top_pc,"PCs, to quantify the batch effect:</p>\n") kable(pcr_res$pcRegscale %>% dplyr::rename(`data table`=dataSet), "html",escape = FALSE, digits = 4) %>% kable_styling(bootstrap_options = "striped", full_width = F) %>% row_spec(which.min(pcr_res$pcRegscale$pcRegscale), bold = T, color = "red") }
```{r, results='asis'}
if (n_batch >= 2) {
  # cat("## Batch effect evaluation using pca score plot\n\n")
  cat("The figures below show the PCA score plots for the top three PCs for each data table. Samples from different batches are coded with different colors in the plots.\n")
  fig <- final_res$pca_batch_plot$fig
  knitr::include_graphics(fig %>% normalizePath)
}
```
```{r}
if (n_batch >= 2) {
  if (!is.null(final_res$pca_batch_plot_13)) {
    fig <- final_res$pca_batch_plot_13$fig
    knitr::include_graphics(fig %>% normalizePath)
  }
}
```
Another way to qualitatively assess batch effect is to compare correlations between samples from the same batch with correlations between samples from different batches using heatmaps. The following figures show Spearman correlation heatmaps for all pairs of samples (all samples included in both rows and columns) for each data table. The color indicates the correlation between samples, and the samples are ordered by batch. A concentration of high correlation values (red) within same-batch blocks compared to across-batch blocks indicates the presence of a batch effect.
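A comparable heatmap can be produced as sketched below, here using the pheatmap package (an assumption; OmicsEV's own plotting code may differ), with hypothetical inputs `expr_mat` (samples in columns) and `batch`.

```{r, eval=FALSE}
library(pheatmap)

# Sketch of a batch-ordered sample-wise Spearman correlation heatmap (assumed logic).
sample_cor_heatmap <- function(expr_mat, batch) {
  ord <- order(batch)                                    # order samples by batch
  cor_mat <- cor(expr_mat[, ord], use = "pairwise.complete.obs",
                 method = "spearman")
  pheatmap(cor_mat, cluster_rows = FALSE, cluster_cols = FALSE,
           annotation_col = data.frame(batch = factor(batch[ord]),
                                       row.names = colnames(expr_mat)[ord]))
}
```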
```{r, results='asis'}
figs <- get_metrics(final_res$basic_metrics$datasets,
                    metric = "sample_wise_cor_heatmap")
figs <- get_full_path(figs)
for (i in 1:length(figs)) {
  knitr::include_graphics(figs[i] %>% normalizePath)
  cat(names(figs)[i])
  cat("![](", figs[i], ")")
}
```
if(!is.null(final_res$network_table)){ cat("## Correlation among protein complex members\n") cat("<p>Members of the same protein complex often show greater correlation in gene and protein expression (IntraComplex correlation) than genes or proteins that are in different complexes (InterComplex correlation). Thus, one way to evaluate the quality of the biological signal present in a data table is to compare IntraComplex correlation to InterComplex correlation. Furthermore, because of the need to preserve stoichiometry between protein complex members, the difference between IntraComplex correlation and InterComplex correlation is often greater at the protein level than at the RNA data. If both RNA and protein data tables are available, observing that this difference is more pronounced in the protein data table than the RNA data table serves as an indicator for the quality of the protein data. We use the protein complexes from the <a href='https://mips.helmholtz-muenchen.de/corum/' target='_blank'>CORUM</a> database in this analysis.</p>\n<p>The boxplots below show the distributions and ranges for pairwise correlations between genes or proteins from the same complex and for genes and proteins from different complexes for each data table.</p>\n") include_graphics(final_res$network_table$network_boxplot %>% normalizePath) }
if(!is.null(final_res$network_table)){ cat("<p>The table below shows a summary of the evaluation. 'diff' is Cor(intra) - Cor(inter). 'complex_auc' is the AUROC value based on correlation of protein pairs from different groups.</p>\n") final_res$network_table$cor <- final_res$network_table$cor %>% dplyr::rename(complex_auc=ks) kable(final_res$network_table$cor %>% dplyr::rename(`data table`=dataSet), "html",digits = 4) %>% kable_styling(bootstrap_options = "striped", full_width = F)%>% row_spec(which.max(final_res$network_table$cor$complex_auc), bold = T, color = "red") }
if(!is.null(final_res$fun_pred)){ cat("## Gene function prediction\n") cat("<p>Previous studies have shown that expression correlation is often higher for functionally related genes or proteins than for unrelated genes or proteins and that this correlation is greater when considering protein data than when considering RNA data (<a href='https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5217778/' target='_blank'>Wang, Jing, et al. Molecular & Cellular Proteomics 16.1 (2017): 121-134.</a>). Therefore, we can also evaluate the biological signal present in a data table by evaluating functional category predictions made using a co-expression network generated from each data table.</p>\n") cat("<p>In this evaluation, each data table was used to build a co-expression network. For a selected network and a selected functional category (such as a selected category from GO or KEGG), proteins/genes annotated to the category and also included in the network were defined as a positive protein/gene set, and other proteins/genes in the network constituted the negative protein/gene set for the category. For a selected functional category, a subset of the proteins/genes were used as seed proteins/genes for random walk through the network to calculate scores for other proteins/genes. A higher score for a protein/gene represents a closer relationship between the protein/gene and the seed proteins/genes. The table below shows AUROCs of the prediction performance using this score for each selected functional category.</p>\n") kable(final_res$fun_pred$table, "html",digits = 3,escape = FALSE) %>% kable_styling(bootstrap_options = "striped", full_width = F) %>% scroll_box(height = "400px",width = "100%") }
```{r, results='asis'}
if (!is.null(final_res$fun_pred)) {
  if (!is.null(final_res$fun_pred$fig)) {
    cat("<p>The rank boxplots below summarize the relative performance of the data tables in the functional prediction analysis. For each functional category, a rank is assigned to each data table based on its AUROC compared to the other data tables, where the best functional prediction rank is 1 and the poorest rank is the number of data tables.</p>\n")
    include_graphics(final_res$fun_pred$fig %>% normalizePath)
  }
}
```
```{r, results='asis'}
if (!is.null(final_res$fun_pred)) {
  if (!is.null(final_res$fun_pred$two_group_fig)) {
    cat("<p>The comparison of each protein (RNA) data table to a designated RNA (protein) data table is also summarized in the scatter plots below. For each point, the AUROC for a given category in the RNA data is plotted on the X axis, and the corresponding AUROC in the protein data table is plotted on the Y axis. The numbers of categories for which the protein data table outperforms the RNA data table (AUROC(protein) > 1.1 * AUROC(RNA); red dots) and vice versa (AUROC(RNA) > 1.1 * AUROC(protein); blue dots) are also shown.</p>\n")
    figs <- sapply(final_res$fun_pred$two_group_fig, function(x) { x })
    figs <- get_full_path(figs)
    for (i in 1:length(figs)) {
      knitr::include_graphics(figs[i] %>% normalizePath)
      cat(names(figs)[i])
      cat("![](", figs[i], ")")
    }
  }
}
```
if(!is.null(final_res$ml)){ cat("## Sample class prediction\n") cat("<p>OmicsEV also allows for assessment of how well each data table can predict a user specified class for each sample. For each data table, machine learning models are built to predict sample class: ",paste(final_res$ml$class_group,collapse = ","),". In OmicsEV, random forest models are built, and the models are evaluated using repeated 5 fold cross validation (",final_res$input_parameters$n_repeats_for_ml," times). Please note, depending on the class specified, this metric may or may not provide an indication of data quality. The results of AUROC analysis performed using the models are summarized in the table and boxplots below.</p>\n",sep = "") kable(final_res$ml$table, "html",digits = 4) %>% kable_styling(bootstrap_options = "striped", full_width = F) %>% row_spec(which.max(final_res$ml$table$mean_ROC), bold = T, color = "red") }
```{r}
if (!is.null(final_res$ml)) {
  if (!is.null(final_res$ml$fig)) {
    include_graphics(final_res$ml$fig %>% normalizePath)
  }
}
```
Another approach for assessing how well each data table can distinguish between classes is to determine how well each class can be separated by principal component analysis (PCA). In PCA score plots for each data table below, each point is a sample that is colored by class and that has a shape reflecting the batch. For a given sample, the PC2 score is plotted on the Y-axis whereas the PC1 score is plotted on the X-axis. Ellipses highlighting clusters of samples in each class are colored by corresponding class, and the separation between these ellipses indicates how well the variances captured by the first two PCs can distinguish between samples from different classes.
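A comparable plot can be generated with ggplot2 as sketched below, assuming a complete expression matrix with samples in columns and hypothetical `sample_class` and `batch` vectors; this is not necessarily how OmicsEV draws its PCA score plots.

```{r, eval=FALSE}
library(ggplot2)

# Sketch of a PCA score plot with class-colored points, batch-coded shapes,
# and class ellipses (assumed plotting choices).
pca_class_plot <- function(expr_mat, sample_class, batch) {
  pca <- prcomp(t(expr_mat))
  scores <- data.frame(PC1 = pca$x[, 1], PC2 = pca$x[, 2],
                       class = factor(sample_class), batch = factor(batch))
  ggplot(scores, aes(PC1, PC2, color = class, shape = batch)) +
    geom_point(size = 2) +
    stat_ellipse(aes(group = class), show.legend = FALSE) +
    theme_bw()
}
```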
```{r, results='asis'}
figs <- get_metrics(final_res$basic_metrics$datasets, metric = "pca_with_batch")
figs <- get_full_path(figs)
for (i in 1:length(figs)) {
  knitr::include_graphics(figs[i] %>% normalizePath)
  cat(names(figs)[i])
  cat("![](", figs[i], ")")
}
```
Unsupervised hierarchical clustering can reveal patterns in the data (clusters of genes or samples that behave more similarly to each other than to other genes or samples). Each heatmap below shows the results of hierarchical clustering for a given data table using ComplexHeatmap. Genes/proteins are in rows, while samples are in columns and are labeled with their corresponding class to visualize any potential associations between classes and clusters.
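A minimal sketch of such a heatmap with ComplexHeatmap is shown below; the row scaling, clustering settings, and annotation are assumptions, and `expr_mat` and `sample_class` are hypothetical inputs.

```{r, eval=FALSE}
library(ComplexHeatmap)

# Sketch of a class-annotated clustered heatmap (assumed settings). Assumes a
# complete log-scale matrix with genes in rows and samples in columns.
class_cluster_heatmap <- function(expr_mat, sample_class) {
  mat <- t(scale(t(expr_mat)))                 # row-wise z-score for display
  Heatmap(mat,
          name = "z-score",
          top_annotation = HeatmapAnnotation(class = factor(sample_class)),
          show_row_names = FALSE,
          clustering_method_rows = "ward.D2",
          clustering_method_columns = "ward.D2")
}
```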
```{r, results='asis'}
figs <- get_metrics(final_res$basic_metrics$datasets, metric = "cluster_heatmap")
figs <- get_full_path(figs)
for (i in 1:length(figs)) {
  knitr::include_graphics(figs[i] %>% normalizePath)
  cat(names(figs)[i])
  cat("![](", figs[i], ")")
}
```
if(!is.null(final_res$snr)){ cat("## Noise to signal analysis\n") cat("<p>Noise to signal analysis.</p>\n") kable(final_res$snr$table, "html",digits = 3) %>% kable_styling(bootstrap_options = "striped", full_width = F)%>% row_spec(which(final_res$snr$table$nsr <= sort(final_res$snr$table$nsr,decreasing = FALSE)[3]), bold = T, color = "red") }
```{r}
if (!is.null(final_res$snr)) {
  if (!is.null(final_res$snr$fig)) {
    include_graphics(final_res$snr$fig %>% normalizePath)
  }
}
```
```{r, results='asis'}
if (!is.null(final_res$snr)) {
  figs <- unlist(final_res$snr$mad_figs)
  figs <- get_full_path(figs)
  for (i in 1:length(figs)) {
    knitr::include_graphics(figs[i] %>% normalizePath)
    cat(names(figs)[i])
    cat("![](", figs[i], ")")
  }
}
```
```{r, results='asis'}
cv_table <- get_cv_table(final_res$basic_metrics$datasets, "cv_stat")
if ("QC" %in% cv_table$class) {
  cat("# Platform reproducibility\n")
  cat("## Coefficient of variation distribution\n")
  cat("<p>Platform reproducibility is evaluated based on the coefficient of variation (CV) distribution of the QC samples.</p>\n")
  show_cv_table <- cv_table %>%
    dplyr::filter(class == "QC") %>%
    dplyr::select(dataSet, class, median_cv, cv30) %>%
    dplyr::rename(`data table` = dataSet) %>%
    dplyr::rename(median_CV = median_cv, CV30 = cv30)
  kable(show_cv_table, "html", digits = 4) %>%
    kable_styling(bootstrap_options = "striped", full_width = F) %>%
    row_spec(which.max(show_cv_table$CV30), bold = T, color = "red")
}
```
if("QC" %in% cv_table$class){ figs <- get_metrics(final_res$basic_metrics$datasets, metric = "cv_distribution") figs <- get_full_path(figs) for(i in 1:length(figs)){ knitr::include_graphics(figs[i] %>% normalizePath) cat(names(figs)[i]) cat("![](",figs[i],")") } }
The concordance between the protein data and RNA data can be used to assess data quality when both RNA and protein data tables are available. Here, we evaluate gene- and sample-wise correlations between the protein and RNA data tables.
if(!is.null(final_res$input_parameters$x2)){ cat("## Gene-wise mRNA-protein correlation\n") cat("<p>The table below shows the number of genes with measurements (n) in each data table as well as the median of all gene-wise Spearman correlations between mRNA and protein measurements. The columns n5, n6, n7 and n8 show the number of genes with correlation greater than 0.5, 0.6, 0.7 and 0.8, respectively.</p>\n") fw_table <- final_res$protein_rna$feature_wise_cor_table %>% dplyr::rename(gene_wise_cor=median_cor,`data table`=dataSet) kable(fw_table, "html",digits = 4) %>% kable_styling(bootstrap_options = "striped", full_width = F) %>% row_spec(which.max(fw_table$gene_wise_cor), bold = T, color = "red") }
```{r, results='asis'}
if (!is.null(final_res$input_parameters$x2)) {
  if (!is.null(final_res$protein_rna$gene_wise_cor_boxplot_fig)) {
    cat("<p>The Spearman correlation results are also shown for each gene/protein in the boxplot below.</p>\n")
    include_graphics(final_res$protein_rna$gene_wise_cor_boxplot_fig %>% normalizePath)
  }
}
```
```{r, results='asis'}
if (!is.null(final_res$input_parameters$x2)) {
  if (!is.null(final_res$protein_rna$gene_wise_cor_cdf_fig)) {
    cat("<p>Another way to visualize the differences between the distributions of all gene-wise RNA-protein correlations is with the cumulative distribution function (CDF) plot shown below. Here each line shows the cumulative distribution of the gene-wise correlations. The further the distribution function is shifted to the right, the more highly correlated the RNA and protein data are.</p>\n")
    include_graphics(final_res$protein_rna$gene_wise_cor_cdf_fig %>% normalizePath)
  }
}
```
if(!is.null(final_res$input_parameters$x2)){ cat("<p>The histograms below provide another way to visualize the distribution of correlations for each protein (or RNA) data table with the RNA (or protein) data. Here the bars showing binned frequencies of positive correlations are in red, while negative correlations are shown in the blue bins, and summary statistics are also provided.</p>\n") figs <- get_metrics(final_res$protein_rna$data, metric = "hist_fig") figs <- get_full_path(figs) for(i in 1:length(figs)){ knitr::include_graphics(figs[i] %>% normalizePath) cat(names(figs)[i]) cat("![](",figs[i],")") } }
if(!is.null(final_res$input_parameters$x2)){ cat("## Sample-wise mRNA-protein correlation\n") cat("<p>Sample-wise RNA-protein correlations are summarized in the table below as the median of Spearman correlations for matched protein and RNA data from all pairs of samples for each data table, while the violin plots below show the distributions of these correlations for each data table.</p>\n") gw_table <- final_res$protein_rna$sample_wise_cor_table %>% dplyr::rename(sample_wise_cor=median_cor,`data table`=dataSet) kable(gw_table, "html",digits = 4) %>% kable_styling(bootstrap_options = "striped", full_width = F)%>% row_spec(which.max(gw_table$sample_wise_cor), bold = T, color = "red") }
```{r}
if (!is.null(final_res$input_parameters$x2)) {
  include_graphics(final_res$protein_rna$sample_wise_cor_fig %>% normalizePath)
}
```