Rnmr1D package"

knitr::opts_chunk$set(echo = TRUE, warning=FALSE, fig.align="center")


Preamble

This document is an R vignette available under the CC BY-SA 4.0 license. It is part of the R package Rnmr1D, free software available under the GPL version 3 or later, copyright Institut National de la Recherche Agronomique (INRA).

A development version of the Rnmr1D package can be downloaded from GitHub. To install it, follow the instructions in the README.md file.
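For example, installation from GitHub could be done with the remotes package; this is only a sketch, and the repository path shown here (INRA/Rnmr1D) is a placeholder to be checked against the README.md:

# install.packages("remotes")
remotes::install_github("INRA/Rnmr1D")   # replace with the repository path given in README.md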



Features illustration of the Rnmr1D package

Rnmr1D is the main module of the NMRProcFlow web application (nmrprocflow.org) [1], dedicated to NMR spectra processing.


Description

The Rnmr1D R package performs the complete processing of a set of 1D NMR spectra, starting from the FIDs (raw data) and driven by a processing sequence (macro-command file). An additional file specifies all the spectra to be considered, associating each sample code with the levels of the experimental factors to which it belongs.

NMRProcFlow allows experts to build their own spectra processing workflow so that it can be re-applied to similar sets of NMR spectra, i.e. stated as use cases. By extension, running NMR spectra processing workflows in batch mode is relevant provided that the use cases are well mastered and highly reproducible, i.e. that the same Standard Operating Procedures (SOP) are applied. A subset of NMR spectra is first processed in interactive mode in order to build a well-suited workflow; this mode can be considered as the 'expert mode'. Then, other subsets that are similar, or that belong to the same case study, can be processed in batch mode, operating directly within an R session.

See the NMRProcFlow online documentation https://nmrprocflow.org/ for further information.


Test with the extra data provided within the package

To illustrate the possibilities of Rnmr1D 1.0, we will use the dataset provided within the package. This is a very small set of 1H NMR spectra (6 samples) acquired on a Bruker Avance III 500 MHz instrument (ZG sequence, solvent D2O, pH 6), derived from plant leaves. The experimental design of the study focused on a treatment (stress vs. control), with 3 replicates for each treatment level (unpublished).

library(Rnmr1D)
data_dir <- system.file("extra", package = "Rnmr1D")
RAWDIR <- file.path(data_dir, "CD_BBI_16P02")
CMDFILE <- file.path(data_dir, "NP_macro_cmd.txt")
SAMPLEFILE <- file.path(data_dir, "Samples.txt")

The samples table gives the correspondence with the raw spectra, as well as the levels of the experimental factors:

samples <- read.table(SAMPLEFILE, sep="\t", header=TRUE, stringsAsFactors=FALSE)
samples

The macro-command list for the processing:

CMDTEXT <- readLines(CMDFILE)
CMDTEXT[grep("^#$", CMDTEXT, invert=TRUE)]


Do the processing

doProcessing is the main function of this package: it performs the complete processing of a set of 1D NMR spectra, starting from the FIDs (raw data) and driven by a processing sequence (macro-command file). An additional file specifies all the spectra to be considered, associating each sample code with the levels of the experimental factors to which it belongs. In this way it is possible to select only a subset of spectra instead of the whole set.

out <- Rnmr1D::doProcessing(RAWDIR, cmdfile=CMDFILE, samplefile=SAMPLEFILE, ncpu=2)


The output list includes several metadata, data and other information.

ls(out)

out$infos


Spectra plots


Stacked Plot with a perspective effect

plotSpecMat(out$specMat, ppm_lim=c(0.5,5), K=0.33)

Overlaid Plot

plotSpecMat(out$specMat, ppm_lim=c(0.5,5), K=0, pY=0.1)

Some other plots to illustrate the possibilities

plotSpecMat(out$specMat, ppm_lim=c(0.5,5), K=0.33, asym=0)
cols <- rep("blue", length(out$samples$Treatment))
cols[out$samples$Treatment == "stress"] <- "red"
plotSpecMat(out$specMat, ppm_lim=c(0.5,5), K=0.67, dppm_max=0, cols=cols)


Apply additional processing

It is possible to apply additional processing after the main processing. For that, the doProcCmd function processes macro-commands given as a string array and applies them to the spectra set previously generated. In the previous processing, the bucketing was done with a uniform approach. We are going to replace it with an intelligent bucketing [2], which is more efficient at generating relevant variables, as we will see further on.

specMat.new <- Rnmr1D::doProcCmd(out, 
     c( "bucket aibin 10.2 10.5 0.2 3 0", "9.5 4.9", "4.8 0.5", "EOL" ), ncpu=2, debug=TRUE)
out$specMat <- specMat.new


Get the data matrix

Before exporting, in order to make all spectra comparable with each other, we have to account for variations of the overall concentrations of samples. In NMR metabolomics, total intensity normalization (called Constant Sum Normalization, CSN) is often used so that all spectra correspond to the same overall concentration. It simply consists in normalizing the total intensity of each individual spectrum to the same value.
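As an illustration of what this normalization does, here is a minimal sketch on a small hypothetical intensity matrix M (spectra in rows, buckets in columns); this is not the package implementation:

M <- matrix(runif(24), nrow=4)               # hypothetical intensity matrix (4 spectra)
M.csn <- M / rowSums(M) * mean(rowSums(M))   # rescale each spectrum to the same total intensity
rowSums(M.csn)                               # all totals are now identical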

outMat <- Rnmr1D::getBucketsDataset(out, norm_meth='CSN')
outMat[, 1:10]



Bucket Clustering

Note: The following features are integrated into the BioStatFlow web application, the biostatistical analysis companion of NMRProcFlow (the 'Clustering of Variables' analysis in the default workflow).

At the bucketing step (see above), we chose intelligent bucketing [2], meaning that each bucket exactly matches one resonance peak. Thanks to this, the buckets now have a strong chemical meaning, since resonance peaks are the fingerprints of chemical compounds. However, to assign a chemical compound, several resonance peaks are generally required in 1D 1H-NMR metabolic profiling. To generate relevant clusters (i.e. clusters possibly matching chemical compounds), we take advantage of the concentration variability of each compound in a series of samples, relying on the significant correlations that link these buckets together into clusters. Two approaches have been implemented:


Bucket Clustering based on a lower threshold applied on correlations

In this approach an appropriate correlation threshold is applied to the correlation matrix before its cluster decomposition [3]. Moreover, an improvement can be obtained by searching for a trade-off within a tolerance interval of the correlation threshold: from a fixed correlation threshold (cval), the clustering is calculated for the three values (cval-dC, cval, cval+dC), where dC is the tolerance interval of the correlation threshold. From these three sets of clusters, we establish a merger according to the following rules: 1) if a large cluster is broken, we keep the two resulting clusters; 2) if a small cluster disappears, the initial cluster is conserved. Generally, a tolerance interval between 0.002 and 0.01 gives a good trade-off.

cval=0 => threshold automatically estimated

options(warn=-1)
clustcor <- Rnmr1D::getClusters(outMat, method='corr', cval=0, dC=0.003, ncpu=2)
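The threshold actually used and the number of resulting clusters can then be inspected; vcrit is assumed here to hold the estimated criterion value (it is used below, for the HCA case, in the loadings plot title):

clustcor$vcrit               # estimated correlation threshold (assumed component)
length(clustcor$clusters)    # number of clusters found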


Bucket Clustering based on a hierarchical tree of the variables (buckets) generated by a Hierarchical Clustering Analysis (HCA)

In this approach a Hierarchical Clustering Analysis (HCA, hclust) is applied to the data after computing a distance matrix ("euclidean" by default). Then the tree resulting from hclust is cut (cutree) into several groups by specifying the cut height(s). To find the best cut value, the cut height is chosen i) by testing several values equally spaced within a given range of the cut height, then ii) by keeping the one that gives the most clusters while including most of the bucket variables. Otherwise, a cut value has to be specified by the user (vcutusr).

vcutusr=0 => cut value automatically estimated

options(warn=-1)
clusthca <- Rnmr1D::getClusters(outMat, method='hca', vcutusr=0.12)
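The underlying idea, though not the getClusters implementation itself, can be sketched with base R: compute a distance matrix between the bucket variables, build the hierarchical tree, and cut it at a given height:

d    <- dist(t(outMat))                       # distances between bucket variables (columns)
tree <- hclust(d)                             # hierarchical tree of the buckets
grp  <- cutree(tree, h=0.5*max(tree$height))  # cut at an arbitrary height (illustration only)
head(table(grp))                              # sizes of the first few resulting groups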



Clustering results

The getClusters function returns a list containing several components, in particular:

clustcor$clustertab[1:20, ]

clustcor$clusters$C7      # same as clustcor$clusters[['C7']]


Based on these clusters, it is possible to find candidate compounds by querying online spectral databases.


Comparison of the clusterings resulting from the 'hca' and 'corr' methods


The 'corr' approach produces fewer clusters, but the correlations between all buckets within a cluster are guaranteed to be greater than or equal to the chosen threshold. In contrast, the 'hca' approach produces many more clusters, especially of small size (2 or 3 buckets), which may result from aggregates whose bucket correlations span a wider range around the threshold.
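This can be checked numerically on the objects computed above, for instance by comparing the number of clusters and their size distributions:

c(corr=length(clustcor$clusters), hca=length(clusthca$clusters))
summary(sapply(clustcor$clusters, length))
summary(sapply(clusthca$clusters, length))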


Criterion curves

g1 <- ggplotCriterion(clustcor)
#ggplotPlotly(g1, width=820, height=400)
g1
g2 <- ggplotCriterion(clusthca)
#ggplotPlotly(g2, width=820, height=400)
g2

Clusters distribution

These plots give us an insight into the distribution of cluster sizes:

layout(matrix(1:2, 1, 2,byrow = TRUE))

hist(simplify2array(lapply(clustcor$clusters, length)), 
     breaks=20, main="CORR", xlab="size", col="darkcyan")
mtext("clusters size distribution", side = 3)

hist(simplify2array(lapply(clusthca$clusters, length)), 
     breaks=20, main="HCA", xlab="size", col="darkcyan")
mtext("clusters size distribution", side = 3)

We see that they cover three orders of magnitude (log scale) for the CORR method and four for the HCA method.

g3 <- ggplotClusters(outMat,clustcor)
#ggplotPlotly(g3, width=820, height=400)
g3
g4 <- ggplotClusters(outMat,clusthca)
#ggplotPlotly(g4, width=820, height=400)
g4


PCA

PCA is a multivariate analysis performed without prior knowledge of the experimental design. In this way, it allows us to see the dataset as it is, projected onto the two principal components where the data variance is maximal.

pca <- prcomp(outMat, retx=TRUE, scale.=TRUE, rank.=2)
sd <- pca$sdev
eigenvalues <- sd^2
evnorm <- (100*eigenvalues/sum(eigenvalues))[1:10]


Plot PCA Scores

g5 <- ggplotScores(pca$x, 1, 2, groups=out$samples$Treatment, EV=evnorm , gcontour="polygon")
#ggplotPlotly(g5, width=820, height=450)
g5


Plot PCA Loadings (based on the HCA method)

Having calculated the clustering in the previous step (see above), we can superimpose it on the loadings plot in order to view its distribution and thus its efficiency. Here we choose the clusters based on the HCA method. We can see that the clusters are located where the variance of the buckets is maximal, i.e. mainly at the ends of the first principal component (PC1).

g6 <- ggplotLoadings(pca$rotation, 1, 2, associations=clusthca$clustertab, EV=evnorm, main=sprintf("Loadings - Crit=%s",clusthca$vcrit), gcontour="ellipse" )
#ggplotPlotly(g6, width=820, height=650)
g6


Plot loadings with only the merged variables for each cluster (based on their average)

outMat.merged <- Rnmr1D::getMergedDataset(outMat, clusthca, onlycluster=TRUE)
pca.merged <- prcomp(outMat.merged, retx=TRUE, scale.=TRUE, rank.=2)
g7 <- ggplotLoadings(pca.merged$rotation, 1, 2, associations=NULL, EV=evnorm)
#ggplotPlotly(g7, width=820, height=650)
g7

Quod erat demonstrandum!




Low-level functionalities

The Rnmr1D package also provides a set of low-level functions for processing spectra one by one. It accepts raw data from several vendors, namely Bruker GmbH, Agilent Technologies (Varian), JEOL Ltd and RS2D. Moreover, the nmrML format is also supported [4].

Here is an example: preprocessing one of the raw spectra taken from the example dataset provided. The term preprocessing designates here the transformation of the NMR spectrum from the time domain to the frequency domain, including the Fast Fourier Transform (FFT) and the phase correction.

data_dir <- system.file("extra", package = "Rnmr1D")
RAWDIR <- file.path(data_dir, "CD_BBI_16P02")

We first initialize the list of preprocessing parameters. The raw spectrum is a FID (Free Induction Decay, in the time domain) and was acquired on a Bruker instrument. The processing consists in applying a line broadening (LB=0.3), then zero filling (ZFFAC=4), before applying the FFT (Fast Fourier Transform). Finally, a phase correction has to be applied (zero order, OPTPHC0, and/or first order, OPTPHC1), followed by a calibration of the ppm scale (TSP=TRUE).

procParams <- Spec1rProcpar
procParams$LOGFILE <- ""
procParams$VENDOR <- 'bruker'
procParams$INPUT_SIGNAL <- 'fid'
procParams$LB <- 0.3
procParams$ZEROFILLING <- TRUE
procParams$ZFFAC <- 4
procParams$BLPHC <- TRUE
procParams$OPTPHC1 <- FALSE
procParams$TSP <- TRUE

We generate the metadata from the list of raw spectra, namely the samples table and the list of selected raw spectra:

metadata <- generateMetadata(RAWDIR, procParams)
metadata

Spec1rDoProc is the function that preprocesses a single raw spectrum at a time. We take the first raw spectrum (FID):

ID <- 1
ACQDIR <- metadata$rawids[ID,1]
spec <- Spec1rDoProc(Input=ACQDIR,param=procParams)

It returns a list containing the following components:

ls(spec)

Then, we plot the spectrum in the frequency domain (ppm) over the full range:

plot( spec$ppm, spec$int, type="l", col="blue", 
                xlab="ppm", ylab="Intensities", 
                xlim=c( spec$pmax, spec$pmin ), ylim=c(0, max(spec$int/100)) )
legend("topleft", legend=metadata$samples[ID,1])

Zoom in on 3 areas of the spectrum

layout(matrix(c(1,1,2,3), 1, 4, byrow = TRUE))
plot( spec$ppm, spec$int, type="l", col="blue", xlab="ppm", xlim=c( 8, 6 ), 
      ylim=c(0, max(spec$int/1500)), ylab="Intensities" )
legend("topleft", legend=metadata$samples[ID,1])
plot( spec$ppm, spec$int, type="l", col="blue", xlab="ppm", xlim=c( 4.9, 4.7 ),
      ylim=c(0, max(spec$int/10)), ylab="" )
plot( spec$ppm, spec$int, type="l", col="blue", xlab="ppm", xlim=c( 0.1, -0.1 ),
      ylim=c(0, max(spec$int/150)), ylab="" )

You can also try the small online application at https://pmb-bordeaux.fr/nmrspec/.



References

[1] Jacob, D., Deborde, C., Lefebvre, M., Maucourt, M. and Moing, A. (2017) NMRProcFlow: A graphical and interactive tool dedicated to 1D spectra processing for NMR-based metabolomics, Metabolomics 13:36. https://pubmed.ncbi.nlm.nih.gov/28261014/

[2] De Meyer, T., Sinnaeve, D., Van Gasse, B., Tsiporkova, E., Rietzschel, E. R., De Buyzere, M. L., et al. (2008). NMR-based characterization of metabolic alterations in hypertension using an adaptive, intelligent binning algorithm. Analytical Chemistry 80(10). https://pubmed.ncbi.nlm.nih.gov/18419139/

[3] Jacob D., Deborde C. and Moing A. (2013). An efficient spectra processing method for metabolite identification from 1H-NMR metabolomics data. Analytical and Bioanalytical Chemistry 405(15). https://pubmed.ncbi.nlm.nih.gov/23525538/

[4] Schober D, Jacob D, Wilson M, Cruz JA, Marcu A, Grant JR, Moing A, Deborde C, de Figueiredo LF, Haug K, Rocca-Serra P, Easton J, Ebbels TMD, Hao J, Ludwig C, Günther UL, Rosato A, Klein MS, Lewis IA, Luchinat C, Jones AR, Grauslys A, Larralde M, Yokochi M, Kobayashi N, Porzel A, Griffin JL, Viant MR, Wishart DS, Steinbeck C, Salek RM, Neumann S. (2018). nmrML: A Community Supported Open Data Standard for the Description, Storage, and Exchange of NMR Data. Analytical Chemistry. https://pubmed.ncbi.nlm.nih.gov/29035042/


