library(NMdata)
## library(devtools)
## load_all("~/wdirs/NMdata")
NMdataConf(as.fun="data.table")

##knitr::opts_chunk$set(dev = "cairo_pdf")
knitr::opts_chunk$set(
                      collapse = TRUE
                     ,comment = "#>"
                     ,fig.width=7
                     ,cache=FALSE
                  )

## NMdataConf(dir.psn="/opt/psn")

library(data.table)
library(knitr)
library(scales)
library(NMcalc)

Parameter tables are a basic and essential look into a model estimate. Due to Nonmem's flexible model definition approach, getting an informative parameter table can, however, be an involved process.

A manual or even semi-automated approach to relating Nonmem's parameters to interpretable parameter names is laborious to quality check.

NMdata offers two distinct approaches to connecting Nonmem parameters to physiologically interpretable ones. Which one will be most useful depends on your work style and how you organize your control streams.

Notice, this vignette zooms in on how to read, label and format "parameter estimates". It does not deal with reading input and output data (see the NMscanData() vignette). Also, it does not go into how to get a nice print of the table in a report or a slide deck. For that next step, I have personally found R packages flextable and pmtables useful. The scope is just to collect the information.

NMdata contains a handful of functions for reading estimates from Nonmem.

NMreadExt() is central to the topic of this vignette, and NMreadShk() is also used. Another interesting function is NMrelate() which processes the control stream code to automatically identify parameter names and relate them to Nonmem parameters (such as CL to $THETA(1) and $OMEGA(1,1)). Read on, if interested.

Reading Nonmem parameter estimates

Before getting into how to translate the Nonmem parameters, it is worth showing how to simply read the parameter estimates with NMdata. A lot of the relevant information is stored in the .ext file and can be read using NMreadExt().

file.mod <- system.file("examples/nonmem/xgxr133.mod",package="NMdata")

pars.ext <- NMreadExt(file.mod)

This does provide a basic parameter table.

pars.ext[,.(par.name,FIX,value,se)] 

NMreadExt() does return more than that. These are all the columns available:

colnames(pars.ext)

where est is a deprecated copy of the value column.

NMreadExt() can return information other than the parameter estimates. This vignette is about creating parameter tables, so for now, let this be a teaser for what else NMreadExt() provides (which is all the information in .ext files):

names(NMreadExt(file.mod,return="all"))

Retrieve physiologically interpretable labels using parameter section annotations

The above parameter table is not useful to many, maybe not even to the person who wrote the model. The parameter sections of the control stream contain consistently formatted information we can use.

NMreadSection(file.mod)[c("THETA","OMEGA","SIGMA")]
quiet <- NMreadSection(file.mod)[c("THETA","OMEGA","SIGMA")] |> lapply(function(x)cat(paste(paste(x,collapse="\n"),"\n\n")))

NMdata::NMreadParsText() provides a very flexible way to read this information into R. One has to provide the format used in the control stream in a simple way. Take a look at the annotations shown above. Following an optional $THETA/$OMEGA/$SIGMA, each line contains the initial value specification used by Nonmem, and then comments containing a counter, a variable name (symbol), and a label string. $THETA also includes units and transformation coding (log means the physiological parameter is the exponential of the $THETA). $SIGMA is coded more simply, holding only a symbol and a label separated by a dash. We mimic that in the format arguments to NMreadParsText().

partab <- NMreadParsText(file.mod,format="%init;%idx:%symbol;%label[%unit];%trans",
                         format.sigma="%init;%symbol-%label",
                         as.fun="data.table")
partab

Notice a few things here.

Non-alphanumeric characters can be used as delimiters. In this case ; is used as the delimiter most of the time, but not consistently. After the counter a : was used. And in fact, the brackets around the units ([1/h]) are delimiters too. In this case [ is a delimiter on its own, and ] forms a multi-character delimiter together with the following ;.

We did not specify any spaces, and spaces around delimiters are ignored altogether by default. You must use the spaces.split argument if you want spaces to be respected as delimiters.

In fact, we did not specify two distinct formats for $THETA and $OMEGA even though $OMEGA lines do not include units and transformations. NA's are filled in where information is missing. However, you cannot skip delimiters in a line. You can skip contents, but the structure must be consistent. As an example, see how the unit is skipped on THETA(A) - AGEEFF.

format always applies to $THETA and is inherited for $OMEGA if format.omega is not supplied. The format used for $OMEGA is in turn inherited for $SIGMA if format.sigma is not provided. A sketch spelling out all three formats explicitly is shown after these remarks.

In this example, the correlation between two $OMEGA elements is estimated. NMreadParsText() needs the off-diagonal elements in $OMEGA and/or $SIGMA to be on their own lines to connect comments to them. If a diagonal element and an off-diagonal element are on the same line, NMreadParsText() will assign the comments to the diagonal element. It may not be worth adding comments to off-diagonal elements because they are simply covariances of already labeled diagonal elements. A simple postprocessing step can instead be used to label them; the section on identifying parameter names from $PK or $PRED code below shows one way to construct such covariance labels.

See more examples in ?NMreadParsText() to better understand how you can tweak the formats to match your needs.
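As a sketch of spelling out the inheritance explicitly, the same control stream can be read with a separate format for each section. The format.omega string below is an assumption about the $OMEGA comment layout (counter, symbol, and label, but no unit or transformation), so adjust it to match your own annotations.

## Sketch: explicit formats for all three sections. format and format.sigma
## are the same as above; format.omega is an assumed layout.
partab.explicit <- NMreadParsText(file.mod,
                                  format="%init;%idx:%symbol;%label[%unit];%trans",
                                  format.omega="%init;%idx:%symbol;%label",
                                  format.sigma="%init;%symbol-%label",
                                  as.fun="data.table")
partab.explicit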

Merge parameter estimates and parameter labels

We can now format a parameter table with labels by merging with the .ext file:

partab2 <- mergeCheck(pars.ext,
           partab,
           by="parameter",
           all.x=TRUE,
           common.cols="drop.y",
           quiet=TRUE
           )
partab2[,value.trans:=value]
partab2[trans=="log",value.trans:=exp(value)]
partab2[,Estimate:=NMcalc::signif2(value.trans,3)][
   ,RSE:=scales::percent(se/value,accuracy=.1)]
partab.print <- partab2[,.(par.name,symbol,FIX,label,unit,Estimate,RSE=RSE)]
partab.print

The left join we did with mergeCheck includes all parameters from the .ext file. In this case we want to skip all the off-diagonals in $OMEGA and $SIGMA because they are not estimated. Let's just look at the ones that were estimated. In many cases, some fixed parameters can be relevant to include in a parameter table, in which case you need to modify this step (a sketch follows the code below).

partab.print <- partab.print[FIX==0,.(par.name,symbol,label,unit,Estimate,RSE)] 
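If selected fixed parameters should be kept, the filter can be relaxed instead. A minimal sketch, where the SIGMA(1,1) condition is just a hypothetical example of a fixed parameter one might still want to report:

## Sketch: keep estimated parameters plus a (hypothetical) fixed one
partab.keep <- partab2[FIX==0 | par.name=="SIGMA(1,1)",
                       .(par.name,symbol,label,unit,Estimate,RSE)]
partab.keep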

We need to explain that RSE's were calculated on the log scale for the transformed parameters. We might also want to add confidence intervals based on the estimated standard error on the log scale (sketched below). But we are getting close.
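A sketch of such confidence intervals, assuming approximate normality of the estimate on the log scale for the log-transformed parameters (and on the natural scale for the rest):

## Sketch: 95% confidence intervals, back-transformed where the parameter
## was estimated on the log scale
partab2.ci <- copy(partab2)
partab2.ci[,`:=`(lower=value-1.96*se,upper=value+1.96*se)]
partab2.ci[trans=="log",`:=`(lower=exp(value-1.96*se),upper=exp(value+1.96*se))]
partab2.ci[,CI95:=paste0("[",signif2(lower,3),"; ",signif2(upper,3),"]")]
partab2.ci[FIX==0,.(par.name,symbol,Estimate,RSE,CI95)]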

Add shrinkage

Shrinkage estimates can be read using NMreadShk(). Notice that, depending on the Nonmem version, multiple different shrinkage estimates are provided (and fetched by NMreadShk()). Here, we just include the SD-based eta shrinkages.

shk <- NMreadShk(file.mod)

partab.print2 <- mergeCheck(partab.print,
                            shk[,.(par.name,
                                   Pval=signif2(Pval,2),
                                   etabar,
                                   EtaShkSD=percent(EtaShkSD/100))],
                            by="par.name",all.x=TRUE)
partab.print2

Correlation estimates

A simple function addCor() is available to calculate estimated correlations of random effects. addCor() simply applies stats::cov2cor() to get the correlations from the variance-covariance matrices $SIGMA and $OMEGA.

Partab4 <- NMdata::addCor(partab2)
Partab4[FIX==0,.(par.name,symbol,FIX,label,unit,Estimate,RSE=RSE,corr)]

Adding initial values and bounds

Use the function NMreadInits() to break down the initial values, bounds and "FIX" information as shown in the control stream. This can easily be merged onto the parameter table as needed; a sketch follows the output below.

NMreadInits(file.mod)
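As a sketch, the result can be merged onto the labeled parameter table. The by column is an assumption; inspect the printed output above for the actual column names, and reduce the NMreadInits() result to one row per parameter before merging if needed.

## Sketch: merge initial values onto the parameter table. The par.name key
## is an assumption - check the columns returned by NMreadInits() above.
inits <- NMreadInits(file.mod)
partab.inits <- mergeCheck(partab2,inits,by="par.name",all.x=TRUE,
                           common.cols="drop.y",quiet=TRUE)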

Editing information in the parameter table

The problem with the otherwise nice-looking table generated above is that we spotted a typo in one label: a "t" is missing in "Intercompartmental". How are we going to fix that? The model was finalized, so we don't want to edit the control stream. For this specific problem, we could fix it with a simple text replacement, but such patchwork can become messy with a large model. While the method of providing all the details in the control stream may look like a great solution at first, it has problems.

A practical issue is the tedious process of keeping all the candidate models up to date as the information for the parameter tables improves. The parameter table work is often part of a model reporting step that follows model development. Since the control streams are created during model development, the fine-tuning of parameter tables is difficult to integrate into that step. Briefly, we need a way to update the parameter tables after finalizing the control stream.

### dont do this - csv has been edited
## tab.csv <- partab[,.(symbol,label,unit)]
## fwrite(tab.csv,file="derived/partab_offline.csv")
tab.offline <- read.csv("derived/partab_offline.csv")
mergeCheck(partab2[FIX==0],tab.offline,by="symbol",common.cols="drop.x",all.x=T)

Identifying the parameter names from $PK or $PRED code

In the last example, all we used from the control stream was the symbol, i.e. an abbreviated parameter name. The rest was merged in from a table maintained in a csv file. NMrelate() can identify those symbols directly from the $PK or $PRED code, so even the annotated parameter sections are not strictly needed:

labs.auto <- NMrelate(file.mod)
labs.auto[,symbol:=label]
labs.auto[par.type=="OMEGA",symbol:=paste0("BSV.",symbol)]
labs.auto

mergeCheck(labs.auto[,.(par.name,symbol)],tab.offline,by="symbol",all.x=T)
The same approach extends to off-diagonal $OMEGA and $SIGMA elements: below, symbols for the covariance terms are constructed from the NMrelate() output for the diagonal elements before the offline label table is merged onto the estimates.

file2.mod <- system.file("examples/nonmem/xgxr133.mod",package="NMdata")

pars.ext <- NMreadExt(file2.mod)

labs.auto <- NMrelate(file2.mod)
labs.auto[,symbol:=label]
labs.j <- labs.auto[par.type%in%c("OMEGA","SIGMA"),.(par.type.j=par.type,j,symbol.j=symbol)]
labs.ij <- labs.j[,labs.auto[par.type%in%c("OMEGA","SIGMA"),.(i,par.type.i=par.type,symbol.i=symbol)],by=labs.j]
labs.ij <- labs.ij[par.type.i==par.type.j & (i!=j)]
labs.ij[par.type.i==par.type.j,par.type:=par.type.i]
labs.ij[,symbol:=sprintf("Cov.%s-%s",symbol.i,symbol.j)]
labs.ij[,par.name:=sprintf("%s(%d,%d)",par.type.i,i,j)]
labs.ij[,label:=symbol]
cols.drop <- cc(par.type.j, symbol.j, par.type.i, symbol.i)
labs.ij[,(cols.drop):=NULL]

labs.auto[par.type=="OMEGA"&i==j,symbol:=paste0("BSV.",symbol)]
labs.auto2 <- rbind(labs.auto,labs.ij,fill=T)


### overwrite auto-gen labels with hard-coded ones
tab.labs.diag <- mergeCheck(
    ## labs.auto[par.type=="THETA",!("label")]
    labs.auto2[,!("label")]
    ## labs.auto[par.type=="THETA"|(i==j),!("label")]
   ,
    tab.offline
   ,by="symbol"
   ,all.x=T)

tab.labs.offdiag <- labs.auto2[par.type!="THETA"&i!=j]

tab.all.labs <- rbind(
    tab.labs.diag
                      ## ,tab.labs.offdiag
                     ,fill=T)

## merge onto estimates (ext)
pars.aug <- mergeCheck(pars.ext[FIX==0],
                       tab.all.labs[!grepl("SIGMA",symbol)]
                      ,
                       by="par.name",all.x=TRUE,common.cols="drop.y")

### back transformations
pars.aug[,value.trans:=value]
pars.aug[grepl("LTV",symbol),value.trans:=exp(value)]
## pars.aug


pars.aug[,.(par.name,symbol,FIX,label,unit,Estimate=value.trans,RSE=se/value)]

Handling multiple models at once

Any of the code above can be run on multiple models at once. NMdata functions generally return tables including a model column for this purpose. Let's get a quick overview of a few models.

files <- system.file(file.path("examples/nonmem",c("xgxr132.mod","xgxr133.mod")),package="NMdata")
labs.auto <- NMrelate(files)
labs.auto[,symbol:=label]
labs.auto[par.type=="OMEGA",symbol:=paste0("BSV.",symbol)]
labs.auto <- labs.auto[!grepl("SIGMA",symbol)]

NMreadExt(files,return="pars")[FIX==0]|>
    mergeCheck(labs.auto[,.(model,par.name,symbol)],by=c("model","par.name"),all.x=T) |> 
    mergeCheck(tab.offline,by="symbol",all.x=T) |>
    dcast(par.name+symbol+label+unit~model,value.var="value")

