benchmark_httk: Assess the current performance of httk relative to historical benchmarks

View source: R/benchmark_httk.R


Assess the current performance of httk relative to historical benchmarks

Description

The function performs a series of "sanity checks" and predictive performance benchmarks so that the impact of changes to the data, models, and implementation of the R package can be tested. Plots can be generated showing how the performance of the current version compares with past releases of httk.

Usage

benchmark_httk(
  basic.check = TRUE,
  calc_mc_css.check = TRUE,
  in_vivo_stats.check = TRUE,
  tissuepc.check = TRUE,
  suppress.messages = TRUE,
  make.plots = TRUE
)
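
A short usage sketch, based only on the arguments documented below (the choice of disabled checks and the note on runtime are illustrative, not documented behavior):

# Run all checks without generating plots; the full run can take a while:
bench <- benchmark_httk(make.plots = FALSE, suppress.messages = TRUE)

# Restrict the run to the basic unit and chemical-count checks:
basic.only <- benchmark_httk(calc_mc_css.check = FALSE,
                             in_vivo_stats.check = FALSE,
                             tissuepc.check = FALSE,
                             make.plots = FALSE)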

Arguments

basic.check

Whether to run the basic checks, including uM and mg/L units for calc_analytic_css, calc_mc_css, and solve_pbtk, as well as the number of chemicals with sufficient data to run the steady-state model (defaults to TRUE)

calc_mc_css.check

Whether to check the Monte Carlo sample. A comparison of the output of calc_mc_css to the SimCyp outputs reported in the Wetmore et al. (2012, 2015) papers is performed. A comparison between the output of calc_analytic_css (no Monte Carlo) and the median of the output of calc_mc_css is also performed. (defaults to TRUE)

in_vivo_stats.check

Whether to compare the outputs of calc_mc_css and calc_tkstats to in vivo measurements of Css, AUC, and Cmax collected by Wambaugh et al. (2018). (defaults to TRUE)

tissuepc.check

Whether to compare the tissue-specific partition coefficient predictions from the calibrated Schmitt (2008) model to the in vivo data-derived estimates compiled by Pearce et al. (2017). (defaults to TRUE)

suppress.messages

Whether or not output messages are suppressed (defaults to TRUE)

make.plots

Whether current benchmarks should be plotted with historical performance (defaults to TRUE)

Details

Historically, some refinements made to one aspect of httk have unintentionally impacted other aspects. Most notably, errors have occasionally been introduced with respect to units (v1.9, v2.1.0). This benchmarking tool is intended to reduce the chance of such errors occurring in the future.

Past performance was retroactively evaluated by manually installing previous versions of the package from https://cran.r-project.org/src/contrib/Archive/httk/ and then adding the code for benchmark_httk at the command line interface.

The basic tests are important – if the output units for key functions are wrong, not much else can be right. Past unit errors were linked to incorrect unit conversions made within individual functions. Since the usage of convert_units became standard throughout httk, unit problems are hopefully less likely.
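
A minimal sketch of the kind of unit check run by basic.check, assuming the calc_analytic_css() and get_physchem_param() interfaces of recent httk releases (the example chemical is arbitrary):

library(httk)

chem <- "Bisphenol A"
css.uM   <- calc_analytic_css(chem.name = chem, output.units = "uM",
                              suppress.messages = TRUE)
css.mgpL <- calc_analytic_css(chem.name = chem, output.units = "mg/L",
                              suppress.messages = TRUE)
MW <- get_physchem_param(param = "MW", chem.name = chem)

# mg/L = uM * MW / 1000, so this ratio should be 1 if the units are consistent:
signif(css.mgpL / css.uM * 1000 / MW, 4)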

There are two Monte Carlo tests. The first compares the calc_mc_css 95th-percentile steady-state plasma concentrations (Css) for a 1 mg/kg/day exposure against the Css values calculated by SimCyp and reported in Wetmore et al. (2012, 2015). These have gradually diverged as the assumptions in httk have shifted to better describe non-pharmaceutical, commercial chemicals. The second compares the median of the calc_mc_css output against calc_analytic_css run without Monte Carlo; the two are expected to be similar.
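
A hedged sketch of the second (noMC) comparison, assuming the which.quantile and output.units arguments of calc_mc_css() behave as in recent releases; the chemical list is illustrative only:

chems <- c("Bisphenol A", "Caffeine")

analytic <- sapply(chems, function(x)
  calc_analytic_css(chem.name = x, output.units = "uM",
                    suppress.messages = TRUE))
mc.median <- sapply(chems, function(x)
  calc_mc_css(chem.name = x, which.quantile = 0.5, output.units = "uM",
              suppress.messages = TRUE))

# Root mean squared log10 error (RMSLE) between the two sets of Css predictions:
sqrt(mean((log10(mc.median) - log10(analytic))^2))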

The in vivo tests are in some ways the most important, as they establish the overall predictive ability of httk for Cmax, AUC, and Css. The in vivo statistics are currently based on comparisons to the in vivo data compiled by Wambaugh et al. (2018). We see that when the tissue partition coefficient calibrations were introduced in v1.6, the overall predictability for in vivo endpoints was reduced (increased RMSLE). If this phenomenon continues as new in vivo evaluation data become available, we may need to revisit whether evaluation against experimentally derived partition coefficients can actually be used for calibration, or merely for establishing confidence intervals.
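
As a rough illustration of this kind of comparison, the sketch below assumes the calc_tkstats() interface of recent httk releases; the "observed" numbers are hypothetical placeholders, not values from Wambaugh et al. (2018):

pred <- calc_tkstats(chem.name = "Bisphenol A",
                     days = 10, daily.dose = 1, doses.per.day = 1,
                     suppress.messages = TRUE)
pred  # predicted summary statistics, including AUC and peak (Cmax) concentration

# RMSLE between predicted and observed values (hypothetical numbers):
predicted <- c(1.2, 0.8, 3.5)
observed  <- c(1.0, 1.1, 2.9)
sqrt(mean((log10(predicted) - log10(observed))^2))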

The partition coefficient tests provide an important check of the httk implementation of the Schmitt (2008) model for tissue:plasma equilibrium distribution. These predictions rely heavily on an accurate description of tissue composition and on the ability to predict the ionization state of the compounds being modeled.
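
The tissue-level predictions being checked can also be generated directly; a minimal sketch, assuming predict_partitioning_schmitt() accepts a chemical name as in recent releases:

# Tissue partition coefficients from the httk implementation of the
# Schmitt (2008) model, one element per tissue:
pcs <- predict_partitioning_schmitt(chem.name = "Bisphenol A")
head(pcs)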

Value

A named list whose elements depend on the selected checks:

basic
    A list with four metrics:
    N.steadystate -- Number of chemicals with sufficient data for steady-state IVIVE
    calc_analytic.units -- Ratio of mg/L to uM * 1000 / molecular weight; should be 1
    calc_mc.units -- Same ratio for calc_mc_css; should be 1
    solve_pbtk.units -- Same ratio for solve_pbtk; should be 1

calc_mc_css
    A list with four metrics:
    RMSLE.Wetmore -- Root mean squared log10 error (RMSLE) in predicted Css between literature values (SimCyp, Wetmore et al. 2012, 2015) and calc_mc_css
    N.Wetmore -- Number of chemicals in the Wetmore comparison
    RMSLE.noMC -- RMSLE between calc_analytic_css and calc_mc_css
    N.noMC -- Number of chemicals in the noMC comparison

in_vivo_stats
    A list with two metrics:
    RMSLE.InVivoCss -- RMSLE between the predictions of calc_analytic_css and in vivo estimates of Css
    N.InVivoCss -- Number of chemicals in the comparison

units.plot
    A ggplot2 figure showing the units tests for various functions. Output is generated in mg/L and uM, and the ratio mg/L / uM * 1000 / MW is calculated. If the units are correct, the ratio should be 1 (within the precision of the functions -- usually four significant figures).

invivo.rmsle.plot
    A ggplot2 figure comparing model predictions to in vivo measured values. The output is the root mean squared log10 error for parameters estimated by the package.

model.rmsle.plot
    A ggplot2 figure comparing the values from various functions against values predicted by other models (chiefly SimCyp predictions from Wetmore et al. 2012 and 2015). The output is the root mean squared log10 error for parameters estimated by the package.

count.plot
    A ggplot2 figure showing the count of chemicals for various functions. The output is a count of the chemicals available for each of the parameters estimated by, and used for benchmarking, the package.
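
A short sketch of inspecting the returned list, assuming the element names match the Value entries above and that all checks were enabled:

bench <- benchmark_httk(make.plots = FALSE)

bench$basic$N.steadystate            # chemicals available for steady-state IVIVE
bench$calc_mc_css$RMSLE.Wetmore      # RMSLE vs. Wetmore et al. (2012, 2015) Css values
bench$in_vivo_stats$RMSLE.InVivoCss  # RMSLE vs. in vivo Css estimates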

Author(s)

John Wambaugh


