```r
library(campsis)
library(tictoc)
library(ggplot2)
library(dplyr)
```
This vignette shows how Campsis can take advantage of several CPU cores to run your simulations more efficiently.
```r
model <- model_suite$pk$`2cpt_fo_mm`
dataset <- Dataset(1000) %>%
  add(Bolus(time=0, amount=100, ii=12, addl=10)) %>%
  add(Observations(times=0:(11*12)))
```
Sequential run:
```r
tic()
results <- simulate(model=model, dataset=dataset, dest="mrgsolve", seed=1,
                    outfun=~PI(.x, output="CONC"), replicates=100)
toc()
```
Using 6 cores:
```r
settings <- Settings(Hardware(cpu=6, replicate_parallel=TRUE))
```
```r
tic()
results <- simulate(model=model, dataset=dataset, dest="mrgsolve", seed=1,
                    outfun=~PI(.x, output="CONC"), replicates=10, settings=settings)
toc()
```
```r
vpcPlot(results)
```
Since there is no uncertainty on the parameters and 1000 patients are simulated in each replicate, the confidence intervals around the percentiles are very narrow.
Instead of giving a fixed dose, let's make the dose dynamic using a `DOSE` covariate:
```r
dataset_ <- dataset %>%
  add(Covariate("DOSE", 0)) %>%
  add(DoseAdaptation("DOSE"))
```
Let's create one scenario per dose to be tested:
```r
scenarios <- Scenarios()
for (dose in c(0.5, 1, 5, 10, 50, 100, 500, 1000, 5000, 10000, 50000, 100000)) {
  scenarios <- scenarios %>%
    add(Scenario(name=paste0("DOSE=", dose),
                 dataset=dataset_ %>% replace(Covariate("DOSE", dose))))
}
```
Sequential run:
```r
tic()
results <- simulate(model=model, dataset=dataset_, dest="mrgsolve", seed=1, scenarios=scenarios)
toc()
```
Using 6 cores (plan is auto-setup):
```r
settings <- Settings(Hardware(cpu=6, scenario_parallel=TRUE))
```
```r
tic()
results <- simulate(model=model, dataset=dataset_, dest="mrgsolve", seed=1,
                    scenarios=scenarios, settings=settings)
toc()
```
Using 6 cores (plan already set up from the previous execution):
```r
settings <- Settings(Hardware(cpu=6, scenario_parallel=TRUE, auto_setup_plan=FALSE))
```
```r
tic()
results <- simulate(model=model, dataset=dataset_, dest="mrgsolve", seed=1,
                    scenarios=scenarios, settings=settings)
toc()
```
Plot results:
```r
# Re-arrange scenarios by natural order
results <- results %>%
  mutate(SCENARIO=factor(SCENARIO, levels=unique(SCENARIO)))

shadedPlot(results, "CONC", "SCENARIO") +
  facet_wrap(~SCENARIO, ncol=4, scales="free_y") +
  guides(colour="none", fill="none")
```
In some particular cases, when the number of subjects is large and/or the dataset is complex, it may be worthwhile to export the dataset in parallel. Let's illustrate this feature using a dataset that contains a large number of subjects and treatment IOV (which is costly to export).
```r
dataset_ <- dataset %>%
  setSubjects(24000) %>%
  add(IOV("IOV_KA", distribution=NormalDistribution(mean=0, sd=0.1))) # This step is costly
```
```r
model_ <- model %>% replaceAll("ETA_KA", "ETA_KA + IOV_KA")
```
Sequential run:
```r
tic()
settings <- Settings(Hardware(slice_size=4000))
results <- simulate(model=model_, dataset=dataset_, dest="mrgsolve", seed=1, settings=settings)
toc()
```
Using 6 cores (plan is auto-setup):
```r
settings <- Settings(Hardware(cpu=6, dataset_parallel=TRUE, dataset_slice_size=4000, slice_size=4000))
```
```r
tic()
results <- simulate(model=model_, dataset=dataset_, dest="mrgsolve", seed=1, settings=settings)
toc()
```
Using 6 cores (plan already set up from the previous execution):
```r
settings <- Settings(Hardware(cpu=6, dataset_parallel=TRUE, dataset_slice_size=4000, slice_size=4000,
                              auto_setup_plan=FALSE))
```
```r
tic()
results <- simulate(model=model_, dataset=dataset_, dest="mrgsolve", seed=1, settings=settings)
toc()
```
Plot results:
```r
shadedPlot(results, "CONC")
```
It may be useful to simulate slices in parallel when the simulation time per individual is high (e.g. a long simulation period). To illustrate this, let's dose and observe a drug over a 6-month period.
```r
dataset_ <- Dataset(200) %>%
  add(Bolus(time=0, amount=100, ii=12, addl=364)) %>%
  add(Observations(times=0:(365*12)))
```
Sequential run (`rxode2`):
```r
tic()
results <- simulate(model=model, dataset=dataset_, dest="rxode2", seed=1)
toc()
```
Sequential run (`mrgsolve`):
```r
tic()
results <- simulate(model=model, dataset=dataset_, dest="mrgsolve", seed=1)
toc()
```
Using 6 cores (`rxode2`, plan is auto-setup):
```r
settings <- Settings(Hardware(cpu=6, slice_parallel=TRUE))
```
```r
tic()
results <- simulate(model=model, dataset=dataset_, dest="rxode2", seed=1, settings=settings)
toc()
```
Using 6 cores (`mrgsolve`, plan is auto-setup):
```r
settings <- Settings(Hardware(cpu=6, slice_parallel=TRUE))
```
```r
tic()
results <- simulate(model=model, dataset=dataset_, dest="mrgsolve", seed=1, settings=settings)
toc()
```
Please note that the default slice size for `mrgsolve` is 500, which means that only 1 core was used in the previous example.
For `rxode2`, which works with a much smaller default slice size of 6 subjects, parallelising slices may be worthwhile for long simulations, as demonstrated here.
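If you want `mrgsolve` to also benefit from slice parallelisation in this example, the slice size can be reduced explicitly through the `slice_size` argument of `Hardware`. The snippet below is a minimal sketch; the value of 50 subjects per slice is an arbitrary choice for this 200-subject dataset, not a recommendation.

```r
# Reduce the slice size so that the 200 subjects are split into several slices
# (50 subjects per slice -> 4 slices distributed over the available cores).
# The value 50 is only an illustrative choice.
settings <- Settings(Hardware(cpu=6, slice_parallel=TRUE, slice_size=50))

tic()
results <- simulate(model=model, dataset=dataset_, dest="mrgsolve", seed=1, settings=settings)
toc()
```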
As you may have seen in the examples above, the argument `auto_setup_plan` is `TRUE` by default. This means that Campsis will set up your hardware plan every time you call the `simulate` method. This comes at a cost, since the setup phase can take some time.
For repeated simulations, it may be useful to set up your plan manually, once and for all. To do so, call `setupPlanDefault` on your settings or hardware object:
```r
settings <- Settings(Hardware(cpu=2, replicate_parallel=TRUE, auto_setup_plan=FALSE))
setupPlanDefault(settings)
```
If you do so, don't forget to set the argument `auto_setup_plan` to `FALSE` when you pass these settings to the `simulate` method; otherwise, your plan will be set up again.
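As an illustration of this workflow (a sketch based on the objects defined earlier in this vignette), the plan can be set up once and then reused across several calls to `simulate`:

```r
# Set up the plan once (2 cores in this example)
settings <- Settings(Hardware(cpu=2, replicate_parallel=TRUE, auto_setup_plan=FALSE))
setupPlanDefault(settings)

# Reuse the same plan for repeated simulations; auto_setup_plan=FALSE
# prevents the plan from being re-created at each call
results1 <- simulate(model=model, dataset=dataset, dest="mrgsolve", seed=1,
                     replicates=10, settings=settings)
results2 <- simulate(model=model, dataset=dataset, dest="mrgsolve", seed=2,
                     replicates=10, settings=settings)
```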
To go back to sequential execution (1 CPU core), you can call:
```r
setupPlanSequential()
```
The targets package is a Make-like pipeline tool for statistics and data science in R. With targets, you can maintain a reproducible workflow without repeating yourself. It is the ideal pipeline tool for medium to large simulation projects with Campsis.
This package supports parallelisation and can therefore complete your simulation workflow faster. The following script (`_targets.R`) gives a brief overview of how to run several simulation targets in parallel.
```r
library(targets)
source("R/my_script.R")

# Packages loaded by targets
packages <- c("campsis", "progressr", "future")
tar_option_set(packages=packages, storage="worker", retrieval="worker")

# Prepare 3 workers
future::plan(future::multisession, workers=3)

list(
  tar_target(
    model,
    model_suite$pk$`1cpt_zo_abs_lag`,
    deployment="main"
  ),
  tar_target(
    dataset,
    makeDataset(subjects=1000, dose=1000),
    deployment="main"
  ),
  tar_target(
    results1,
    runSimulation(model=model, dataset=dataset, replicates=10, seed=1),
    deployment="worker"
  ),
  tar_target(
    results2,
    runSimulation(model=model, dataset=dataset, replicates=10, seed=2),
    deployment="worker"
  ),
  tar_target(
    results3,
    runSimulation(model=model, dataset=dataset, replicates=10, seed=3),
    deployment="worker"
  )
)
```
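The functions `makeDataset()` and `runSimulation()` are user-defined wrappers sourced from `R/my_script.R` (included in the downloadable project). As a rough idea of what such wrappers could look like, here is a hypothetical sketch; the actual file shipped with the project may differ:

```r
# Hypothetical sketch of R/my_script.R (the real file in the downloadable project may differ).
# In the targets pipeline, packages are loaded through tar_option_set(); the library()
# calls below are only needed for interactive use.
library(campsis)
library(dplyr)

makeDataset <- function(subjects, dose) {
  Dataset(subjects) %>%
    add(Bolus(time=0, amount=dose, ii=12, addl=10)) %>%
    add(Observations(times=0:(11*12)))
}

runSimulation <- function(model, dataset, replicates, seed) {
  simulate(model=model, dataset=dataset, dest="mrgsolve", seed=seed, replicates=replicates)
}
```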
This small project can be downloaded by clicking the link below.
```r
xfun::embed_dir('resources/Targets_Squeletton_Parallel/', text="Download targets squeletton")
```
Running this project is straightforward. Open an R console in your targets folder and type:
```r
library(targets)
tar_make_future(workers=3)
```