```r
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.align = "center",
  fig.width = 5.5
)
```
Applied researchers and statisticians frequently use Monte-Carlo simulation techniques in a variety of ways, for example to estimate required sample sizes [@Arnold2011], to formally compare different statistical methods [@Morris2019; @Denz2023], to help design and plan clinical trials [@Kimko2002; @Nance2024], or for teaching purposes [@Sigal2016; @Fox2022]. The main reason for their broad usage is that the researcher has full control over the true data generation process (DGP). In general, the researcher defines a DGP appropriate to the situation and generates multiple datasets from it. Some statistical analysis technique is then applied to each dataset and the results are analyzed. A crucial step in every kind of Monte-Carlo simulation study is thus the generation of these datasets.
Depending on the DGP required by the researcher, this step may become very difficult and time-consuming. For example, some Monte-Carlo simulations require the generation of complex longitudinal data with variables of different types that are causally related in various ways [@Asparouhov2020]. Possible data types include continuous variables, categorical variables, count variables and time-to-event variables, all of which require different parametrizations and simulation strategies. If interactions or non-linear relationships between these variables are required, simulating data from the DGP becomes even more challenging.
Generating any artificial data requires (1) a formal description of the DGP, (2) knowledge of an algorithm that may be used to generate data from this DGP, and (3) the ability to create a software application implementing that algorithm. Although many statisticians may have no problem with these steps, this might not be the case for more applied researchers. The third step in particular may require a high level of programming expertise, because the resulting program has to be validated extensively while also being computationally efficient enough to allow potentially thousands of datasets to be generated in a reasonable amount of time. It also has to be flexible enough to allow the user to easily make changes to the DGP to be useful in most cases [@Sofrygin2017]. A comprehensive software application that automates most of the required work would therefore be of great benefit to the scientific community.
In this article we present the `simDAG` R package, which offers an easy-to-use and consistent framework to generate arbitrarily complex cross-sectional and longitudinal data. The aim of the package is to make all three steps of the data generation process easier by giving users a standardized way to define the desired DGP, which can then be used directly to generate the data without further user input. It does so by requiring the user to define a directed acyclic graph (DAG) with additional information about the associations between the supplied variables [@Pearl2009]. The package was created using the R programming language [@RCT2024] and is available on the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/package=simDAG.
In this package, the user is required to describe the desired DGP as a causal DAG. Formally, a DAG is a mathematical graph consisting of a set of nodes (or vertices) $V$ and a set of edges (or links) $E$ connecting pairs of nodes. As its name suggests, a DAG contains only directed edges and is acyclic, meaning that there are no cycles when following directed paths on the DAG [@Byeon2023]. A causal DAG is a special sort of DAG in which the nodes represent random variables and the edges represent directed causal relationships between these variables [@Pearl2009]. A very simple example containing only three nodes and no time-dependencies is given in Figure 1. The DAG in this figure contains a directed arrow from $A$ to $C$ and from $B$ to $C$. This translates to the assumption that there is a direct causal effect of $A$ on $C$ and of $B$ on $C$, but no direct causal relationship between $A$ and $B$ (due to the absence of an arrow between them).
knitr::include_graphics("./images_v_joss/example_dag.png")
Such DAGs are the cornerstone of the structural approach to causal inference developed by @Pearl2009 and @Spirtes2000. They are used extensively in social research [@Wouk2019], economics [@Imbens2020] and epidemiology [@Byeon2023] to encode causal assumptions about the real underlying DGP of empirical data. For empirical research such graphs are very useful because they give a clear overview of the causal assumptions made by the researchers. By using causal graphical methods such as the backdoor criterion [@Pearl2009] or the frontdoor criterion [@Pearl1995], it is also possible to use such graphs to determine which variables need to be adjusted for in order to obtain unbiased estimates of certain causal effects. The `dagitty` R package directly implements multiple tools for this kind of usage [@Textor2016].
These kinds of DAGs can be formally described using structural equations, which specify how each node is distributed. For example, a general set of structural equations that may be used to describe the DAG in Figure 1 is:
$$
\begin{aligned}
A &\sim f_A(U_A), \\
B &\sim f_B(U_B), \\
C &\sim f_C(A, B, U_C).
\end{aligned}
$$
In these equations, the unspecified functions $f_A$, $f_B$ and $f_C$ describe how exactly the nodes are distributed, possibly conditional on other nodes. The terms $U_A$, $U_B$ and $U_C$ denote random errors or disturbances. If the functions in these structural equations are not specified and some assumption on the probability distribution of the error terms is made, this is equivalent to a non-parametric structural equation model [@Pearl2009; @Sofrygin2017].
To make the generation of data from a DAG possible, however, it is not enough to only specify which variables are causally related to one another. The structural equations now also have to be fully specified. This means that the distribution functions of each node will have to be defined in some way by the user. A popular way to do this is to use regression models and parametric distributions [@Kline2023], but in theory any kind of function may be used, allowing the definition of arbitrarily complex DGPs. Continuing the example from above, we could define the structural equations of the DAG as follows:
$$
\begin{aligned}
A &\sim N(0, 1), \\
B &\sim N(0, 1), \\
C &\sim -2 + A \cdot 0.3 + B \cdot (-2) + N(0, 1).
\end{aligned}
$$
This means that both $A$ and $B$ are independent standard normally distributed variables and that $C$ follows a simple linear regression model based on $A$ and $B$ with an independent normally distributed error term with mean zero. Once all structural equations and distribution functions have been defined, data may be generated from the DAG using a fairly simple algorithm. This algorithm essentially generates data for one node at a time, using only the supplied definitions and the data generated in previous steps. This step-wise method relies on the fact that every DAG can be topologically sorted, which means that there is always an ordering of the nodes such that for every edge $(u_i, u_j)$ between nodes $u_i$ and $u_j$, $u_i$ comes before $u_j$ in the ordering [@Kahn1962].
The generation of the data starts by ordering the nodes of the graph in such a topologically sorted way. This means that nodes in the DAG that have no arrows pointing into them, sometimes called root nodes, are processed first. Data for these kinds of nodes can be generated by sampling from a pre-specified parametric distribution, such as a Gaussian distribution or a beta distribution, or through any other process, such as re-sampling based strategies [@Carsey2014]. Once data for these nodes has been generated, the data for the next node in the topological order will be generated, based on the already generated data. These nodes are called child nodes, because they are dependent on other nodes, which are called their parent nodes [@Byeon2023]. For the example DAG shown earlier, the two possible topological sortings are:
$$ (A, B, C) \quad \text{and} \quad (B, A, C). $$
Here, both $A$ and $B$ are root nodes because they do not have any parents, and $C$ is a child node of both of its parents $A$ and $B$. To generate data for this example using the algorithm described above, one would first generate $n$ random draws from a standard normal distribution for both $A$ and $B$. Next, one would calculate the linear combination of these values as specified by the linear regression model in the earlier Equation and add $n$ random draws from another standard normal distribution to it (representing the error term). In R, this simple example could be simulated using the following code:
```r
set.seed(43)

n <- 100
A <- stats::rnorm(n)
B <- stats::rnorm(n)
C <- -2 + A*0.3 + B*-2 + stats::rnorm(n)
```
Although the manual code required for this example is fairly simple, this is no longer the case for DAGs with more nodes and/or a more complex DGP (for example, one including different data types). The `simDAG` package offers a standardized way to define any possible DAG and the required distribution functions, facilitating a clear and reproducible workflow.
The previous explanations and the given example focused on the simple case of cross-sectional data without any time-dependencies. It is, however, fairly straightforward to include a time-varying structure in any DAG as well, by simply adding a time index to the time-varying nodes and repeating the node for each point in time that should be considered [@Hernan2020]. The proposed package features computationally efficient functions to automate this process for large numbers of time points using a discrete-time simulation approach [@Tang2020]. Although this procedure relies on a discrete time scale, it can be used to generate data on a semi-continuous time scale by using very small steps in time. This is described in more detail in a later Section.
Note also that while causal DAGs imply a specific causal structure, the algorithms and code described here do not require that this causal structure be interpreted as such in the generated data. For example, the structural equations shown earlier state that $A$ and $B$ are direct causes of $C$, but the datasets generated from these equations could also be interpreted as $A$ and $B$ being merely associated with $C$ for unknown reasons. As long as the desired DGP can be described as a DAG, which is almost always the case, this strategy may be used effectively to generate data even for Monte-Carlo studies not concerned with causal inference.
Although the data-generation algorithm described above is appropriate for most applications, it may not be the best choice for validating causal discovery methods, due to the marginal variance of each variable increasing along the order of the topological sorting [@Reisach2021]. Other methods, such as the onion method proposed by @Andrews2024 may be preferable in this particular case.
There are many different software packages that may be used to generate data in the R programming language and in other languages such as Python. It is infeasible to summarize all of them here, so we will focus only on a few that offer functionality similar to the `simDAG` package. The following review is therefore not intended to be exhaustive; we merely aim to show how the existing software differs from the proposed package.
Multiple R packages support the generation of synthetic data from fully specified structural equation models, for example the `lavaan` package [@Rosseel2012], the `semTools` package [@Jorgensen2022] and the `simsem` package [@Pornprasertmanit2021]. However, these packages focus solely on structural equation models with linear relationships and as such do not allow the generation of data with different data types. For example, none of these packages allow the generation of time-to-event data, which is a type of data that is often needed in simulation studies. Specialized R packages such as `survsim` [@Morina2017], `simsurv` [@Brilleman2021], `rsurv` [@Demarqui2024] and `reda` [@Wang2022] may be used to simulate such data instead. Although some of these packages allow the generation of recurrent events, competing events and general multi-state time-to-event data, none of them, unlike the `simDAG` package, support arbitrary mixtures of these data types or time-varying covariates.
Other packages, such as the `simPop` package [@Templ2017] and the `simFrame` package [@Alfons2010], allow the generation of more complex synthetic data structures as well, but are mostly focused on generating data that mimics real datasets. Similarly, the `simtrial` package offers very flexible tools for the generation of randomized controlled trial data, but it would be difficult to use it to generate other types of data. Software directly based on causal DAGs as DGPs also exists. Although it is not stated directly in the package documentation, the `simstudy` package [@Goldfeld2020] also relies on the DAG-based algorithm described earlier. It supports the use of different data types and custom generation functions, but only has partial support for the generation of longitudinal data. Alternatively, the Python library `DagSim` [@Hajj2023] allows users to generate arbitrary forms of data, while also allowing them to supply custom functions for the data generation process. The price for this flexibility, however, is that not many default options are implemented in the library.
Finally, the `simcausal` R package [@Sofrygin2017] is very similar to the `simDAG` package and was in fact a major inspiration for it. Like the `simDAG` package, it is also based on the causal DAG framework. The syntax for defining a DAG is nearly the same, with some differences in how formula objects can and should be specified. Unlike the proposed package, however, the `simcausal` package focuses mostly on generating data for simulation studies dealing with causal inference. As such, it also directly supports the generation of data after performing interventions on the DAG [@Pearl2009]. Although the proposed package lacks such functionality, it is a lot more flexible in terms of what data can be generated. `simDAG` supports the use of arbitrary data generation functions; the definition of interactions, non-linear relationships and mixed model syntax in its formula interface; and categorical input data for nodes. None of these features are present in `simcausal` [@Sofrygin2017].
First, we will introduce the most important functionality of the proposed package by describing the core functions and the usual workflow when employing the package. This includes a detailed description of how to translate the theoretical description of the desired DGP into a `DAG` object, which may then be used to generate data. Afterwards, we will illustrate the capabilities of the package by reproducing the DGPs of multiple real Monte-Carlo simulation studies. Two DGPs describing the generation of cross-sectional data and of longitudinal data with only a few points in time are considered first. Afterwards, an introduction to the generation of more complex longitudinal data utilizing the discrete-time simulation approach is presented. Finally, the package and its potential usefulness are discussed.
The following functions are used in a typical workflow with the `simDAG` R package.
| Function | Description |
| --------- | ----------- |
| `empty_dag()` | Initializes an empty `DAG` object, which should later be filled with information on relevant nodes. `DAG` objects are the most important data structure of this package. How to define and use them is illustrated in more detail below. |
| `node()` and `node_td()` | Can be used to define one or multiple nodes each. These functions are typically used to fill the `DAG` object with information about how the respective node should be generated, e.g., which other nodes it depends on (if any), whether it is time-dependent or not, what kind of data type it should be and the exact structural equation it should follow. `node()` can only be used to define nodes at one specific point in time, while `node_td()` should only be used to define time-varying nodes for discrete-time simulations. |
| `add_node()` or `DAG + node` | Adds the definition made by `node()` or `node_td()` to the `DAG` object. |
| `plot.DAG()` | Directly plots a `DAG` object using the `ggplot2` package [@Wickham2016]. |
| `summary.DAG()` | May be used to print the underlying structural equations of all nodes in a `DAG` object. |
| `sim_from_dag()` | One of the two core simulation functions. Given a fully specified `DAG` object that only includes nodes defined using `node()`, it randomly generates a `data.table` [@Barrett2024] according to the DGP specified by the `DAG`. |
| `sim_discrete_time()` | The second core simulation function. Given a fully specified `DAG` object that includes one or multiple nodes added using the `node_td()` function, and possibly one or multiple nodes added using the `node()` function, it randomly generates data according to the specified DGP using a discrete-time simulation approach. This is described in detail in a later Section. |
| `sim_n_datasets()` | Allows users to directly generate multiple datasets from a single `DAG`, possibly using parallel processing. |
| `sim2data()` | May be used to transform the output produced by the `sim_discrete_time()` function into either the wide, long or start-stop format to make further usage of the generated data easier. |
The package additionally includes multiple functions starting with `node_`. These functions generate data of different types using different specifications. They usually do not have to be called directly by the user; instead, they are specified as `type`s in the `node()` or `node_td()` functions and are only called internally. Some additional convenience functions are also included, but are not discussed further in this article. Instead, we focus on describing the core functionality in detail and refer the interested reader to the official documentation for more information.
Regardless of which kind of data the user wants to generate, it is always necessary to first define a `DAG` object which adequately describes the DGP the user wants to simulate. In most cases this should be done using the following steps:

1. Initialize an empty `DAG` object using the `empty_dag()` function.
2. Define the individual nodes of the DGP using the `node()` or `node_td()` functions.
3. Add these node definitions to the `DAG` using the `+` syntax or the `add_node()` function.

The `empty_dag()` function is very simple, as it does not have any arguments. It is merely used to set up the initial `DAG` object. The actual definition of the nodes should be done using either the `node()` function (for nodes at a single point in time) or the `node_td()` function (for nodes that vary over time), which have the following syntax:
```r
node(name, type, parents=NULL, formula=NULL, ...)
node_td(name, type, parents=NULL, formula=NULL, ...)
```
The arguments are:
- `name`: A character string specifying the name of the node that should be generated, or a character vector containing multiple names. If a character vector with more than one name is supplied, multiple independent nodes with the same definition will be added to the `DAG` object.
- `type`: A character string specifying the type of the node, or any suitable function that can be used to generate data for the node. Further details on supported node types are given in the next Section.
- `parents`: If the node is a child node, this argument should contain the names of its parents, unless a `formula` is supplied instead.
- `formula`: An optional formula object that specifies an additive combination of coefficients and variables, as required by generalized linear models, for example. This argument may be used for most built-in node types and for user-defined functions. It allows the inclusion of internally generated dummy variables (when using categorical variables as parents), interaction terms of arbitrarily high order, cubic terms and arbitrary mixed model syntax for some node types.
- `...`: Additional arguments passed to the function defined by the `type` argument.

For example, the simple DAG that we described earlier may be created using the following code:
library("simDAG") dag <- empty_dag() + node(c("A", "B"), type="rnorm", mean=0, sd=1) + node("C", type="gaussian", formula=~-2 + A*0.3 + B*-2, error=1)
First, an empty `DAG` object is initialized using the `empty_dag()` function, to which nodes are added directly using the `+` syntax. Here, only the `node()` function is required, because all nodes have to be defined for a single point in time only, since this DAG is only supposed to describe cross-sectional data. Additionally, since both $A$ and $B$ have the same structural equation here, only one call to `node()` is needed to define both of these nodes. By setting `type="rnorm"` and leaving both the `parents` and the `formula` arguments at their default values, these nodes are specified as root nodes, for which values will be generated using the `rnorm()` function with the additional arguments passed to it. Because $C$ is supposed to follow a linear regression model, `type="gaussian"` is used here and the structural equation is specified using the `formula` argument.
The result is a `DAG` object. To re-create Figure 1, users may use the associated S3 `plot()` method, which internally uses the `igraph` package [@Csardi2024] to put the nodes into position and the `ggplot2` [@Wickham2016] and `ggforce` [@Pedersen2022] packages for the actual plotting:
library("igraph") library("ggplot2") library("ggforce") plot(dag, layout="as_tree", node_size=0.15, node_fill="grey", node_text_fontface="italic")
The output is not exactly the same as Figure 1 for reasons of space, but it is very similar. Additionally, the underlying structural equations may be printed directly using the associated S3 `summary()` method:
```r
summary(dag)
```
Both of these methods may be useful to check whether the DAG is defined as intended by the user, or to formally describe the DGP in a publication.
Different `type`s of nodes are supported, depending on what kind of node is being specified. If the node is a root node, any function with an argument called `n` that specifies how many observations should be generated may be used. For example, the base R `runif()` function may be used as a type to generate a uniformly distributed node. Other popular choices are `rnorm()` for normally distributed nodes or `rgamma()` for gamma distributed nodes. For convenience, the proposed package also includes implementations for fast Bernoulli trials (`rbernoulli()`), fast sampling from discrete probability distributions (`rcategorical()`) and a function to set a node to a constant value (`rconstant()`).
If the node has one or more parent nodes, any function that generates data based on these nodes and has the arguments `data` (containing the data generated up to this point) and `parents` (a vector with the names of the parent nodes) may be used as `type`. Multiple popular choices are directly implemented in this package, including nodes based on generalized linear models and potentially more complex functions to sample from conditional distributions. Finally, the package also includes two specialized functions that may only be used for discrete-time simulations. These functions are able to generate binary or categorical time-varying covariates, as well as multiple forms of time-to-event data, by repeatedly performing Bernoulli or multinomial trials over time. More details are given in the Section on discrete-time simulation. A brief overview of all implemented node types is given in the following Table; a sketch of a custom child node type is shown after it. Note that when using these node types, the user may either pass the respective function directly to the `type` argument or use a string of the name without the `node_` prefix.
| Node type | Description |
| --------- | ----------- |
| `rbernoulli()` | Samples from a Bernoulli distribution |
| `rcategorical()` | Samples from a discrete probability distribution |
| `rconstant()` | Sets the node to a constant value |
| | |
| `node_gaussian()` | Generates a node based on a (mixed) linear regression model |
| `node_binomial()` | Generates a node based on a (mixed) logistic regression model |
| `node_conditional_prob()` | Samples from a conditional discrete probability distribution |
| `node_conditional_distr()` | Samples from different distributions conditional on the values of other variables |
| `node_multinomial()` | Generates a node based on a multinomial regression model |
| `node_poisson()` | Generates a node based on a (mixed) Poisson regression model |
| `node_negative_binomial()` | Generates a node based on a negative binomial regression model |
| `node_zeroinfl()` | Generates a node based on a zero-inflated Poisson or negative binomial regression model |
| `node_identity()` | Generates a node based on an R expression that includes previously generated nodes |
| `node_mixture()` | Generates a node as a mixture of other node types |
| `node_cox()` | Generates a node based on a Cox proportional hazards regression model, using the method of @Bender2005 |
| `node_aftreg()`, `node_ahreg()`, `node_ehreg()`, `node_poreg()`, `node_ypreg()` | Generates a node based on various parametric survival models as implemented in `rsurv` [@Demarqui2024] |
| | |
| `node_time_to_event()` | A node based on repeated Bernoulli trials over time, intended to generate time-varying variables and time-to-event nodes |
| `node_competing_events()` | A node based on repeated multinomial trials over time, intended to generate time-varying variables and time-to-event nodes |

Table 1: A brief overview of all implemented node types. The first section contains functions that may only be used to generate root nodes, while the second section contains functions that may only be used to generate child nodes. Nodes in both of these sections may be used in both `node()` and `node_td()` calls. The nodes in the third section are only meant to be used for discrete-time simulations.
In the following Section we will illustrate how to use the `simDAG` package to simulate more complex cross-sectional data. Instead of using an artificial example, we will do this by partially replicating a real Monte-Carlo simulation study by @Denz2023, published in Statistics in Medicine. Because this package strictly focuses on the data generation step of Monte-Carlo studies, and for reasons of brevity, we will not reproduce the entire simulation study. Instead, we will only replicate the DGP used to generate the data for this study.
@Denz2023 recently performed a neutral comparison study of multiple different methods to estimate counterfactual survival curves from cross-sectional observational data. In this Monte-Carlo simulation study they wanted to investigate how different kinds of misspecification of nuisance models, which are used in some methods to estimate the counterfactual survival curves, affect the estimates produced by the different methods. To do this, a DGP was required that includes multiple interrelated binary and continuous variables, as well as a right-censored time-to-event variable.
In particular, the data sets they generated for each simulation run included six covariates, two of which were binary ($X_1, X_3$) and four of which were continuous ($X_2, X_4, X_5, X_6$). They additionally included a binary treatment variable ($Z$) and a right-censored time-to-event outcome ($T$). The two binary covariates $X_1$ and $X_3$ followed a simple Bernoulli distribution with a success probability of 0.5. $X_2$ was generated by a linear regression model, dependent on $X_3$. The two continuous covariates $X_4$ and $X_6$ were standard normally distributed, while $X_5$ was generated according to a linear regression model dependent on $X_6$. The treatment variable $Z$ followed a logistic regression model, dependent on $X_2$, $X_3$, $X_5$ and $X_6$, where $X_2$ was included as a squared term. Finally, the outcome $T$ was generated according to a Cox model, dependent on $X_1$, $X_2$, $X_4$, $X_5$ and $Z$, where $X_5$ was included as a squared term. Data for $T$ was generated using the method of @Bender2005, based on a Weibull distribution ($\lambda=2, \gamma=2.4$). The time until censoring was generated independently from a second Weibull distribution ($\lambda=1, \gamma=2$).
This DGP was used because it includes two confounders ($X_2$, $X_5$) for the causal effect of $Z$ on $T$, which are correlated with other non-confounding variables. The inclusion of non-linear relationships allowed @Denz2023 to investigate different kinds of misspecified models. More details on the DGP and the simulation study itself are given in the original manuscript. To replicate the DGP of this study, the following `DAG` definition may be used:
dag <- empty_dag() + node(c("X1", "X3"), type="rbernoulli", p=0.5, output="numeric") + node(c("X4", "X6"), type="rnorm", mean=0, sd=1) + node("X2", type="gaussian", formula=~0.3 + X3*0.1, error=1) + node("X5", type="gaussian", formula=~0.3 + X6*0.1, error=1) + node("Z", type="binomial", formula=~-1.2 + I(X2^2)*log(3) + X3*log(1.5) + X5*log(1.5) + X6*log(2), output="numeric") + node("T", type="cox", formula=~X1*log(1.8) + X2*log(1.8) + X4*log(1.8) + I(X5^2)*log(2.3) + Z*-1, surv_dist="weibull", lambda=2, gamma=2.4, cens_dist="rweibull", cens_args=list(shape=1, scale=2))
As before, we can define the nodes $X_1$ and $X_3$ and the nodes $X_4$ and $X_6$ using a single call to `node()` each, because they have the same definition. Since only directly supported regression models are required for this DGP, we were able to use the `formula` argument to directly write out all of the structural equations.
Plotting this `DAG` using the `plot()` method produces the following output:
```r
plot(dag, node_size=0.3, node_fill="grey", node_text_fontface="italic")
```
The underlying structural equations are:
```r
summary(dag)
```
Finally, we can generate a single `data.table` from this `DAG` using the `sim_from_dag()` function. The first few rows of the generated data look like this:
library("data.table") dat <- sim_from_dag(dag, n_sim=500) head(round(dat, 3))
Although only eight nodes were defined in the corresponding `DAG`, the data includes nine columns. The reason for this is that nodes generated using `type="cox"` create two columns by default: one for the observed time and one indicating whether the observation is right-censored. This is the standard data format for such time-to-event data and one of the reasons supplied node functions are allowed to output multiple columns at the same time. This of course also extends to custom user-defined node `type`s. When no censoring distribution is supplied, it would also be possible to return only the simulated time-to-event in a single column by setting `as_two_cols=FALSE` in the `node()` call for $T$ (see the sketch below).
For many applications the desired DGP may contain one or more variables that change over time. If that is the case, longitudinal data must be generated. There are two approaches to do this using the algorithm implemented in the proposed package: (1) defining one node per point in time of interest and (2) constructing one node definition for all points in time that should be considered simultaneously. In this Section the first approach will be illustrated by replicating the DGP that was used in the Monte-Carlo simulation study performed by @Gruber2015.
In their study, @Gruber2015 compared the efficiency of different methods of estimating inverse probability weights for marginal structural models. Such models are often used to estimate marginal causal effects of treatment regimes from longitudinal observational data with time-varying variables [@Hernan2020]. The main aim was to quantify the benefits of using ensemble methods versus traditional methods, such as logistic regression models, when estimating the required weights. Their DGP therefore required the inclusion of time-varying variables. Below we focus on the DGP these authors used in their first simulation scenario.
Their DGP consisted of four variables: a binary treatment variable $A$, a binary unmeasured baseline covariate $U$, a continuous confounder $L$ and a binary outcome $Y$. Since $U$ represents a baseline variable, it does not vary over time. All other variables, however, were generated at two distinct time points. $L$ and $A$ were generated for times $0$ and $1$, while the time-lagged outcome $Y$ was generated for times $1$ and $2$. For simplicity, $A$ was affected only by present and past $L$ and not by previous values of $A$ itself. $L$, on the other hand, was caused by previous values of itself and by $A$ and $U$. Finally, $Y$ was only caused by $U$, meaning that neither the treatment nor the confounder have any actual effect on the outcome. To generate continuous child nodes they relied on linear regression models. For binary child nodes, logistic regression models were used. A more detailed description of the data generation process is given in the original article [@Gruber2015]. A `DAG` object to define this DGP in the proposed package is given below.
dag <- empty_dag() + node("U", type="rbernoulli", p=0.5, output="numeric") + node("L0", type="gaussian", formula=~0.1 + 0.6*U, error=1) + node("A0", type="binomial", formula=~-0.4 + 0.6*L0, output="numeric") + node("Y1", type="binomial", formula=~-3.5 + -0.9*U, output="numeric") + node("L1", type="gaussian", formula=~0.5 + 0.02*L0 + 0.5*U + 1.5*A0, error=1) + node("A1", type="binomial", formula=~-0.4 + 0.02*L0 + 0.58*L1, output="numeric") + node("Y2", type="binomial", formula=~-2.5 + 0.9*U, output="numeric")
Shown graphically, the `DAG` looks like this:
```r
plot(dag, node_size=0.2, node_fill="grey", node_text_fontface="italic",
     layout="in_circle")
```
The structural equations can again be printed using the `summary()` function:
```r
summary(dag)
```
Finally, we can call the `sim_from_dag()` function on this `DAG` to generate some data:
```r
dat <- sim_from_dag(dag, n_sim=1000)
head(dat)
```
Because the variables $A$, $L$ and $Y$ should be generated at different points in time, we have to include one node definition per variable per point in time to get an appropriate `DAG`. Apart from that, the syntax is exactly the same as it was when generating cross-sectional data. More points in time could be added by simply adding more calls to `node()` to the `DAG` object, using appropriate regression models. The main advantage of this method is that it allows flexible changes of the DGP over time. The obvious shortcoming, however, is that it does not easily extend to scenarios with many points in time. Although the authors only considered three time-varying variables and two distinct points in time here, the `DAG` definition is already quite cumbersome. For every further point in time, three new calls to `node()` would be necessary. If hundreds of points in time should be considered, this method is essentially infeasible. To circumvent these problems, we describe a slightly different way of generating longitudinal data below.
Instead of defining each node for every point in time separately, it is often possible to define the structural equations in a generic time-dependent fashion, so that the same equation can be applied at each point in time, simplifying the workflow considerably. The result is the description of a specific stochastic process. For example, consider a very simple DAG with only one time-dependent node $Y$ at three points in time, $t \in \{0, 1, 2\}$:
```r
set.seed(129)

dag <- empty_dag() +
  node("Y_0", type="rnorm") +
  node("Y_1", type="binomial", formula=~ 1 + Y_0*2) +
  node("Y_2", type="binomial", formula=~ 1 + Y_1*3)

plot(dag, node_size=0.3, node_fill="grey", node_text_fontface="italic")
```
In this DAG, $Y_t$ is only caused by values of itself at $t-1$. Suppose that $Y$ is a binary event indicator that is zero for everyone at $t = 0$. At every point in time $t$, $Y$ is set to 1 with a probability of 0.01. Once $Y$ is set to 1, it never changes back to 0. The following structural equation may be used to describe this DAG:
$$ Y_t \sim Bernoulli(P_Y(t)), $$
where
$$
P_Y(t) =
\begin{cases}
0 & \text{if} \quad t = 0 \\
1 & \text{if} \quad Y_{t-1} = 1 \\
0.01 & \text{otherwise}
\end{cases}
$$
The number of points in time could be increased by an arbitrary number and the same structural equation could still be used. Note that the time points may stand for any period of time, such as minutes, days or years. This is of course a trivial example, but this approach may also be used to define very complex DGPs, as illustrated below. For example, arbitrary dependencies on other variables measured at the same time or at any earlier time may be used when defining a node. To generate data from this type of DAG, the same algorithm as described in the first Section may be used. Since only discrete points in time are considered, this type of simulation has also been called discrete-time simulation [@Tang2020] or dynamic microsimulation [@Spooner2021] in the literature and is closely related to discrete-event simulation [@Banks2014].
To generate data for the simple example considered above in the proposed package, we first have to define an appropriate `DAG` object as before. This can be done using the `node_td()` function with `type="time_to_event"`, as shown below.
dag <- empty_dag() + node_td("Y", type="time_to_event", prob_fun=0.01, event_duration=Inf)
By default, the value of a node defined using `type="time_to_event"` is 0 for all individuals at $t = 0$. The `prob_fun` argument defines the function that determines the occurrence probability at each point in time. It is set to 0.01 here, indicating that the probability of experiencing the event is constant for all individuals, regardless of the value of $t$. Usually this argument will be passed an appropriate function that generates the occurrence probability for each individual at each point in time separately, but this is not necessary yet. By setting the `event_duration` argument to `Inf`, we indicate that all events have an infinite duration, making the first event the final event for each person. In contrast to before, the `sim_discrete_time()` function now has to be used to generate data from this `DAG` object, because it contains a time-varying node:
```r
sim <- sim_discrete_time(dag, n_sim=1000, max_t=80)
```
The `max_t` argument specifies how many points in time should be simulated. Instead of just three points in time, we consider 80 here. Contrary to the `sim_from_dag()` function, the `sim_discrete_time()` function does not return a single `data.table`. Instead, it returns a `simDT` object, which usually has to be processed further using the `sim2data()` function to be useful. In this case, however, it is enough to simply extract the last state of the simulation to get all the information we need. This last simulation state is stored in the `$data` element of the `simDT` object:
```r
head(sim$data)
```
As specified, the simulation contains only the variable $Y$, split into two columns. The first is called `Y_event` and is a binary indicator of whether the individual is currently experiencing an event. The second column, called `Y_time`, shows the time at which that event happened, and is set to `NA` if no event is currently happening. Since every event is final here, this is all the information that was generated. Individuals with a value of `NA` in `Y_time` can be considered right-censored at $t = 80$. In this trivial example, it would be much easier to generate equivalent data by sampling from an appropriate parametric distribution, as sketched below. The following Section will illustrate the benefits of the approach using a more involved example.
Suppose that we want to generate data for the Covid-19 pandemic, containing individual level information about Covid-19 vaccinations, denoted by $A$, and the development of an acute myocarditis, denoted by $Y$. Different individuals get vaccinated at different times, possibly multiple times. Additionally, they might experience zero or multiple cases of myocarditis, also at different times. Both variables are therefore time-dependent binary variables, which are related to one another. If the target of interest is to estimate the effect of the vaccination on the time until the occurrence of a myocarditis, the vaccination may be considered a time-dependent variable and the myocarditis a non-terminal, possibly recurrent, time-to-event outcome. This setup was of interest to @Denz2023a, who performed a Monte-Carlo simulation study to investigate the impact of linkage errors when estimating vaccine safety from observational data. Below we illustrate a simplified version of the DGP used therein.
We will make multiple simplifying assumptions that were not made in the original simulation study. First, we assume that the probability of being vaccinated and the base probability of developing a myocarditis are constant over time and equal for all individuals. The only risk factor for developing a myocarditis in this example is the Covid-19 vaccination itself. More precisely, the structural equation for the myocarditis node at $t$ is given by:
$$ Y_t \sim Bernoulli(P_{Y}(t)), $$
with:
$$
P_{Y}(t) =
\begin{cases}
P_{Y0} \cdot RR_{A} & \text{if } t \in \left[T_{A}(t),\, T_{A}(t) + d_{risk}\right] \\
P_{Y0} & \text{otherwise}
\end{cases}
$$
where $P_{Y0}$ denotes the base probability of developing a myocarditis, $T_{A}(t)$ denotes the time of the last performed vaccination and $d_{risk}$ defines the duration after the vaccination in which the risk of developing a myocarditis is elevated by $RR_{A}$. In this particular case, each $t$ will represent a single day. Similarly, the vaccination node can be described formally as:
$$ A_t \sim Bernoulli(P_{A}(t)), $$
with:
$$
P_{A}(t) =
\begin{cases}
1 & \text{if } t \in \left[T_{A}(t),\, T_{A}(t) + 20\right] \\
0 & \text{if } t \in \left[T_{A}(t) + 21,\, T_{A}(t) + 150\right] \\
P_{A0} & \text{otherwise}
\end{cases}
$$
where $P_{A0}$ denotes the base probability of getting vaccinated. The Figure below illustrates the result of applying these structural equations to a fictional person who gets vaccinated at $t = 100$, using $P_{A0} = 0.01$, $P_{Y0} = 0.005$, $d_{risk} = 20$ and $RR_A = 3.24$.

```r
knitr::include_graphics("./images_v_joss/example_probs.png")
```
The following code may be used to define this DGP:

```r
prob_myoc <- function(data, P_0, RR_A) {
  fifelse(data$A_event, P_0*RR_A, P_0)
}

dag <- empty_dag() +
  node_td("A", type="time_to_event", prob_fun=0.01, event_duration=20,
          immunity_duration=150) +
  node_td("Y", type="time_to_event", prob_fun=prob_myoc,
          parents=c("A_event"), P_0=0.005, RR_A=3.24)
```
First, we define a function called `prob_myoc()` that calculates $P_Y(t)$ for all individuals at each simulated day. This function simply checks whether the binary event indicator of the vaccination event, `A_event`, is currently `TRUE` and multiplies the baseline probability of developing a myocarditis, $P_{Y0}$, with the relative risk if this is the case. Otherwise, it returns the baseline probability directly. This function is then passed to the `prob_fun` argument in the `node_td()` call defining the myocarditis node. For this to be a sensible strategy, we need to ensure that `A_event` is only `TRUE` when a person is currently at elevated risk of myocarditis, as defined in the earlier Equation. In other words, `A_event` should not be an indicator of whether someone just received a vaccination, but of whether the person is currently in the risk period of 20 days following the vaccination. We achieve this by setting the `event_duration` parameter in the node definition of the vaccination node to 20, meaning that the vaccination node will equal 1 for 20 days after a vaccination was performed. This argument is a direct feature of the `node_time_to_event()` function, which is called internally whenever `type="time_to_event"` is used in a time-dependent node.
The base probabilities for the vaccination and myocarditis events are set to the arbitrary values of 0.01 and 0.005, respectively. $RR_{A}$ is set to 3.24, which is the value used in the actual simulation study by @Denz2023a. The `immunity_duration` parameter of the vaccination node additionally specifies that a person will not receive another vaccination in the first 150 days after a vaccination was performed. More specifically, these settings ensure that `A_event` is set to `FALSE` for 130 days after the `event_duration` of 20 days is over. This is another feature of `"time_to_event"` nodes. To run the simulation for two simulated years, the following code may be used:
```r
sim <- sim_discrete_time(dag, n_sim=10000, max_t=365*2)
```
Since both of the included variables change over time and may have multiple events, it is not appropriate to only look at the last state of the simulation in this case. Instead, we have to use the `sim2data()` function to obtain a usable dataset. The following code may be used to get a dataset in the start-stop format:
dat <- sim2data(sim, to="start_stop", target_event="Y", keep_only_first=TRUE, overlap=TRUE) head(dat)
In this format, there are multiple rows per person, corresponding to periods of time in which all variables stayed the same. These intervals are defined by the `start` column (time at which the period starts) and the `stop` column (time at which the period ends). The `overlap` argument specifies whether these intervals should overlap. By setting `target_event="Y"`, the function treats the `Y` node as the outcome instead of as another time-dependent covariate. The resulting data is in exactly the format needed to fit standard time-to-event models, such as a Cox model with time-varying covariates [@Zhang2018a]. Using the `survival` package [@Therneau2024], we can do this with the following code:
library("survival") mod <- coxph(Surv(start, stop, Y) ~ A, data=dat) summary(mod)
Because the DGP fulfills all assumptions of the Cox model and the sample size is very large, the resulting hazard ratio is close to the relative risk of 3.24 that was specified in the `DAG` object. Other types of models, such as discrete-time survival models, may require different data formats [@Tutz2016]. Because of this, the `sim2data()` function also supports transformation into long- and wide-format data, as sketched below.
Above we assumed that the presence of a myocarditis event has no effect on whether a person gets vaccinated. In reality, it is unlikely that a person currently experiencing a myocarditis would get a Covid-19 vaccine. We can include this in the DGP by modifying the `prob_fun` argument of the vaccination node, as shown below:
```r
prob_vacc <- function(data, P_0) {
  fifelse(data$Y_event, 0, P_0)
}

dag <- empty_dag() +
  node_td("A", type="time_to_event", prob_fun=prob_vacc,
          event_duration=20, immunity_duration=150, P_0=0.01) +
  node_td("Y", type="time_to_event", prob_fun=prob_myoc,
          parents=c("A_event"), P_0=0.005, RR_A=3.24, event_duration=80)
```
In this extension, the probability of getting vaccinated is set to 0 whenever the respective person is currently experiencing a myocarditis event. By additionally setting the `event_duration` parameter of the myocarditis node to the arbitrary value of 80, we define that there will be no vaccination for that person in the 80 days after experiencing the event. Data could be generated and transformed using the same code as before.
The example shown above illustrates only a small fraction of the possible features that could be included in a DGP using the discrete-time simulation algorithm implemented in the proposed package. Below we list further features that could theoretically be added. Examples of how to actually implement these extensions are given in the appendix.
Time-Dependent Probabilities and Effects: In the example above, both the base probabilities and the relative risks were set to constants, regardless of the simulation time. This may be changed by re-defining the function passed to `prob_fun` to have a named argument called `sim_time`. Internally, the current time of the simulation will then be passed directly to the `prob_fun`, allowing users to define any time-dependencies, as in the sketch below.
Non-Linear Effects: Above, we only considered the simple scenario in which the probability of an event after the occurrence of another event follows a simple step function, i.e., it is set to $P_0$ in general and increased by a fixed factor for a fixed duration after an event. This may be changed by using more complex definitions of `prob_fun` that are based not only on the `_event` column of the exposure, but directly on its time of occurrence, as in the sketch below.
Adding more Binary Time-Dependent Variables: Only two time-dependent variables were considered in the example. It is of course possible to add as many further variables as desired by simply adding more appropriate `node_td()` calls to the `DAG` object. Both the `sim_discrete_time()` function and the `sim2data()` function will still work exactly as described earlier.
Adding Baseline Covariates: It is of course also possible to define a `DAG` that contains both time-independent and time-dependent variables at the same time. This can be achieved by adding nodes with both the `node()` and the `node_td()` functions. Data for the time-independent variables will then be generated first (at $t = 0$) and the time-dependent simulation will be performed as before, albeit possibly dependent on the time-independent variables, as in the sketch below.
Adding Categorical Time-Dependent Variables: Sometimes it may not be appropriate to describe a time-varying variable using only two classes. For these cases, the proposed package also includes the `"competing_events"` node type, which is very similar to the `"time_to_event"` node type. The only difference is that, instead of Bernoulli trials, multinomial trials are performed at each point in time for all individuals, allowing the generation of mutually exclusive events.
Adding Arbitrarily Distributed Time-Dependent Variables: As in the `sim_from_dag()` function, any kind of function may be used as a `type` for time-dependent variables. By appropriately defining those, it is possible to generate count and continuous time-varying variables as well; one possible sketch is given below.
Adding Ordered Events: Some events should only be possible once something else has already happened. For example, a person usually cannot receive a PhD before graduating high school. This type of data may also be generated, again by appropriately specifying the required probability functions.
All of the mentioned and shown features may of course be used in arbitrary combinations. Additionally, the list above is not meant to be exhaustive. It is merely supposed to highlight how versatile the implemented approach is.
Although the discrete-time simulation procedure described above is very flexible, it can also be very computationally expensive. For example, when using nodes of type `"time_to_event"`, as shown above, the program has to re-calculate the event probabilities at each point in time, for every individual and for every node. Increasing either the number of nodes, the number of time points or the number of considered individuals therefore non-linearly increases the number of required computations. This fact, together with the lack of appropriate software packages, is probably the main reason why this type of simulation strategy is rarely used in Monte-Carlo simulation studies. Since Monte-Carlo studies often require thousands of replications with hundreds of different scenarios, even a runtime of a few seconds to generate a single dataset may be too long to keep the simulation computationally feasible.
The presented implementation was therefore explicitly designed to be as fast as possible. To ensure that it can be run on multiple processing cores in parallel, it is additionally designed to use as little random access memory (RAM) as possible. Like all other code in this package, it exclusively uses the `data.table` package [@Barrett2024] to perform most required computations. The `data.table` package is arguably the best choice for data wrangling in the R ecosystem in terms of computational efficiency and shows similar performance to corresponding software libraries in Python and Julia [@Chiou2023]. The proposed package additionally relies on a few tricks to keep the amount of memory used small. For example, when using nodes of type `"time_to_event"`, not every state of the simulation is saved by default. Only the times at which an event occurred are recorded. This information is then efficiently pieced together in the `sim2data()` function when creating the actual dataset.
Although none of these code optimizations can entirely overcome the inherent computational complexity of the approach, they do ensure that the method can feasibly be used to generate reasonably large datasets with many points in time on a regular office computer. For example, generating a dataset with 1000 individuals and 730 distinct points in time from the first `DAG` shown in the Section containing the Covid-19 example takes only 1.11 seconds on average on the author's personal computer, using a single processing core (Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz, 32GB RAM). Note, however, that the runtime also depends heavily on how efficiently the functions supplied to the `prob_fun` arguments are written.
Below we give some additional code to illustrate how the runtime of the `sim_discrete_time()` function increases with increasing `n_sim` and `max_t` values. As an example, we use the first DGP shown in the Section on the Covid-19 example to generate data for multiple different values of each argument. We include the calls to the `sim2data()` function in the runtime calculations, because calling this function is necessary to obtain usable data and should therefore be considered part of the DGP. The runtime is measured using the `microbenchmark` package [@Mersmann2023].
```r
# NOTE: This part of the code is not run here, because it would take too
# long on CRAN and would introduce another dependency on the
# "microbenchmark" package. Results may also vary depending on the
# hardware this is run on.
library(microbenchmark)

set.seed(1234)

prob_myoc <- function(data, P_0, RR_A) {
  fifelse(data$A_event, P_0*RR_A, P_0)
}

run_example <- function(n, max_t) {
  dag <- empty_dag() +
    node_td("A", type="time_to_event", prob_fun=0.01, event_duration=20,
            immunity_duration=150) +
    node_td("Y", type="time_to_event", prob_fun=prob_myoc,
            parents=c("A_event"), P_0=0.005, RR_A=3.24)

  sim <- sim_discrete_time(dag, n_sim=n, max_t=max_t)
  dat <- sim2data(sim, to="start_stop", target_event="Y",
                  keep_only_first=TRUE, overlap=TRUE)
}

n <- c(10, 100, 1000, 10000, 100000)
max_t <- c(10, 100, 1000, 10000)

params <- data.frame(max_t=rep(max_t, each=length(n)),
                     n=rep(n),
                     time=NA)

for (i in seq_len(nrow(params))) {
  n_i <- params$n[i]
  max_t_i <- params$max_t[i]

  bench <- microbenchmark(run_example(n=n_i, max_t=max_t_i), times=1)
  params$time[i] <- mean(bench$time / 1000000000)
}

params <- within(params, {
  max_t <- factor(max_t)
})

ggplot(params, aes(x=n, y=time, color=max_t)) +
  geom_point() +
  geom_line() +
  theme_bw() +
  labs(x="n_sim", y="Runtime in Seconds", color="max_t") +
  scale_x_continuous(labels=scales::label_comma(), transform="log10") +
  scale_y_log10()
```
knitr::include_graphics("./images_v_joss/runtime.png")
The runtime increases more sharply with higher values of `max_t` than with higher values of `n_sim`, because each additional point in time internally translates to one more iteration of an R `for` loop, while each additional individual translates to one more row in the generated data, which is processed directly using optimized `data.table` code.
In this article we presented the `simDAG` R package and illustrated how it can be used to generate various forms of artificial data that may be required for Monte-Carlo simulation studies. In particular, we showed how the package may be used to generate cross-sectional and longitudinal data with multiple variables of different data types, non-linear effects and arbitrary causal dependencies. We showed how similar the syntax is for very different kinds of DGPs and how closely the required code resembles the actual underlying structural equations when using built-in node types. Because the package is based on defining a DAG to describe the DGP, it lends itself particularly well to simulation studies dealing with causal inference, but it is by no means limited to applications in this field, as shown in the second-to-last Section of the article.
In addition to the main data generation functions, the package also includes multiple functions to facilitate the accurate description of the underlying DGP. Such descriptions are of great importance for both the understanding and the reproducibility of simulation studies, as emphasized in the literature [@Morris2019; @Cheng2016]. Among these functions is the plot.DAG() method, which was used throughout the article to graphically display the defined DAG objects. While this function is useful on its own, some users may prefer one of the many other options to display DAGs in R [@Pitts2024]. To make this easier, the package also contains the dag2matrix() function, which returns the underlying adjacency matrix of a DAG object, and the as.igraph.DAG() function for direct integration with the igraph R package [@Csardi2024]. Additionally, the summary.DAG() method may be used to directly print the used structural equations, as shown throughout the article.
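To give a rough idea of how these helpers fit together, the following minimal sketch defines a small three-node DAG and applies each of them. The node types and coefficient values are illustrative only; the output is omitted here:

# A small example DAG with three nodes
dag <- empty_dag() +
  node("A", type="rnorm", mean=0, sd=1) +
  node("B", type="rbernoulli", p=0.5) +
  node("C", type="gaussian", formula= ~ -2 + A*0.5 + B*1.1, error=1)

plot(dag)                # graphical display of the DAG
dag2matrix(dag)          # underlying adjacency matrix
igraph::as.igraph(dag)   # conversion to an igraph object
summary(dag)             # prints the implied structural equations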
The most distinguishing feature of the package is its capability to carry out discrete-time simulations, which makes it possible to generate longitudinal data with hundreds of points in time in a reasonable amount of time. While other packages, such as the simcausal package [@Sofrygin2017], offer similar features for generating cross-sectional data, their implementation of longitudinal data generation differs substantially from the proposed package. In simcausal, a new node is internally defined for each point in time. Although the user has direct access to each of these nodes (and therefore to each value at any point in time), the provided formula interface does not naturally support the definition of nodes with events that occur at some point in time and last for a certain duration. This can be done with little effort in the simDAG package using the provided "time_to_event" node type, as illustrated below.
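For instance, a node whose events, once they occur, persist for a fixed number of days requires only a single call (a minimal sketch using the same API as in the rest of this article):

# An event with a constant daily occurrence probability of 0.01 that,
# once it occurs, lasts for 30 consecutive days
dag <- empty_dag() +
  node_td("X", type="time_to_event", prob_fun=0.01, event_duration=30)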
This type of node can be used to specify outcomes as well as binary time-varying covariates, as illustrated in the main text where we used two such nodes to describe both the Covid-19 vaccination status and the associated adverse event. Using just this node type, it is therefore possible to define DGPs including a time-to-event outcome (possibly with recurrent events) with multiple time-varying covariates. While there are multiple other methods to generate some forms of time-to-event data with time-varying covariates [@Hendry2013; @Austin2012; @Huang2020; @Ngwa2022], most of them require strict parametrizations or do not support the use of multiple, arbitrarily distributed time-varying variables. Additionally, none of these methods allows the inclusion of recurrent events or competing events. None of these restrictions apply to the discrete-time simulation approach.
However, the method also has two main disadvantages. First, it is much more computationally expensive than other methods. Second, it usually requires the user to define appropriate functions to generate the time- and individual-specific probabilities per node. Although the inherent computational complexity cannot be removed, it is alleviated in the implementation of this package through the use of optimized code and the data.table back-end [@Barrett2024]. As shown in the article, the presented implementation allows the generation of large datasets in a reasonable amount of time. A computationally more efficient alternative not considered in this package would be discrete-event simulation [@Banks2014], in which the time until the next event is modeled directly instead of simulating the entire process over time. Performing such simulations is, however, usually much more demanding, both conceptually and in terms of required software development [@Zhang2018]. The burden of specifying appropriate input to the prob_fun argument in our approach is comparatively small, but it might still be a concern for some users. We hope that the many examples and explanations provided in this article, as well as the extensive documentation and multiple vignettes of the package, will help users overcome this issue.
To keep this article at a reasonable length, it was necessary to omit some implemented features of the simDAG package. One of these features is its capability to generate data for complex multi-level DGPs. By internally using code from the simr package [@Green2016], it allows users to add arbitrarily complex random effects and random slopes to nodes of type "gaussian", "binomial" and "poisson". This can be done by directly adding standard mixed model R syntax to the formula interface, as sketched below.
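As a rough sketch of how this might look, the following example adds a random intercept per school to a "gaussian" node. Note that the var_corr argument used here to specify the random-effects variance is our reading of the interface; the exact argument names should be checked in the package documentation:

# A hypothetical multi-level DGP: students nested in 10 schools
dag <- empty_dag() +
  node("school", type="rcategorical", probs=rep(0.1, 10)) +
  node("ability", type="rnorm", mean=0, sd=1) +
  node("score", type="gaussian",
       formula= ~ 50 + ability*3 + (1|school),  # random intercept per school
       var_corr=5, error=10)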
Additionally, while the main focus of the package is on generating data from a user-defined DGP, it offers limited support for generating data that mimics an existing dataset through the dag_from_data() function. This function only requires the user to specify the causal dependencies assumed to be present in the data and the type of each node; it then extracts coefficients and intercepts directly from the supplied data by fitting appropriate models to it. The returned DAG is fully specified and can be used directly in a standard sim_from_dag() call to obtain data similar to the data supplied (a minimal sketch of this workflow is given below). Note that this function does not directly support all available node types. If the main goal of the user is to generate such synthetic mimicked datasets, packages such as simPop [@Templ2017] might be preferable.
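The following sketch assumes an existing data.frame called observed_data with columns A, B and C, and assumes that dag_from_data() returns a list whose dag element holds the fully specified DAG (see the package documentation for the exact return structure):

# Hypothetical example dataset; in practice this would be real data
observed_data <- data.frame(A=rnorm(100), B=rnorm(100), C=rnorm(100))

# Specify only the assumed causal structure and the node types
dag <- empty_dag() +
  node("A", type="rnorm") +
  node("B", type="gaussian", parents="A") +
  node("C", type="gaussian", parents=c("A", "B"))

# Extract coefficients and intercepts from the data, then simulate from it
est <- dag_from_data(dag, data=observed_data)
sim_dat <- sim_from_dag(est$dag, n_sim=1000)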
Finally, we would like to note that the package is still under active development. We are currently working on multiple new features to make the package even more versatile. For example, future versions of the package are planned to support the definition of interventions on the DAG, much like the simcausal package [@Sofrygin2017], which would make it even easier to generate data for causal inference based simulations without having to re-define the DAG multiple times. We also plan to extend the internal library of available node types, for example by including node functions to simulate competing events data without the use of discrete-time simulation [@Morina2017; @Haller2014].
The results in this paper were obtained using R r paste0(R.Version()$major, ".", R.Version()$minor) with the data.table r packageVersion("data.table") package, the survival r packageVersion("survival") package, the igraph r packageVersion("igraph") package, the ggplot2 r packageVersion("ggplot2") package and the simDAG r packageVersion("simDAG") package. R itself and all packages used are available from the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/.
We would like to thank Katharina Meiszl for her valuable input and for contributing some unit tests and input checks to the proposed package. We would also like to thank the anonymous reviewers, whose feedback and suggestions greatly improved the manuscript and the package itself.
As mentioned in the main text, there are many possible features that could be included in a DGP using the discrete-time simulation approach which are not shown in the main text. Below we give some simple examples of how to use the discrete-time simulation approach implemented in the simDAG package to generate artificial data with some of these useful features. For most of these examples, the first DGP shown in the Covid example Section will be extended.
To make $P_{Y0}$ time-dependent, the sim_time argument may be used to change the definition of the function that generates the myocarditis probabilities. The following code gives an example:
prob_myoc <- function(data, P_0, RR_A, sim_time) {
  P_0 <- P_0 + 0.001*sim_time
  fifelse(data$A_event, P_0*RR_A, P_0)
}

dag <- empty_dag() +
  node_td("A", type="time_to_event", prob_fun=0.01,
          event_duration=20, immunity_duration=150) +
  node_td("Y", type="time_to_event", prob_fun=prob_myoc,
          parents=c("A_event"), P_0=0.005, RR_A=3.24)

sim <- sim_discrete_time(dag, n_sim=100, max_t=200)
data <- sim2data(sim, to="start_stop")
head(data)
In this example, the base probability of myocarditis grows by 0.001 with each passing day, while the relative risk of myocarditis given a recent vaccination stays constant. More complex time-dependencies may of course be implemented as well. The same procedure could also be used to make the vaccination probabilities time-dependent.
In some scenarios it may also be required to make the effect itself vary over time. Here, we could make the size of the relative risk of developing myocarditis given a recent Covid-19 vaccination dependent on calendar time, again by using the sim_time argument in the respective prob_fun:
prob_myoc <- function(data, P_0, RR_A, sim_time) {
  RR_A <- RR_A + 0.01*sim_time
  fifelse(data$A_event, P_0*RR_A, P_0)
}

dag <- empty_dag() +
  node_td("A", type="time_to_event", prob_fun=0.01,
          event_duration=20, immunity_duration=150) +
  node_td("Y", type="time_to_event", prob_fun=prob_myoc,
          parents=c("A_event"), P_0=0.005, RR_A=3.24)

sim <- sim_discrete_time(dag, n_sim=100, max_t=200)
data <- sim2data(sim, to="start_stop")
head(data)
Here, the relative risk increases by 0.01 with each passing day, meaning that the risk of myocarditis given a recent vaccination increases over time.
In all previous examples, it was assumed that the effect of the vaccination on the probability of developing myocarditis follows a step function: the risk was instantly elevated by a constant factor ($RR_A$) after vaccination, stayed elevated for a specified amount of time and then instantly dropped back to the baseline risk. Any other kind of relationship may also be simulated by again changing the prob_myoc() function accordingly. The following code may be used:
prob_myoc <- function(data, P_0, RR_A) {
  RR_A <- RR_A - 0.1*data$A_time_since_last
  fifelse(data$A_event, P_0*RR_A, P_0)
}

dag <- empty_dag() +
  node_td("A", type="time_to_event", prob_fun=0.01,
          event_duration=20, immunity_duration=150,
          time_since_last=TRUE) +
  node_td("Y", type="time_to_event", prob_fun=prob_myoc,
          parents=c("A_event", "A_time_since_last"),
          P_0=0.005, RR_A=3.24)

sim <- sim_discrete_time(dag, n_sim=100, max_t=200)
data <- sim2data(sim, to="start_stop")
head(data)
In this code, $RR_A$ decreases with each day after a person is vaccinated. In contrast to the previous example, this happens on a person-specific time scale and not on the calendar time scale. To do this properly, we set the time_since_last argument inside the node_td() call of the vaccination definition to TRUE. This argument is supported when using nodes of type "time_to_event". It adds another column to the dataset which contains, for each individual, the time since the last vaccination was performed. This column is then also added to the parents vector, so that we can use it in the prob_myoc() function. Here we simply subtract 0.1 from $RR_A$ for each day after vaccination. This means that on the day of vaccination itself, the relative risk will be 3.24 as specified in the DAG. On the first day after the vaccination, however, the relative risk will only be 3.14, and so forth. In other words, the extent of the adverse effect of the vaccination decreases linearly over time. Again, more complex functions may also be used to model this type of non-linear effect.
Another possibility to extend the DGP would be to add more "time_to_event" variables. For example, we may want to additionally consider the effect of a Covid-19 infection itself, denoted by $C_t$ here. For simplicity, we will assume that Covid-19 has a constant probability of occurrence over time which is the same for all individuals. In this example we will further assume that the vaccination reduces the risk of getting a Covid-19 infection to 0 for $d_{immune}$ days after the vaccination was performed. We may use the following structural equation to describe this variable:
$$ C_t \sim Bernoulli(P_{C}(t)), $$
with:
$$ P_{C}(t) = \begin{cases} 0, & \text{if } t \in \left[T_{A}(t), T_{A}(t) + d_{immune}\right] \\ P_{C0}, & \text{otherwise} \end{cases}, $$
where $P_{C0}$ is the baseline probability of experiencing a Covid-19 infection and $T_A(t)$ is still defined to be the time of the last Covid-19 vaccination, as before. In addition, we will also change the definition of the myocarditis node ($Y_t$). Instead of being dependent only on $A_t$, the Covid-19 infection should now also raise the probability of developing myocarditis by a constant factor $RR_C$ in the $d_{C.risk}$ days after the infection. The structural equation can then be changed to:
$$ Y_t \sim Bernoulli(P_{Y}(t)), $$
with:
$$ P_{Y}(t) = \begin{cases} P_{Y0} \cdot RR_{A} \cdot RR_{C}, & \text{if } t \in \left[T_{A}(t), T_{A}(t) + d_{A.risk}\right] \text{ and } t \in \left[T_{C}(t), T_{C}(t) + d_{C.risk}\right] \\ P_{Y0} \cdot RR_{C}, & \text{if only } t \in \left[T_{C}(t), T_{C}(t) + d_{C.risk}\right] \\ P_{Y0} \cdot RR_{A}, & \text{if only } t \in \left[T_{A}(t), T_{A}(t) + d_{A.risk}\right] \\ P_{Y0}, & \text{otherwise} \end{cases}, $$
where $d_{A.risk}$ is the duration after vaccination in which the risk of developing myocarditis is elevated by $RR_A$ and $T_C(t)$ denotes the time of the last Covid-19 infection, defined analogously to $T_A(t)$. The structural equations for $A_t$ are the same as defined in the previous equations. The following code may be used to generate data from this DGP:
prob_myoc <- function(data, P_0, RR_A, RR_C) {
  P_0 * RR_A^(data$A_event) * RR_C^(data$C_event)
}

prob_covid <- function(data, P_0, d_immune) {
  fifelse(data$A_time_since_last < d_immune, 0, P_0, na=P_0)
}

dag <- empty_dag() +
  node_td("A", type="time_to_event", prob_fun=0.01,
          event_duration=20, immunity_duration=150,
          time_since_last=TRUE) +
  node_td("C", type="time_to_event", prob_fun=prob_covid,
          parents=c("A_time_since_last"), event_duration=14,
          P_0=0.01, d_immune=120) +
  node_td("Y", type="time_to_event", prob_fun=prob_myoc,
          parents=c("A_event", "A_time_since_last", "C_event"),
          P_0=0.005, RR_A=3.24, RR_C=2.5)

sim <- sim_discrete_time(dag, n_sim=100, max_t=200)
data <- sim2data(sim, to="start_stop")
head(data)
In this code we first re-define the prob_myoc() function to include the effect of the Covid-19 infection. This uses a small computational trick: any number to the power of 0 is 1 and any number to the power of 1 is itself. Since TRUE is treated as 1 and FALSE as 0 in R, we can simply raise the relative risks to the power of their current event indicators, so that the baseline risk is multiplied by the respective relative risks only for individuals who are currently inside the corresponding risk durations. Next, we define a new function called prob_covid() to include the immunity duration, again using the time_since_last functionality. Finally, by adding another node_td() call to the DAG object, we are able to specify the DGP in the desired way.
Previously, only time-dependent variables were included in the DGP. By adding calls to the simple node() function to the DAG object, it is also possible to include time-independent variables. Suppose that we want the baseline probability $P_{A0}$ of the vaccination to vary by biological sex in the first DGP described in the Covid example Section. We could do this using the following code:
prob_myoc <- function(data, P_0, RR_A) {
  fifelse(data$A_event, P_0*RR_A, P_0)
}

prob_vacc <- function(data, P_0) {
  fifelse(data$Sex, P_0*2, P_0)
}

dag <- empty_dag() +
  node("Sex", type="rbernoulli", p=0.4) +
  node_td("A", type="time_to_event", prob_fun=prob_vacc,
          parents=c("Sex"), P_0=0.01,
          event_duration=20, immunity_duration=150) +
  node_td("Y", type="time_to_event", prob_fun=prob_myoc,
          parents=c("A_event"), P_0=0.005, RR_A=3.24)

sim <- sim_discrete_time(dag, n_sim=100, max_t=200)
data <- sim2data(sim, to="start_stop")
head(data)
Here, everything is kept the same as in the original example, with the small change that we now add a call to node() before adding the time-dependent variables, defining the Sex node using a simple Bernoulli distribution. Additionally, we define a function that generates the probability of vaccination per Sex and use it as the prob_fun of the vaccination node. This is done by simply multiplying $P_0$ by 2 whenever the value of Sex is TRUE (which might stand for males or females). It would also be possible to add time-independent child nodes as well, but this is left as an exercise to the interested reader.
The previous examples all included only binary time-dependent variables. For some applications it may be necessary to use categorical time-dependent variables. For example, instead of generating the vaccination status as binary, we may want to model it as a categorical variable with multiple levels, indicating which kind of vaccine the person received (if any). This may be done using the "competing_events" node type. Below is a simple example:
prob_myoc <- function(data, P_0, RR_A) {
  fifelse(data$A_event > 0, P_0*RR_A, P_0)
}

prob_vacc <- function(data) {
  n <- nrow(data)
  p_mat <- matrix(c(rep(0.99, n), rep(0.0005, n), rep(0.0005, n)),
                  byrow=FALSE, ncol=3)
  return(p_mat)
}

dag <- empty_dag() +
  node_td("A", type="competing_events", prob_fun=prob_vacc,
          event_duration=c(20, 20), immunity_duration=150) +
  node_td("Y", type="time_to_event", prob_fun=prob_myoc,
          parents=c("A_event"), P_0=0.005, RR_A=3.24)

sim <- sim_discrete_time(dag, n_sim=100, max_t=200, save_states="all")
data <- sim2data(sim, to="start_stop")
head(data)
The "competing_events"
node type
works in much the same way as the "time_to_event"
node type, with the main difference being that instead of using Bernoulli trials it relies on multinomial trials, which are equivalent to drawing a simple random sample with unequal probabilities for each element. Instead of supplying a single probability of success per person and per point in time, the prob_fun
supplied to a "competing_events"
node is therefore expected to provide a vector of probabilities. In the example above, we simply define the probabilities to be the same for everyone regardless of the simulated time. The first category represents "no vaccination" while the next two categories specify vaccinations with vaccines of different kinds. Note that in this code, the prob_myoc()
function only checks whether there was any vaccination, so it does not really make a difference whether one uses a "competing_events"
or "time_to_event"
node here.
To generate more realistic person-specific multinomial probabilities, the node_multinomial() function may be helpful. Because the underlying code for multinomial trials is a bit more complex than the code for simple Bernoulli trials, using this type of node may lead to an increased runtime. The node type is usually most useful when the goal is to generate artificial time-to-event data with mutually exclusive types of events, also known as competing events or competing risks data. Also note that in the example we had to set the save_states argument of the sim_discrete_time() function to "all". This is required whenever the goal is to later transform the data into the start-stop format and the DGP does not consist solely of time-dependent nodes of type "time_to_event". A person-specific variant of the prob_vacc() function is sketched below.
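As a rough sketch, a person-specific version of the prob_vacc() function above could look as follows, assuming a time-independent logical Sex node (like the one used in an earlier example) had also been added to the DAG:

# Hypothetical person-specific multinomial probabilities: one row per
# individual, one column per category (no vaccination, vaccine 1, vaccine 2)
prob_vacc <- function(data) {
  p_none <- fifelse(data$Sex, 0.985, 0.99)
  p_each <- (1 - p_none) / 2
  matrix(c(p_none, p_each, p_each), byrow=FALSE, ncol=3)
}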
It is also possible to use continuous variables as time-dependent variables in the proposed package. The following code gives a very simple example:
dag <- empty_dag() +
  node("calories", type="rnorm", mean=2500, sd=150) +
  node_td("calories", type="gaussian",
          formula= ~ 1 + calories*1.1, error=1)

sim <- sim_discrete_time(dag, n_sim=100, max_t=200, save_states="all")
data <- sim2data(sim, to="long")
head(data)
In this example, we first generate a normally distributed root node called calories using a standard node() call. This represents the value of the variable at $t = 0$. If we did not specify this variable, the code would return an error at $t = 1$, because there would be no value of calories to use in the subsequently defined regression model. Next, a call to node_td() is added to the DAG object to specify how this variable changes with each step in time. We use a simple linear regression model, in which the only independent variable is the last value of the variable itself. In the subsequent simulation we again set save_states="all" to specify that the data at all points in time should be saved. This is necessary here both because the DAG contains time-dependent nodes that are not of type "time_to_event" and because the variable changes at every point in time for every individual. Because of the nature of the data, it would also not make sense to transform the data into the start-stop format. Instead, the long format is chosen here.
The following code gives an example of how events could be simulated so that a specific order of events is always respected:
prob_bachelors <- function(data) {
  fifelse(data$highschool_event, 0.01, 0)
}

prob_masters <- function(data) {
  fifelse(data$bachelors_event, 0.01, 0)
}

dag <- empty_dag() +
  node_td("highschool", type="time_to_event", prob_fun=0.01,
          event_duration=Inf) +
  node_td("bachelors", type="time_to_event", prob_fun=prob_bachelors,
          event_duration=Inf, parents="highschool_event") +
  node_td("masters", type="time_to_event", prob_fun=prob_masters,
          event_duration=Inf, parents="bachelors_event")

sim <- sim_discrete_time(dag, n_sim=100, max_t=200)
data <- sim2data(sim, to="start_stop")
head(data)
In this specification, we simulate the educational status of a person over time. We assume that individuals can only obtain a bachelor's degree once they have finished high school, and that they can only receive a master's degree once they have finished the bachelor's degree. To keep this order of events intact, we split the variable into its three constituent parts. First, the indicator of whether someone graduated high school (highschool) is simulated using simple Bernoulli trials as implemented in the "time_to_event" node type. By setting event_duration=Inf, we ensure that no one ever "loses" their degree. Only afterwards do we generate whether the same person also received a bachelors degree. By using a standard fifelse() call, we can easily specify that the probability of obtaining a bachelors degree is 0 for individuals without a highschool degree. Subsequently, we do the same for the masters node.
Again, this simulation could be made much more realistic. For example, in the example used here there are no limitations on how close the different graduations can be to each other: an individual might receive all three degrees in the same time unit. This could be changed by using the time_since_last argument discussed earlier to enforce a minimum gap between consecutive degrees, as sketched below.
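A minimal sketch of this idea is given below, requiring a gap of at least 365 days between the high school graduation and the bachelors degree. Here we assume, as in the earlier example, that the highschool_time_since_last column counts the days since the high school event first occurred:

# Probability of obtaining a bachelors degree: 0 until at least 365 days
# have passed since high school graduation (na=0 covers individuals who
# have not graduated high school yet)
prob_bachelors <- function(data) {
  fifelse(data$highschool_time_since_last >= 365, 0.01, 0, na=0)
}

dag <- empty_dag() +
  node_td("highschool", type="time_to_event", prob_fun=0.01,
          event_duration=Inf, time_since_last=TRUE) +
  node_td("bachelors", type="time_to_event", prob_fun=prob_bachelors,
          event_duration=Inf, parents="highschool_time_since_last")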