Simulating data with known properties is an essential step in the development of new statistical methods. Simulating survival data, however, is more challenging than most simulation tasks. To describe this difficulty and our approach to overcoming it, we quote from Harden and Kropko (2018):
Typical methods for generating . . . durations often employ known distributions - such as the exponential, Weibull, or Gompertz — that imply specific shapes for the baseline hazard function. This approach is potentially problematic because it contradicts a key advantage of the Cox model — the ability to leave the distribution of the baseline hazard unspecified. By restricting the simulated data to a known (usually parametric) form, these studies impose an assumption that is not required in applications of the Cox model. In applied research, the baseline hazard in data can exhibit considerable heterogeneity, both across the many fields which employ the Cox model and within a given field . . . . Thus, simulating durations from one specific parametric distribution may not adequately approximate the data that many applied researchers analyze, reducing the simulation’s generalizability.
Here we address these problems by introducing a novel method for simulating durations without specifying a particular distribution for the baseline hazard function. Instead, at each iteration of the simulation, the method generates a unique baseline hazard by fitting a cubic spline to randomly drawn points. This produces a wide variety of shapes for the baseline hazard, including unimodal, multimodal, and monotonically increasing or decreasing shapes, among others. The method then generates a density function based on each baseline hazard and draws durations accordingly. Because the shape of the baseline hazard can vary considerably, this approach matches the Cox model's inherent flexibility. By remaining agnostic about the distribution of the baseline hazard, our method better reflects the assumed data generating process (DGP) of the Cox model. Moreover, repeating this process over many iterations in a simulation yields more heterogeneity in the simulated samples of data, which matches the fact that applied researchers analyze a wide variety of data with the Cox model. This increases the generalizability of the simulation results.
In this vignette we demonstrate the uses of the sim.survdata() function for generating survival data. To begin, we load the coxed library:
library(coxed)
The flexible-hazard method described by Harden and Kropko (2018) first generates a baseline failure CDF: it places points at (0, 0) and (T+1, 1), then adds knots additional points with x-coordinates drawn uniformly from the integers in [2, T] and y-coordinates drawn from U[0, 1]. It sorts these coordinates in ascending order (because a CDF must be non-decreasing), and if spline=TRUE it fits a spline through them using Hyman's (1983) monotonicity-preserving cubic interpolation. Next it constructs the failure-time PDF by taking the first differences of the CDF at each time point, generates the survivor function by subtracting the failure CDF from 1, and computes the baseline hazard by dividing the failure PDF by the survivor function. Full technical details are given in Harden and Kropko (2018).
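For intuition, here is a minimal sketch of that construction in base R. This is only an illustration of the logic, not the package's internal code; the values of T and the number of interior knots are choices we make for the example.
T <- 100                                       # maximum duration
knots <- 8                                     # number of interior knots for this illustration
x <- c(0, sort(sample(2:T, knots)), T + 1)     # time points: the two endpoints plus random knots
y <- c(0, sort(runif(knots)), 1)               # CDF values, sorted so the CDF is non-decreasing
cdf <- splinefun(x, y, method = "hyman")(0:T)  # Hyman's monotone cubic spline, evaluated at 0..T
pdf <- diff(cdf)                               # failure-time PDF: first differences of the CDF
surv <- 1 - cdf[-1]                            # survivor function at times 1..T
haz <- pdf / surv                              # baseline hazard: failure PDF divided by survivor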
To generate a survival dataset, use the sim.survdata() function with the num.data.frames argument set to 1. Here we generate a single survival dataset with 1000 observations, in which durations can fall on any integer between 1 and 100:
simdata <- sim.survdata(N=1000, T=100, num.data.frames=1)
The simulated data has several important attributes:
attributes(simdata)
The full survival dataset, including the durations (y), the censoring variable (failed), and the covariates, is contained in the data frame stored in the data attribute:
head(simdata$data, 10)
The xdata attribute contains only the covariates:
head(simdata$xdata, 10)
The baseline attribute contains the baseline functions (failure PDF, failure CDF, survivor function, and hazard) generated by the flexible-hazard method:
head(simdata$baseline, 10)
As we demonstrate below, we can use the survsim.plot() function to plot these baseline functions.
The betas attribute contains the coefficients used in the simulation to generate the durations:
simdata$betas
These coefficients are the "true" coefficients in a data generating process, and a survival model such as the Cox proportional hazards model produces estimates of these coefficients. In this case, the Cox model estimates the coefficients as follows:
model <- coxph(Surv(y, failed) ~ X1 + X2 + X3, data=simdata$data)
model$coefficients
The coefficients and the covariates together form the values of the linear predictor, contained in the xb attribute, and the exponentiated values of the linear predictor, contained in the exp.xb attribute:
head(cbind(simdata$xb, simdata$exp.xb))
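As a quick sanity check, the xb attribute can be reproduced by hand. This reconstruction is ours, and it assumes the linear predictor is formed as the covariate matrix times the coefficient vector, as described above:
# Rebuild the linear predictor from the covariates and coefficients; it should agree with simdata$xb
xb.check <- as.matrix(simdata$xdata) %*% as.numeric(unlist(simdata$betas))
head(cbind(xb.check, simdata$xb))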
In the course of generating simulations, the flexible-hazard method requires expressing each individual observation's survivor function. These functions are contained in the rows of the ind.survive attribute, a matrix with N rows and T columns. For example, here is the survivor function for observation 1:
simdata$ind.survive[1,]
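A quick way to visualize this function is a plain base-R plot (our own illustration, not a package feature):
# Step plot of observation 1's survivor function across the T time points
plot(simdata$ind.survive[1, ], type = "s", xlab = "Time", ylab = "S(t), observation 1")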
The last two attributes, marg.effect and marg.effect.data, are discussed further below.
In a simulation, researchers generally want to create many iterations of simulated data to draw inferences from the sampling variation that produces each dataset. To generate multiple datasets, set the num.data.frames argument to the desired number of iterations. For example, to create 2 datasets, type:
simdata <- sim.survdata(N=1000, T=100, num.data.frames=2)
summary(simdata)
The output is now a list of two objects, where each object contains another version of the attributes described above. For example, to see the data for the second iteration, type:
head(simdata[[2]]$data, 10)
In the above example we created two datasets to reduce the time needed to compile this vignette, but for research applications this number can be set as large as the researcher wants, for example num.data.frames=1000.
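In a typical simulation study, each simulated dataset is analyzed in turn and the estimates are collected. A minimal sketch of that workflow for the two datasets created above (this loop is our own, not a coxed function):
# Fit a Cox model to each simulated dataset and collect the coefficient estimates
estimates <- sapply(simdata, function(d) {
  coxph(Surv(y, failed) ~ X1 + X2 + X3, data = d$data)$coefficients
})
estimates  # one column of estimates per simulated dataset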
The flexible-hazard method is designed to avoid assuming a particular parametric form for the hazard function, but it still employs a hazard function. To see the baseline functions, including the hazard, use the survsim.plot() function. The first argument is the output of the sim.survdata() function. Every simulation iteration produces different durations (and, if fixed.hazard=FALSE, different hazard functions), so it is necessary to use the df argument to specify which simulation iteration to use when plotting histograms and baseline functions. To see the baseline functions for the first simulation iteration, specify type="baseline" as follows:
survsim.plot(simdata, df=1, type="baseline")
To see the histograms of the simulated durations, linear predictors, and exponentiated linear predictors, specify type="hist":
survsim.plot(simdata, df=1, type="hist")
And to see both the baseline functions and the histograms, specify type="both":
survsim.plot(simdata, df=1, type="both")
The survsim.plot() function also has a bins argument, which allows the user to change the number of bins in these histograms (the default is 30).
The sim.survdata() function has a number of arguments that allow the user to change the parameters of the simulation.
The N argument changes the number of observations in each simulated dataset, the T argument changes the maximum possible survival time, xvars sets the number of covariates, and the censor argument specifies the proportion of observations that should be designated as right-censored. By default, N=1000, T=100, xvars=3, and censor=.1. To generate data with 700 observations, durations ranging from 1 to 250, 5 covariates, and 20% of observations right-censored, type:
simdata <- sim.survdata(N=700, T=250, xvars=5, censor=.2, num.data.frames = 1)
summary(simdata$data)
By default the coefficients are drawn from normal distributions, but the user may specify different coefficients with the beta argument:
simdata <- sim.survdata(N=1000, T=100, num.data.frames=1, beta=c(1, -1.5, 2))
simdata$betas
The user can also specify coefficients that depend on time; an example is provided below.
By default, the covariates are drawn from standard normal distributions, but the sim.survdata() function allows a user to specify different data for the covariates using the X argument. Suppose, for example, that we want to use the data from Martin and Vanberg (2003) on the length of time governing coalitions in European democracies needed to conclude negotiations. The data are stored in the coxed package as martinvanberg:
summary(martinvanberg)
We will use the postel, rgovm, and pgovno variables to create a data frame that we pass to the X argument:
mv.data <- dplyr::select(martinvanberg, postel, rgovm, pgovno)
simdata <- sim.survdata(T=100, X=mv.data, num.data.frames = 1)
The data now contain the exogenous covariates and the simulated duration and censoring variables:
head(simdata$data)
The sim.survdata() function also generates a marginal change in duration, conditional on a particular covariate, so that researchers can compare the performance of estimators of this statistic using simulated data. The function calculates simulated durations for each observation conditional on a baseline hazard function and exogenous covariates and coefficients. The covariate argument specifies the variable in the X matrix to vary so as to measure the marginal effect. First the covariate is set to the value specified in low for all observations, then to the value specified in high for all observations. Given each value, new durations are drawn. The durations when the covariate equals the low value are subtracted from the durations when the covariate equals the high value. The marginal effect is calculated by employing the statistic given by compare, which is median by default. The marginal effect itself is stored in the marg.effect attribute, and the durations and covariates for the high and low profiles are stored as a list in the marg.effect.data attribute.
For example, suppose we are interested in comparing the duration when X1 is 1 to the duration when X1 is 0. We specify covariate=1 to indicate that X1 is the variable whose values should be fixed to the ones specified with low=0 and high=1. In this example we fix the coefficients so that X1 has a large effect, to ensure that we see a sizable marginal effect. To simulate with these parameters, we type:
simdata <- sim.survdata(N=1000, T=100, num.data.frames=1, covariate=1, low=0, high=1, beta = c(2, .1, .1))
The simulated marginal effect is:
simdata$marg.effect
The covariates and durations for each of the low and high conditions are stored in marg.effect.data:
head(simdata$marg.effect.data$low$x)
head(simdata$marg.effect.data$low$y)
head(simdata$marg.effect.data$high$x)
head(simdata$marg.effect.data$high$y)
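Following the description above, the stored marginal effect can be reconstructed by hand from these data by subtracting the low-condition durations from the high-condition durations and applying the compare statistic (the median by default). This is our own reconstruction and may differ slightly from the stored value if the internal computation differs:
# Hand reconstruction of the marginal effect: median of the high-minus-low duration differences
median(simdata$marg.effect.data$high$y - simdata$marg.effect.data$low$y)
simdata$marg.effect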
A researcher might want to hold the hazard function fixed from one iteration to the next in order to isolate the effect of different factors. Alternatively, a researcher might want the hazard to vary from one iteration to the next in order to examine a range of possibilities within one analysis and to avoid making restrictive assumptions about the hazard. If fixed.hazard=TRUE, one baseline hazard is generated and the same function is used to generate all of the simulated datasets. If fixed.hazard=FALSE (the default), a new hazard function is generated with each simulation iteration. To illustrate, we create two simulated datasets twice: once with fixed.hazard=TRUE and once with fixed.hazard=FALSE. With fixed.hazard=TRUE, both data frames yield the same baseline functions:
simdata <- sim.survdata(N=1000, T=100, num.data.frames=2, fixed.hazard=TRUE)
survsim.plot(simdata, df=1, type="baseline")
survsim.plot(simdata, df=2, type="baseline")
If fixed.hazard=FALSE, the two data frames yield different baseline functions:
simdata <- sim.survdata(N=1000, T=100, num.data.frames=2, fixed.hazard=FALSE)
survsim.plot(simdata, df=1, type="baseline")
survsim.plot(simdata, df=2, type="baseline")
An important assumption of many survival models is that the mechanism that causes some observations to be right-censored is independent of the covariates in the model. Some researchers, studying the effects of violating a model's assumptions, may want to violate this assumption in the data generating process to see the effect on the model estimates. In the sim.survdata() function, if censor.cond=FALSE, then the proportion of observations specified by censor is randomly and uniformly selected to be right-censored. If censor.cond=TRUE, then censoring depends on the covariates as follows: new coefficients are drawn from normal distributions with mean 0 and standard deviation 0.1, and these new coefficients are used to create a new linear predictor using the X matrix. The observations with the largest (100 x censor) percent of the linear predictor values are designated as right-censored. In other words, if censor.cond=FALSE, then a logit model in which the censoring indicator is regressed on the covariates should yield null results:
simdata <- sim.survdata(N=1000, T=100, num.data.frames=1, censor.cond=FALSE)
logit <- glm(failed ~ X1 + X2 + X3, data=simdata$data, family=binomial(link="logit"))
summary(logit)
If, however, censor.cond=TRUE, then a logit model in which the censoring indicator is regressed on the covariates should yield significant results:
simdata <- sim.survdata(N=1000, T=100, num.data.frames=1, censor.cond=TRUE)
logit <- glm(failed ~ X1 + X2 + X3, data=simdata$data, family=binomial(link="logit"))
summary(logit)
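For intuition, here is a minimal sketch of the conditional-censoring rule described above, written against the simulated covariates with a censoring proportion of .1. This is our own illustration of the mechanism, not the package's internal code:
# Draw new coefficients from N(0, 0.1) and build a new linear predictor from the covariates
X <- as.matrix(simdata$xdata)
b.cens <- rnorm(ncol(X), mean = 0, sd = 0.1)
lp <- as.vector(X %*% b.cens)
# Designate the observations with the largest 10 percent of linear predictor values as right-censored
censored <- lp >= quantile(lp, probs = .9)
table(censored)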
For some applications, a researcher may want to specify their own hazard function rather than relying on the flexible-hazard method to generate hazard functions and draw durations. The sim.survdata() function allows a user to specify a customized hazard function with the hazard.fun argument, which takes a function of one argument representing time. For example, suppose we want a log-normal hazard function whose log-scale mean and standard deviation are log(50) and log(10). We can specify that function as follows:
my.hazard <- function(t){
  # log-normal hazard: density divided by the survivor function,
  # with log-scale mean log(50) and log-scale standard deviation log(10)
  dnorm((log(t) - log(50))/log(10)) /
    (log(10)*t*(1 - pnorm((log(t) - log(50))/log(10))))
}
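Equivalently, the same hazard can be written with R's built-in log-normal density and distribution functions. This is only a stylistic alternative, not something coxed requires:
# h(t) = f(t) / S(t) for a log-normal distribution with meanlog = log(50) and sdlog = log(10)
my.hazard2 <- function(t){
  dlnorm(t, meanlog = log(50), sdlog = log(10)) /
    (1 - plnorm(t, meanlog = log(50), sdlog = log(10)))
}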
We can then pass my.hazard to sim.survdata(). Note that the log-normal hazard function is still positive at T=100, so some durations would naturally fall beyond the final time point. That means many observations will be naturally right-censored above and beyond the amount of right-censoring we have specified with the censor argument (a proportion of .1 by default), simply because we have not allowed the simulated durations to extend to later time points. To reduce the number of additional right-censored observations, we increase T to 1000 to allow for the longer durations that the log-normal hazard function implies. Note that sim.survdata() produces a warning reporting the number of additional right-censored observations:
simdata <- sim.survdata(N=1000, T=1000, num.data.frames=1, hazard.fun = my.hazard)
To see the hazard, we can use the survsim.plot() function with type="baseline":
survsim.plot(simdata, df=1, type="baseline")
Time-varying covariates require a different data structure. The Surv() function in the survival package sets up the dependent variable for a Cox model. Generally it has two arguments: the duration and a censoring variable. But for time-varying covariates it replaces the duration argument with two time arguments, representing the start and end of discrete intervals, which allows a covariate to take on different values across different intervals for the same observation.
The sim.survdata() function generates data with time-varying covariates if type="tvc". Durations are again drawn using proportional hazards and are passed to the permalgorithm() function in the PermAlgo package to generate the time-varying data structure (Sylvestre and Abrahamowicz 2008). If type="tvc", the sim.survdata() function cannot accept user-supplied data for the covariates, because a time-varying covariate is expressed over time frames that themselves convey part of the variation in the durations, and we are generating these durations. If user-supplied X data is provided, the function issues a warning and generates random covariate data instead, as if X=NULL.
To generate survival data with 1000 observations, a maximum duration of 100, and time-varying covariates, type:
simdata <- sim.survdata(N=1000, T=100, type="tvc", num.data.frames=1)
head(simdata$data, 20)
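These data can be analyzed with the counting-process form of Surv() described above. The interval and event column names used below (start, end, failed) are placeholders of ours; check the head() output above and substitute the names that actually appear in the data:
# Counting-process Cox model on the time-varying data
# (start, end, and failed are placeholder names; use the columns shown by head() above)
model.tvc <- coxph(Surv(start, end, failed) ~ X1 + X2 + X3, data = simdata$data)
model.tvc$coefficients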
The Cox model assumes proportional hazards, and one way to violate this assumption is to allow the coefficients to vary over time. The sim.survdata() function generates data using time-varying coefficients if type="tvbeta" is specified. In that case the first coefficient, whether user-supplied or randomly generated, is interacted with the natural log of the time counter from 1 to T (the maximum possible duration). Durations are generated via proportional hazards, and the coefficients are saved as a matrix to illustrate their dependence on time. To generate data with time-varying coefficients, type:
simdata <- sim.survdata(N=1000, T=100, type="tvbeta", num.data.frames = 1)
The coefficients are saved as a matrix with a column to represent time:
head(simdata$betas, 10)
Alternatively, the user can supply the matrix of time-varying coefficients directly. With this approach the coefficient matrix must have the same number of rows as the maximum duration specified with T, which in this case is 100. Suppose that we specify three coefficients for three covariates over the time frame from 1 to 100. We want the first coefficient to be given by $$\beta_{1,t} = \frac{(t - 25)^2}{2500},$$ and the second and third coefficients to be fixed at .5 and -.25, respectively:
beta.mat <- data.frame(beta1 = (1:100 - 25)^2/2500, beta2 = .5, beta3 = -.25)
head(beta.mat)
We pass this matrix to the sim.survdata() function through the beta argument:
simdata <- sim.survdata(N=1000, T=100, type="tvbeta", beta=beta.mat, num.data.frames = 1)
The data from this simulation are as follows:
head(simdata$data, 10)
And the coefficients are as we specified earlier:
head(simdata$betas, 10)
Harden, J. J. and Kropko, J. (2018). "Simulating Duration Data for the Cox Model." Political Science Research and Methods. https://doi.org/10.1017/psrm.2018.19
Hyman, J. M. (1983). "Accurate Monotonicity Preserving Cubic Interpolation." SIAM Journal on Scientific and Statistical Computing 4(4): 645-654. https://doi.org/10.1137/0904045
Martin, L. W. and Vanberg, G. (2003). "Wasting Time? The Impact of Ideology and Size on Delay in Coalition Formation." British Journal of Political Science 33: 323-344. https://doi.org/10.1017/S0007123403000140
Sylvestre, M.-P. and Abrahamowicz, M. (2008). "Comparison of Algorithms to Generate Event Times Conditional on Time-Dependent Covariates." Statistics in Medicine 27(14): 2618-2634. https://doi.org/10.1002/sim.3092