crisk2.bart: BART for competing risks

View source: R/crisk2.bart.R


BART for competing risks

Description

Here we implement another, very flexible approach to competing risks with BART that is akin to discrete-time survival analysis. In keeping with the flexibility of BART, we place no restrictions on how the competing failure times depend on the covariates; in particular, we do not impose proportional hazards.

As in crisk.bart, we employ two BART models, but they differ from the two used there. One is a typical binary BART model that, given that an event of either cause has occurred, discriminates between cause 1 and cause 2; the other is a typical BART survival analysis for an absorbing event from either cause.

To elaborate, consider data in the form (s_i, \delta_i, x_i) where s_i is the event time; \delta_i is an indicator distinguishing events, \delta_i=h for an event due to cause h \in \{1, 2\}, from right-censoring, \delta_i=0; x_i is a vector of covariates; and i=1, ..., N indexes subjects. We denote the K distinct event/censoring times by 0 < t_{(1)} < ... < t_{(K)} < \infty, thus taking t_{(j)} to be the j^{th} order statistic among the distinct observation times and, for convenience, t_{(0)}=0.

First, consider event indicators for an event from either cause: y_{1ij} for each subject i at each distinct time t_{(j)} up to and including the subject's last observation time s_i=t_{(n_i)} with n_i=\arg\max_j [t_{(j)} \leq s_i]. We denote by p_{1ij} the probability of an event at time t_{(j)} conditional on no previous event. We then write the model for y_{1ij} as a nonparametric probit (or logistic) regression of y_{1ij} on the time t_{(j)} and the covariates x_{1i}, and utilize BART for binary responses. Specifically, y_{1ij} = I[\delta_i>0] I[s_i=t_{(j)}], j=1, ..., n_i. Therefore, we have p_{1ij} = F(\mu_{1ij}), \mu_{1ij} = \mu_1 + f_1(t_{(j)}, x_{1i}) where F denotes the Normal (or Logistic) cdf.
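
As a concrete illustration of this construction (which crisk2.bart performs automatically via surv.pre.bart when y.train is NULL), the sketch below expands each subject into one binary record per distinct time point up to and including the subject's last observation time. The helper name make.pre and its return-value names are illustrative only, not the package's internals.

## Sketch of the discrete-time expansion for the either-cause model:
## one binary record y_{1ij} per subject i and grid time t_(j) <= s_i.
make.pre <- function(times, delta, x.train) {
    grid <- sort(unique(times))              ## t_(1) < ... < t_(K)
    y1 <- NULL; tx <- NULL
    for(i in seq_along(times)) {
        n.i <- sum(grid <= times[i])         ## n_i = max j with t_(j) <= s_i
        y1 <- c(y1, as.integer(delta[i] > 0 & grid[seq_len(n.i)] == times[i]))
        tx <- rbind(tx, cbind(t=grid[seq_len(n.i)],
                              x.train[rep(i, n.i), , drop=FALSE]))
    }
    list(y.train=y1, tx.train=tx, times=grid, K=length(grid))
}

In the same spirit, the second model needs only one record per subject with an observed event (\delta_i > 0), with response y_{2i} = I[\delta_i=1] and covariates (s_i, x_{2i}).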

Next, we denote by p_{2i} the probability of a cause 1 event at time s_i conditional on an event having occurred. We write the model for y_{2i} as a nonparametric probit (or logistic) regression of y_{2i} on the time s_i and the covariates x_{2i}, again via BART for binary responses. Specifically, y_{2i} = I[\delta_i=1]. Therefore, we have p_{2i} = F(\mu_{2i}), \mu_{2i} = \mu_2 + f_2(s_i, x_{2i}) where F denotes the Normal (or Logistic) cdf. Although we model p_{2i} at the time of an event, s_i, we can estimate this probability at any other time point on the grid via p_2(t_{(j)}, x_2) = F(\mu_2 + f_2(t_{(j)}, x_2)). Finally, based on these probabilities, p_{hij}, we can construct targets of inference such as the cumulative incidence functions.
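
Given p_1(t_{(j)}, x) and p_2(t_{(j)}, x) on the time grid, the survival function and both cumulative incidence functions follow by a simple recursion. The sketch below shows one way to compute them for a single covariate value and a single posterior draw; p1 and p2 are assumed to be vectors of length K holding these probabilities (illustrative names, not the package's internals).

## Sketch: survival and cause-specific CIFs on the grid t_(1), ..., t_(K).
## p1[j] = P(event of either cause at t_(j) | no prior event, x)
## p2[j] = P(cause 1 | event at t_(j), x)
cif.from.probs <- function(p1, p2) {
    K <- length(p1)
    surv <- cumprod(1 - p1)                ## S(t_(j), x)
    atrisk <- c(1, surv[-K])               ## S(t_(j-1), x) with S(t_(0), x)=1
    cif1 <- cumsum(atrisk * p1 * p2)       ## F_1(t_(j), x): cause 1
    cif2 <- cumsum(atrisk * p1 * (1 - p2)) ## F_2(t_(j), x): cause 2
    list(surv=surv, cif1=cif1, cif2=cif2)  ## note surv + cif1 + cif2 = 1
}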

Usage


crisk2.bart(x.train=matrix(0,0,0), y.train=NULL,
           x.train2=x.train, y.train2=NULL,
           times=NULL, delta=NULL, K=NULL,
           x.test=matrix(0,0,0), x.test2=x.test, 
           sparse=FALSE, theta=0, omega=1,
           a=0.5, b=1, augment=FALSE,
           rho=NULL, rho2=NULL,
           xinfo=matrix(0,0,0), xinfo2=matrix(0,0,0),
           usequants=FALSE, 
           rm.const=TRUE, type='pbart',
           ntype=as.integer(
               factor(type, levels=c('wbart', 'pbart', 'lbart'))),
           k=2, power=2, base=0.95,
           offset=NULL, offset2=NULL,
           tau.num=c(NA, 3, 6)[ntype],
           ntree=50, numcut=100, ndpost=1000, nskip=250,
           keepevery = 10L, printevery=100L,
           id=NULL,    ## crisk2.bart only
           seed=99,    ## mc.crisk2.bart only
           mc.cores=2, ## mc.crisk2.bart only
           nice=19L    ## mc.crisk2.bart only
          )

mc.crisk2.bart(x.train=matrix(0,0,0), y.train=NULL,
              x.train2=x.train, y.train2=NULL,
              times=NULL, delta=NULL, K=NULL,
              x.test=matrix(0,0,0), x.test2=x.test, 
              sparse=FALSE, theta=0, omega=1,
              a=0.5, b=1, augment=FALSE,
              rho=NULL, rho2=NULL,
              xinfo=matrix(0,0,0), xinfo2=matrix(0,0,0),
              usequants=FALSE, 
              rm.const=TRUE, type='pbart',
              ntype=as.integer(
                  factor(type, levels=c('wbart', 'pbart', 'lbart'))),
              k=2, power=2, base=0.95,
              offset=NULL, offset2=NULL,
              tau.num=c(NA, 3, 6)[ntype],
              ntree=50, numcut=100, ndpost=1000, nskip=250,
              keepevery = 10L, printevery=100L,
              id=NULL,    ## crisk2.bart only
              seed=99,    ## mc.crisk2.bart only
              mc.cores=2, ## mc.crisk2.bart only
              nice=19L    ## mc.crisk2.bart only
             )

Arguments

x.train

Covariates for training (in sample) data for an event.
Must be a data.frame or a matrix with rows corresponding to observations and columns to variables.
crisk2.bart will generate draws of f_1(t, x) for each x which is a row of x.train (note that the definition of x.train is dependent on whether y.train has been specified; see below).

y.train

Event binary response for training (in sample) data.
If y.train is NULL, then y.train (along with x.train and x.test, if specified) is generated by a call to surv.pre.bart (which requires that times and delta be provided; see below); otherwise, y.train (along with x.train and x.test, if specified) is used as given, assuming that the data construction has already been performed.

x.train2

Covariates for training (in sample) data for a cause 1 event. Similar to x.train above.

y.train2

Cause 1 event binary response for training (in sample) data. Similar to y.train above.

times

The time of event or right-censoring, s_i.
If y.train is NULL, then times (and delta) must be provided.

delta

The event indicator: 1 for a cause 1 event, 2 for a cause 2 event and 0 for right-censoring.
If y.train is NULL, then delta (and times) must be provided.

K

If provided, then coarsen times per the quantiles 1/K, 2/K, ..., K/K.

x.test

Covariates for test (out of sample) data for an event.
Must be a data.frame or a matrix and have the same structure as x.train.
crisk2.bart will generate draws of f_1(t, x) for each x which is a row of x.test.

x.test2

Covariates for test (out of sample) data for a cause 1 event. Similar to x.test above.

sparse

Whether to perform variable selection based on a sparse Dirichlet prior; see Linero 2016.

theta

Set theta parameter; zero means random.

omega

Set omega parameter; zero means random.

a

Sparse parameter for Beta(a, b) prior: 0.5<=a<=1 where lower values induce more sparsity.

b

Sparse parameter for Beta(a, b) prior; typically, b=1.

rho

Sparse parameter: typically rho=p where p is the number of covariates in x.train.

rho2

Sparse parameter: typically rho2=p where p is the number of covariates in x.train2.

augment

Whether data augmentation is to be performed in sparse variable selection.

xinfo

You can provide the cutpoints to BART or let BART choose them for you. To provide them, use the xinfo argument to specify a list (matrix) where the items (rows) are the covariates and the contents of the items (columns) are the cutpoints; see the sketch following this argument list.

xinfo2

Cutpoints for the covariates of the second BART model, x.train2; similar to xinfo above.

usequants

If usequants=FALSE, then the cutpoints in xinfo are generated uniformly; otherwise, if TRUE, uniform quantiles are used for the cutpoints.

rm.const

Whether or not to remove constant variables.

type

Whether to employ probit BART via Albert-Chib, 'pbart', or logistic BART by Holmes-Held, 'lbart'.

ntype

The integer equivalent of type where 'wbart' is 1, 'pbart' is 2 and 'lbart' is 3.

k

k is the number of prior standard deviations f_h(t, x) is away from +/-3. The bigger k is, the more conservative the fitting will be.

power

Power parameter for tree prior.

base

Base parameter for tree prior.

offset

Binary offset for the first BART model (an event of either cause).

offset2

Binary offset for the second BART model (a cause 1 event given an event); similar to offset above.

tau.num

The numerator in the tau definition, i.e., tau=tau.num/(k*sqrt(ntree)).

ntree

The number of trees in the sum.

numcut

The number of possible cutpoint values (see usequants). If a single number is given, it is used for all variables; otherwise, a vector of length ncol(x.train) is required, where the i^{th} element gives the number of cutpoints for the i^{th} variable in x.train. If usequants is FALSE, numcut equally spaced cutoffs are used to cover the range of values in the corresponding column of x.train. If usequants is TRUE, then min(numcut, the number of unique values in the corresponding column of x.train - 1) cutpoint values are used.

ndpost

The number of posterior draws returned.

nskip

Number of MCMC iterations to be treated as burn-in and discarded.

keepevery

Every keepevery draw is kept to be returned to the user.

printevery

As the MCMC runs, a message is printed every printevery draws.

id

crisk2.bart only: unique identifier added to returned list.

seed

mc.crisk2.bart only: seed required for reproducible MCMC.

mc.cores

mc.crisk2.bart only: number of cores to employ in parallel.

nice

mc.crisk2.bart only: set the job niceness. The default niceness is 19: niceness goes from 0 (highest priority) to 19 (lowest priority).
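
For reference, the sketch below (referred to from the xinfo description above) illustrates one way to build a cutpoint matrix in the layout described there: one row per covariate and one column per cutpoint, here using interior quantiles of a made-up covariate matrix. Whether the time column of the constructed training grid needs its own row depends on how the grid is built, so check the returned tx.train before relying on this.

## Sketch: supplying your own cutpoints via xinfo.
## The covariate matrix x here is made up purely for illustration.
set.seed(1)
x <- cbind(age=rnorm(100, 50, 10), bili=rexp(100))
nc <- 100
xinfo <- t(apply(x, 2, quantile, probs=(1:nc)/(nc + 1)))  ## 2 covariates by nc cutpoints
## xinfo could then be passed to crisk2.bart together with the covariates in x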

Value

crisk2.bart returns an object of type crisk2bart, which is essentially a list. Besides the items listed below, the list contains offset, offset2, times (the unique times), K (the number of unique times), and tx.train and tx.test, if any.

yhat.train

A matrix with ndpost rows and nrow(x.train) columns. Each row corresponds to a draw f^*_1 from the posterior of f_1 and each column corresponds to a row of x.train. The (i,j) value is f^*_1(t, x) for the i^{th} kept draw of f_1 and the j^{th} row of x.train. Burn-in is dropped.

yhat.test

Same as yhat.train but now the x's are the rows of the test data.

surv.test

test data fits for the survival function, S(t, x).

surv.test.mean

mean of surv.test over the posterior samples.

prob.test

The probability of an event of either cause for the test data (see p_{1ij} above).

prob.test2

The probability of a cause 1 event, conditional on an event having occurred, for the test data (see p_{2i} above).

cif.test

The cumulative incidence function of cause 1, F_1(t, x).

cif.test2

The cumulative incidence function of cause 2, F_2(t, x).

cif.test.mean

mean of cif.test columns for cause 1.

cif.test2.mean

mean of cif.test2 columns for cause 2.

varcount

A matrix with ndpost rows and ncol(x.train) columns. Each row corresponds to a draw. For each variable (column), the total count of the number of times that variable is used for an event in a tree decision rule (over all trees) is given.

varcount2

Similar to varcount above: for each variable, the total count of the number of times that variable is used for a cause 1 event in a tree decision rule is given.
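
As with the other BART survival functions, the columns of surv.test, cif.test and cif.test2 run over the K grid times within each row of x.test, so K consecutive columns correspond to one test subject. The sketch below assumes that layout; confirm it against the returned times and K components before use.

## Sketch: posterior mean and 95% interval of the cause 1 CIF for the
## i-th row of x.test, assuming K consecutive columns per test subject,
## where post is a fitted crisk2.bart object.
i <- 1
cols <- (i - 1)*post$K + 1:post$K
cif1.mean  <- apply(post$cif.test[, cols, drop=FALSE], 2, mean)
cif1.lower <- apply(post$cif.test[, cols, drop=FALSE], 2, quantile, probs=0.025)
cif1.upper <- apply(post$cif.test[, cols, drop=FALSE], 2, quantile, probs=0.975)
plot(post$times, cif1.mean, type='s', ylim=c(0, 1), xlab='t', ylab='F1(t, x)')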

See Also

surv.pre.bart, predict.crisk2bart, mc.crisk2.pwbart, crisk.bart
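
A minimal sketch of out-of-sample prediction with predict.crisk2bart is shown below. The newdata/newdata2 argument names and the use of surv.pre.bart to build the test grid are assumptions based on the crisk.bart family of functions; consult the predict.crisk2bart help page for the exact interface.

## Sketch only: posterior CIF draws on a new covariate grid.
## newdata/newdata2 are assumed argument names; see ?predict.crisk2bart.
## pre  <- surv.pre.bart(times=times, delta=1*(delta>0),
##                       x.train=x.train, x.test=x.test)
## pred <- predict(post, newdata=pre$tx.test, newdata2=pre$tx.test,
##                 mc.cores=2)
## str(pred$cif.test.mean)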

Examples


data(transplant)

pfit <- survfit(Surv(futime, event) ~ abo, transplant)

# competing risks for type O
plot(pfit[4,], xscale=7, xmax=735, col=1:3, lwd=2, ylim=c(0, 1),
       xlab='t (weeks)', ylab='Aalen-Johansen (AJ) CI(t)')
    legend(450, .4, c("Death", "Transplant", "Withdrawal"), col=1:3, lwd=2)
## plot(pfit[4,], xscale=30.5, xmax=735, col=1:3, lwd=2, ylim=c(0, 1),
##        xlab='t (months)', ylab='Aalen-Johansen (AJ) CI(t)')
##     legend(450, .4, c("Death", "Transplant", "Withdrawal"), col=1:3, lwd=2)

delta <- (as.numeric(transplant$event)-1)
## recode so that delta=1 is cause of interest; delta=2 otherwise
delta[delta==1] <- 4
delta[delta==2] <- 1
delta[delta>1] <- 2
table(delta, transplant$event)

times <- pmax(1, ceiling(transplant$futime/7)) ## weeks
##times <- pmax(1, ceiling(transplant$futime/30.5)) ## months
table(times)

typeO <- 1*(transplant$abo=='O')
typeA <- 1*(transplant$abo=='A')
typeB <- 1*(transplant$abo=='B')
typeAB <- 1*(transplant$abo=='AB')
table(typeA, typeO)

x.train <- cbind(typeO, typeA, typeB, typeAB)

x.test <- cbind(1, 0, 0, 0)
dimnames(x.test)[[2]] <- dimnames(x.train)[[2]]

## test BART with token run to ensure installation works
set.seed(99)
post <- crisk2.bart(x.train=x.train, times=times, delta=delta,
                   x.test=x.test, nskip=1, ndpost=1, keepevery=1)

## Not run: 

## run one long MCMC chain in one process
## set.seed(99)
## post <- crisk2.bart(x.train=x.train, times=times, delta=delta, x.test=x.test)

## in the interest of time, consider speeding it up by parallel processing
## run "mc.cores" number of shorter MCMC chains in parallel processes
post <- mc.crisk2.bart(x.train=x.train, times=times, delta=delta,
                      x.test=x.test, seed=99, mc.cores=8)

K <- post$K

typeO.cif.mean <- apply(post$cif.test, 2, mean)
typeO.cif.025 <- apply(post$cif.test, 2, quantile, probs=0.025)
typeO.cif.975 <- apply(post$cif.test, 2, quantile, probs=0.975)

plot(pfit[4,], xscale=7, xmax=735, col=1:3, lwd=2, ylim=c(0, 0.8),
       xlab='t (weeks)', ylab='CI(t)')
points(c(0, post$times)*7, c(0, typeO.cif.mean), col=4, type='s', lwd=2)
points(c(0, post$times)*7, c(0, typeO.cif.025), col=4, type='s', lwd=2, lty=2)
points(c(0, post$times)*7, c(0, typeO.cif.975), col=4, type='s', lwd=2, lty=2)
     legend(450, .4, c("Transplant(BART)", "Transplant(AJ)",
                       "Death(AJ)", "Withdrawal(AJ)"),
            col=c(4, 2, 1, 3), lwd=2)
##dev.copy2pdf(file='../vignettes/figures/liver-BART.pdf')
## plot(pfit[4,], xscale=30.5, xmax=735, col=1:3, lwd=2, ylim=c(0, 0.8),
##        xlab='t (months)', ylab='CI(t)')
## points(c(0, post$times)*30.5, c(0, typeO.cif.mean), col=4, type='s', lwd=2)
## points(c(0, post$times)*30.5, c(0, typeO.cif.025), col=4, type='s', lwd=2, lty=2)
## points(c(0, post$times)*30.5, c(0, typeO.cif.975), col=4, type='s', lwd=2, lty=2)
##      legend(450, .4, c("Transplant(BART)", "Transplant(AJ)",
##                        "Death(AJ)", "Withdrawal(AJ)"),
##             col=c(4, 2, 1, 3), lwd=2)


## End(Not run)
