recur.bart    R Documentation
Here we have implemented a simple and direct approach to utilize BART in survival analysis that is very flexible and akin to discrete-time survival analysis. Following the capabilities of BART, we allow for maximum flexibility in modeling the dependence of survival times on covariates. In particular, we do not impose proportional hazards.

To elaborate, consider data in the usual form: (t_i, \delta_i, x_i), where t_i is the event time, \delta_i is an indicator distinguishing events (\delta_i = 1) from right-censoring (\delta_i = 0), x_i is a vector of covariates, and i = 1, ..., N indexes subjects.

We denote the K distinct event/censoring times by 0 < t_{(1)} < ... < t_{(K)} < \infty, taking t_{(j)} to be the j-th order statistic among the distinct observation times and, for convenience, t_{(0)} = 0. Now consider event indicators y_{ij} for each subject i at each distinct time t_{(j)} up to and including the subject's observation time t_i = t_{(n_i)}, with n_i = \sum_j I[t_{(j)} \leq t_i]. This means y_{ij} = 0 if j < n_i and y_{i n_i} = \delta_i.

We then denote by p_{ij} the probability of an event at time t_{(j)} conditional on no previous event. We now write the model for y_{ij} as a nonparametric probit regression of y_{ij} on the time t_{(j)} and the covariates x_i, and then utilize BART for binary responses. Specifically,

    y_{ij} = \delta_i I[t_i = t_{(j)}],  j = 1, ..., n_i;

    p_{ij} = F(\mu_{ij}),  \mu_{ij} = \mu_0 + f(t_{(j)}, x_i),

where F denotes the standard normal cdf (probit link). As in the binary response case, f is the sum of many tree models.
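As a concrete illustration of this construction, below is a minimal sketch (using small assumed example data, not the package's internal code) of how (t_i, \delta_i, x_i) can be expanded into the binary responses y_{ij} on the grid of distinct times; in the package, recur.pre.bart provides this kind of pre-processing.

## Minimal sketch of the discrete-time expansion described above,
## using hypothetical inputs (not package data).
times <- c(3, 5, 5, 9)        ## t_i: event/censoring times
delta <- c(1, 0, 1, 1)        ## delta_i: 1 = event, 0 = right-censored
x     <- matrix(c(0, 1, 0, 1), ncol = 1, dimnames = list(NULL, "trt"))

grid <- sort(unique(times))   ## t_(1) < ... < t_(K)
K <- length(grid)

y.long  <- NULL               ## y_ij stacked over subjects
tx.long <- NULL               ## (t_(j), x_i) rows matching y.long
for(i in seq_along(times)) {
    n.i <- sum(grid <= times[i])            ## n_i = sum_j I[t_(j) <= t_i]
    y.i <- c(rep(0, n.i - 1), delta[i])     ## y_ij = 0 for j < n_i, y_{i n_i} = delta_i
    y.long  <- c(y.long, y.i)
    tx.long <- rbind(tx.long,
                     cbind(t = grid[seq_len(n.i)],   ## time enters as a covariate
                           x[rep(i, n.i), , drop = FALSE]))
}
## y.long and tx.long are now in the long form that binary BART is fit to.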
recur.bart(x.train=matrix(0,0,0),
y.train=NULL, times=NULL, delta=NULL,
x.test=matrix(0,0,0), x.test.nogrid=FALSE,
sparse=FALSE, theta=0, omega=1,
a=0.5, b=1, augment=FALSE, rho=NULL,
xinfo=matrix(0,0,0), usequants=FALSE,
rm.const=TRUE, type='pbart',
ntype=as.integer(
factor(type, levels=c('wbart', 'pbart', 'lbart'))),
k=2, power=2, base=0.95,
offset=NULL, tau.num=c(NA, 3, 6)[ntype],
ntree=50, numcut = 100L, ndpost=1000, nskip=250,
keepevery=10,
printevery = 100L,
keeptrainfits = TRUE,
seed=99, ## mc.recur.bart only
mc.cores=2, ## mc.recur.bart only
nice=19L ## mc.recur.bart only
)
mc.recur.bart(x.train=matrix(0,0,0),
y.train=NULL, times=NULL, delta=NULL,
x.test=matrix(0,0,0), x.test.nogrid=FALSE,
sparse=FALSE, theta=0, omega=1,
a=0.5, b=1, augment=FALSE, rho=NULL,
xinfo=matrix(0,0,0), usequants=FALSE,
rm.const=TRUE, type='pbart',
ntype=as.integer(
factor(type, levels=c('wbart', 'pbart', 'lbart'))),
k=2, power=2, base=0.95,
offset=NULL, tau.num=c(NA, 3, 6)[ntype],
ntree=50, numcut = 100L, ndpost=1000, nskip=250,
keepevery=10,
printevery = 100L,
keeptrainfits = TRUE,
seed=99, ## mc.recur.bart only
mc.cores=2, ## mc.recur.bart only
nice=19L ## mc.recur.bart only
)
x.train: Explanatory variables for training (in sample) data.

y.train: Binary response dependent variable for training (in sample) data.

times: The time of event or right-censoring.

delta: The event indicator: 1 is an event while 0 is censored.

x.test: Explanatory variables for test (out of sample) data.

x.test.nogrid: Occasionally, you do not need the entire time grid for x.test; set this to TRUE in that case.

sparse: Whether to perform variable selection based on a sparse Dirichlet prior rather than simply uniform; see Linero 2016.

theta: Set theta parameter; zero means random.

omega: Set omega parameter; zero means random.

a: Sparse parameter for Beta(a, b) prior: 0.5 <= a <= 1, where lower values induce more sparsity.

b: Sparse parameter for Beta(a, b) prior; typically, b = 1.

rho: Sparse parameter: typically rho = p, where p is the number of covariates under consideration.

augment: Whether data augmentation is to be performed in sparse variable selection.

xinfo: You can provide the cutpoints to BART or let BART choose them for you. To provide them, use the xinfo argument to specify a list (matrix) where the items (rows) are the covariates and the contents of the items (columns) are the cutpoints.

usequants: If usequants = FALSE, then the cutpoints are generated uniformly; otherwise, if TRUE, uniform quantiles are used for the cutpoints.

rm.const: Whether or not to remove constant variables.

type: Whether to employ Albert-Chib, 'pbart', or Holmes-Held, 'lbart'.

ntype: The integer equivalent of type, where 'wbart' is 1, 'pbart' is 2 and 'lbart' is 3.

k: k is the number of prior standard deviations that f(t, x) is away from +/-3. The bigger k is, the more conservative the fitting will be.

power: Power parameter for tree prior.

base: Base parameter for tree prior.

offset: With binary BART, the centering is P(Y=1 | t, x) = F(f(t, x) + offset), where offset defaults to F^{-1}(mean(y.train)). You can use the offset parameter to over-ride this default (a brief sketch of the default appears after this list).

tau.num: The numerator in the tau definition, i.e., tau = tau.num/(k*sqrt(ntree)).

ntree: The number of trees in the sum.

numcut: The number of possible cutpoint values c (see usequants). If a single number is given, this is used for all variables. Otherwise a vector with length equal to ncol(x.train) is required, where the i-th element gives the number of c used for the i-th variable in x.train.

ndpost: The number of posterior draws returned.

nskip: Number of MCMC iterations to be treated as burn in.

keepevery: Every keepevery draw is kept to be returned to the user.

printevery: As the MCMC runs, a message is printed every printevery draws.

keeptrainfits: Whether to keep yhat.train or not.

seed: Setting the seed required for reproducible MCMC; mc.recur.bart only.

mc.cores: Number of cores to employ in parallel; mc.recur.bart only.

nice: Set the job niceness: the default of 19 is the lowest priority and 0 is the highest; mc.recur.bart only.
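The offset default mentioned above can be illustrated with a short sketch (an assumed example under the probit link of type='pbart', with a hypothetical expanded response vector y.long; not package output):

## Hedged sketch of the default binary-BART centering under the probit link:
## offset defaults to F^{-1}(mean(y.train)) = qnorm(mean(y.train)).
y.long <- c(0, 0, 1, 0, 1, 1, 0)        ## hypothetical expanded 0/1 responses
offset.default <- qnorm(mean(y.long))   ## probit-scale centering
offset.default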
recur.bart returns an object of type recurbart which is essentially a list. Besides the items listed below, the list has a binaryOffset component giving the value used, a times component giving the unique times, K which is the number of unique times, and tx.train and tx.test, if any.
yhat.train: A matrix with ndpost rows and columns corresponding to the rows of the expanded training data. Each row corresponds to a draw of f from the posterior, and the (i, j) value is the i-th kept draw of f(t, x) + binaryOffset evaluated at the j-th row.

haz.train: The hazard function, h(t|x), where the x's are the rows of the training data.

cum.train: The cumulative hazard function, H(t|x), where the x's are the rows of the training data.

yhat.test: Same as yhat.train but now the x's are the rows of the test data.

haz.test: The hazard function, h(t|x), where the x's are the rows of the test data.

cum.test: The cumulative hazard function, H(t|x), where the x's are the rows of the test data.

varcount: A matrix with ndpost rows and ncol(x.train) columns. Each row is for a draw. For each variable (corresponding to the columns), the total count of the number of times that variable is used in a tree decision rule (over all trees) is given.
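As a point of interpretation, under the discrete-time formulation above the hazard at each grid time is the conditional event probability p_{ij}, and the cumulative hazard accumulates it over the grid. A minimal sketch of that relationship (an illustration assuming a probit link, not the package's internal code):

## Hedged sketch: for one subject with covariates x, suppose f.draw holds
## f(t_(j), x) + binaryOffset for the K grid times t_(1), ..., t_(K).
f.draw <- rnorm(5)          ## hypothetical probit-scale values, K = 5
haz.x  <- pnorm(f.draw)     ## h(t_(j) | x): discrete-time hazard at each grid time
cum.x  <- cumsum(haz.x)     ## H(t_(j) | x): cumulative hazard over the grid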
Note that yhat.train and yhat.test are f(t, x) + binaryOffset. If you want draws of the probability P(Y=1 | t, x), you need to apply the normal cdf (pnorm) to these values.
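For example, a minimal sketch of that conversion (assuming post is a fit returned by recur.bart or mc.recur.bart with the default probit link):

## Convert f(t, x) + binaryOffset draws to event probabilities P(Y=1 | t, x).
p.train      <- pnorm(post$yhat.train)    ## ndpost x (expanded rows) matrix
p.train.mean <- apply(p.train, 2, mean)   ## posterior mean probability per row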
recur.pre.bart, predict.recurbart, recur.pwbart, mc.recur.pwbart
## load 20 percent random sample
data(xdm20.train)
data(xdm20.test)
data(ydm20.train)
## test BART with a token run to ensure the installation works
## with current technology even a token run will violate CRAN policy
## set.seed(99)
## post <- recur.bart(x.train=xdm20.train, y.train=ydm20.train,
## nskip=1, ndpost=1, keepevery=1)
## Not run:
## set.seed(99)
## post <- recur.bart(x.train=xdm20.train, y.train=ydm20.train,
## keeptrainfits=TRUE)
## larger data sets can take some time so, if parallel processing
## is available, submit this statement instead
post <- mc.recur.bart(x.train=xdm20.train, y.train=ydm20.train,
keeptrainfits=TRUE, mc.cores=8, seed=99)
require(rpart)
require(rpart.plot)
post$yhat.train.mean <- apply(post$yhat.train, 2, mean)
dss <- rpart(post$yhat.train.mean~xdm20.train)
rpart.plot(dss)
## for the 20 percent sample, notice that the top splits
## involve cci_pvd and n
## for the full data set, notice that all splits
## involve ca, cci_pud, cci_pvd, ins270 and n
## (except one at the bottom involving a small group)
## compare patients treated with insulin (ins270=1) vs
## not treated with insulin (ins270=0)
N <- 50 ## 50 training patients and 50 validation patients
K <- post$K ## 798 unique time points
NK <- 50*K
## only testing set, i.e., remove training set
xdm20.test. <- xdm20.test[NK+1:NK, post$rm.const]
xdm20.test. <- rbind(xdm20.test., xdm20.test.)
xdm20.test.[ , 'ins270'] <- rep(0:1, each=NK)
## multiple threads will be utilized if available
pred <- predict(post, xdm20.test., mc.cores=8)
## create Friedman's partial dependence function for the
## relative intensity for ins270 by time
M <- nrow(pred$haz.test) ## number of MCMC samples
RI <- matrix(0, M, K)
for(j in 1:K) {
h <- seq(j, NK, by=K)
RI[ , j] <- apply(pred$haz.test[ , h+NK]/
pred$haz.test[ , h], 1, mean)
}
RI.lo <- apply(RI, 2, quantile, probs=0.025)
RI.mu <- apply(RI, 2, mean)
RI.hi <- apply(RI, 2, quantile, probs=0.975)
plot(post$times, RI.hi, type='l', lty=2, log='y',
ylim=c(min(RI.lo, 1/RI.hi), max(1/RI.lo, RI.hi)),
xlab='t', ylab='RI(t, x)',
sub='insulin(ins270=1) vs. no insulin(ins270=0)',
main='Relative intensity of hospital admissions for diabetics')
lines(post$times, RI.mu)
lines(post$times, RI.lo, lty=2)
lines(post$times, rep(1, K), col='darkgray')
## RI for insulin therapy seems fairly constant with time
mean(RI.mu)
## End(Not run)