deepAFT    R Documentation
Fit a deep learning survival regression model. These are location-scale models for an arbitrary transform of the time variable; the most common cases use a log transformation, leading to accelerated failure time models.
deepAFT(x, ...)
## S3 method for class 'formula'
deepAFT(formula, model, data, control = list(...), method =
c("BuckleyJames", "ipcw", "transform"), ...)
## Default S3 method:
deepAFT(x, y, model, control, ...)
## S3 method for class 'ipcw'
deepAFT(x, y, model, control, ...)
# use:
# deepAFT.ipcw(x, y, model, control)
# or
# class(x) = "ipcw"
# deepAFT(x, y, model, control)
#
## S3 method for class 'trans'
deepAFT(x, y, model, control, ...)
# use:
# class(x) = "transform"
# deepAFT(x, y, model, control)
formula: a formula expression as for other regression models. The response is usually a survival object as returned by the 'Surv' function. See the documentation for 'Surv', 'lm' and 'formula' for details.
model: a deep neural network model; see below for details.
data: a data.frame in which to interpret the variables named in the formula.
x: covariates for the AFT model.
y: a Surv object for the AFT model.
method: the method used to handle censored data in the deep AFT model fit: 'BuckleyJames' for the Buckley and James method, 'ipcw' for the inverse probability of censoring weights method, and 'transform' for the transformation approach of Fan and Gijbels (1996, page 168); see the sketch after this list.
control: a list of control values, in the format produced by 'dnnControl'. The default value is 'dnnControl()'.
...: optional arguments.
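As a sketch of how the method is typically chosen (object names are illustrative; x is a covariate matrix, time and status the survival outcome, and model a dNNmodel, as in the Examples section):
fit <- deepAFT(Surv(time, status) ~ x, model, method = "ipcw")   # formula interface
class(x) <- "ipcw"                                               # or tag x and use the matrix interface
fit <- deepAFT(x, Surv(time, status), model, dnnControl())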
See "Deep learning with R" for details on how to build a deep learning model.
The following parameters in 'dnnControl' will be used to control the model fit process.
'epochs': number of deep learning epochs, default is 100.
'batch_size': batch size, default is 128. 'NaN' may be generated if the batch size is too small and there is no event in a batch.
'verbose': verbose = 1 to print progress during the model fit, 0 to suppress printing.
'epsilon': epsilon for convergence check, default is epsilon = 0.001.
'max.iter': maximum number of iterations, default is max.iter = 100.
'censor.groups': a vector of censoring groups. A Kaplan-Meier curve for censoring will be fitted for each group. If a matrix is provided, then a Cox model will be used to predict the censoring probability.
When the variance of the covariate matrix X is too large, standardize X, for example with xbar = apply(x, 2, stndx).
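As an illustration of these control settings (the values below are arbitrary, dnnControl() is assumed to accept them as named arguments, and x, time, status and model are as in the Examples section):
ctl  <- dnnControl(epochs = 200, batch_size = 64, verbose = 1, epsilon = 0.001, max.iter = 50)
xbar <- scale(x)                 # standardize the covariates when their variance is large
fit  <- deepAFT(Surv(time, status) ~ xbar, model, control = ctl, method = "BuckleyJames")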
An object of class "deepAFT" is returned. The deepAFT object contains the following list components:
x: covariates for the AFT model.
y: the survival object for the AFT model, y = Surv(time, event).
model: the fitted artificial neural network (ANN) model.
mean.ipt: mean survival or censoring time.
predictor: predictor score mu = f(x).
risk: risk score = exp(predictor).
method: the method used for the deepAFT fit, either Buckley-James, IPCW or the transformed model.
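For example, assuming 'fit' is an object returned by deepAFT (as in the Examples section), these components can be read off directly:
fit$predictor     # predictor score mu = f(x)
fit$risk          # risk score exp(predictor)
fit$mean.ipt      # mean survival or censoring time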
For right-censored survival time only.
Author(s): Chen, B. E. and Norman, P.
Buckley, J. and James, I. (1979). Linear regression with censored data. Biometrika, 66, 429-436.
Norman, P. Li, W., Jiang, W. and Chen, B. E. (2024). DeepAFT: A nonparametric accelerated failure time model with artificial neural network. Manuscript submitted to Statistics in Medicine.
Chollet, F. and Allaire J. J. (2017). Deep learning with R. Manning.
print.deepAFT, survreg, ibs.deepAFT
## Example of a deep learning model for AFT survival data
library(survival)   # provides Surv(); deepAFT and dNNmodel are assumed to be loaded from their package
set.seed(101)
### define model layers
model = dNNmodel(units = c(4, 3, 1), activation = c("elu", "sigmoid", "sigmoid"),
input_shape = 3)
x = matrix(runif(15), nrow = 5, ncol = 3)
time = exp(x[, 1])
status = c(1, 0, 1, 1, 1)
fit = deepAFT(Surv(time, status) ~ x, model)
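A short follow-up on the fit above (print.deepAFT, listed under See Also, is the print method for the returned object):
print(fit)        # print the fitted deepAFT object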