additivePenal | R Documentation |
Fit an additive frailty model using semiparametric penalized likelihood estimation or parametric estimation. The main issue in a meta-analysis is how to account for the heterogeneity between trials and between the treatment effects across trials. Additive models are proportional hazards models with two correlated random trial effects that act either multiplicatively on the hazard function or in interaction with the treatment, which makes them suitable for analyzing, for instance, meta-analyses or multicentre datasets. Right-censored data are allowed, but left-truncated data are not. A stratified analysis is possible (maximum number of strata = 2). This approach differs from shared frailty models.
In an additive model, the hazard function for the jth subject in the ith trial, with random trial effect u_i and random treatment-by-trial interaction v_i, is:

\lambda_{ij}(t | u_i, v_i) = \lambda_0(t) exp( u_i + v_i X_{ij1} + sum_{k=1}^{p} \beta_k X_{ijk} )

where \lambda_0(t) is the baseline hazard function, \beta_k the fixed effect associated with the covariate X_{ijk} (k=1,..,p), \beta_1 the treatment effect and X_{ij1} the treatment variable. \rho is the correlation coefficient between the two frailty terms.
additivePenal(formula, data, correlation = FALSE, recurrentAG =
FALSE, cross.validation = FALSE, n.knots, kappa, maxit = 350, hazard =
"Splines", nb.int, LIMparam = 1e-4, LIMlogl = 1e-4, LIMderiv = 1e-3,
print.times = TRUE)
formula |
a formula object, with the response on the left of a '~' operator and the terms on the right. The response must be a survival object as returned by the 'Surv' function of the survival package. |
data |
a 'data.frame' with the variables used in 'formula'. |
correlation |
Logical value. Are the random effects correlated? If so, the correlation coefficient is estimated. The default is FALSE. |
recurrentAG |
Always FALSE for additive models (left-truncated data are not allowed). |
cross.validation |
Logical value. Should the cross-validation procedure be used to estimate the smoothing parameter in the penalized likelihood estimation? If so, a search for the smoothing parameter is carried out by cross validation, with kappa as the seed. Cross validation is not implemented for two strata. The default is FALSE. |
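As a hedged sketch of a cross-validated fit, reusing the dataAdditive example from the end of this page (here kappa serves only as the seed value for the search):

```r
# Let cross validation choose the smoothing parameter; kappa = 10000 is the seed
library(frailtypack)
data(dataAdditive)
mod.cv <- additivePenal(Surv(t1, t2, event) ~ cluster(group) + var1 + slope(var1),
                        data = dataAdditive, correlation = TRUE,
                        n.knots = 8, kappa = 10000,
                        cross.validation = TRUE)
```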
n.knots |
integer giving the number of knots to use. Required for the penalized likelihood estimation. The hazard or survival functions are approximated with n.knots+2 spline basis functions. The number of knots must be between 4 and 20. (See Note) |
kappa |
positive smoothing parameter used in the penalized likelihood estimation. In a stratified additive model, this argument must be a vector with one kappa per stratum. kappa is the coefficient of the integral of the squared second derivative of the hazard function in the fit. To obtain an initial value for kappa, a solution is to estimate it by cross validation (see cross.validation and the Note below). |
maxit |
maximum number of iterations for the Marquardt algorithm. Default is 350. |
hazard |
Type of hazard function: "Splines" for semiparametric hazard functions with penalized likelihood estimation, "Piecewise-per" for piecewise constant hazard functions using percentile intervals, "Piecewise-equi" for piecewise constant hazard functions using equidistant intervals, "Weibull" for parametric Weibull functions. Default is "Splines". |
nb.int |
Number of intervals (between 1 and 20) for the parametric hazard functions ("Piecewise-per", "Piecewise-equi"). |
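As a hedged sketch (assuming, per the argument descriptions above, that n.knots and kappa are needed only for the "Splines" hazard), a parametric fit might look like:

```r
library(frailtypack)
data(dataAdditive)
# Parametric Weibull baseline hazard: no smoothing parameter involved
mod.weib <- additivePenal(Surv(t1, t2, event) ~ cluster(group) + var1 + slope(var1),
                          data = dataAdditive, correlation = TRUE,
                          hazard = "Weibull")
# Piecewise constant baseline hazard on 5 percentile-based intervals
mod.pw <- additivePenal(Surv(t1, t2, event) ~ cluster(group) + var1 + slope(var1),
                        data = dataAdditive, correlation = TRUE,
                        hazard = "Piecewise-per", nb.int = 5)
```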
LIMparam |
Convergence threshold of the Marquardt algorithm for the
parameters (see Details), |
LIMlogl |
Convergence threshold of the Marquardt algorithm for the
log-likelihood (see Details), |
LIMderiv |
Convergence threshold of the Marquardt algorithm for the
gradient (see Details), |
print.times |
a logical parameter to print iteration process. Default is TRUE. |
The estimated parameters are obtained by maximizing the penalized log-likelihood, or the simple log-likelihood in the parametric case, using the robust Marquardt algorithm (Marquardt, 1963). The parameters are initialized with values obtained from a Cox proportional hazards model. The iterations stop when the difference between two consecutive log-likelihoods is small (<10^{-4}), the estimated coefficients are stable (difference between consecutive values <10^{-4}), and the gradient is small enough (<10^{-3}). To ensure a positive function at all stages of the algorithm, the spline coefficients are reparametrized to be positive at each stage. The variance space of the two random effects is reduced so that the variances are positive, and the correlation coefficient is constrained to lie between -1 and 1. The marginal log-likelihood depends on integrals that are approximated using the Laplace integration technique with a first-order approximation. The smoothing parameter can be fixed or estimated by maximizing the likelihood cross-validation criterion. The usual squared Wald statistic is modified to a mixture of two \chi^2 distributions to obtain a significance test for the variance of the random effects.
INITIAL VALUES
The spline and regression coefficients are initialized to 0.1. An adjusted Cox model is then fitted; it provides new initial values for the spline and regression coefficients. The variances of the frailties are initialized to 0.1. An additive frailty model with independent frailties is then fitted. Finally, an additive frailty model with correlated frailties is fitted.
An additive model or more generally an object of class 'additivePenal'. Methods defined for 'additivePenal' objects are provided for print, plot and summary.
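For example, with a fitted object such as modAdd from the example at the end of this page, the usual inspection calls are (component names as listed in the Value section below):

```r
print(modAdd)    # estimated coefficients, sigma^2, tau^2 and rho
summary(modAdd)  # summary of the fitted model
plot(modAdd)     # estimated baseline hazard with confidence bands
modAdd$sigma2    # variance of the random intercept
modAdd$tau2      # variance of the random slope (treatment-by-trial)
modAdd$rho       # correlation between the two frailty terms
```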
b |
sequence of the corresponding estimation of the splines coefficients, the random effects variances and the regression coefficients. |
call |
The code used for fitting the model. |
coef |
the regression coefficients. |
cov |
covariance between the two frailty terms. |
cross.Val |
Logical value. Is cross validation procedure used for estimating the smoothing parameters in the penalized likelihood estimation? |
correlation |
Logical value. Are the random effects correlated? |
DoF |
degrees of freedom associated with the "kappa". |
formula |
the formula part of the code used for the model. |
groups |
the maximum number of groups used in the fit. |
kappa |
A vector with the smoothing parameters in the penalized likelihood estimation corresponding to each baseline function as components. |
loglikPenal |
the complete marginal penalized log-likelihood in the semiparametric case. |
loglik |
the marginal log-likelihood in the parametric case. |
n |
the number of observations used in the fit. |
n.events |
the number of events observed in the fit. |
n.iter |
number of iterations needed to converge. |
n.knots |
number of knots for estimating the baseline functions. |
n.strat |
number of strata. |
rho |
the corresponding correlation coefficient for the two frailty terms. |
sigma2 |
Variance for the random intercept (the random effect associated to the baseline hazard functions). |
tau2 |
Variance for the random slope (the random effect associated to the treatment effect across trials). |
varH |
the variance matrix of all parameters before the positivity-constraint transformation (sigma2, tau2, the regression coefficients and the spline coefficients). The delta method is then needed to obtain the variances of the estimated parameters. |
varHIH |
the robust estimation of the variance matrix of all parameters (Sigma2, Tau2, the regression coefficients and the spline coefficients). |
varSigma2 |
The variance of the estimates of "sigma2". |
varTau2 |
The variance of the estimates of "tau2". |
varcov |
Variance of the estimates of "cov". |
x |
matrix of times where both survival and hazard functions are estimated. By default seq(0,max(time),length=99), where time is the vector of survival times. |
lam |
array (dim=3) of hazard estimates and confidence bands. |
surv |
array (dim=3) of baseline survival estimates and confidence bands. |
median |
The value of the median survival and its confidence bands. If there are two or more strata, the first value corresponds to the first stratum, the second to the second stratum, and so on. |
type.of.hazard |
Type of hazard function (0: "Splines", 1: "Piecewise", 2: "Weibull"). |
type.of.Piecewise |
Type of piecewise hazard function (1: "percentile", 0: "equidistant"). |
nbintervR |
Number of intervals (between 1 and 20) for the parametric hazard functions ("Piecewise-per", "Piecewise-equi"). |
npar |
number of parameters. |
nvar |
number of explanatory variables. |
noVar |
indicator of explanatory variable. |
LCV |
the approximated likelihood cross-validation criterion in the semiparametric case: LCV = (1/n)(trace(H_pl^{-1} H) - l(.)), with H_pl the converged penalized Hessian matrix, H the Hessian of the log-likelihood, and l(.) the full log-likelihood. |
AIC |
the Akaike Information Criterion for the parametric case. |
n.knots.temp |
initial value for the number of knots. |
shape.weib |
shape parameter for the Weibull hazard function. |
scale.weib |
scale parameter for the Weibull hazard function. |
martingale.res |
martingale residuals for each cluster. |
frailty.pred |
empirical Bayes prediction of the first frailty term. |
frailty.pred2 |
empirical Bayes prediction of the second frailty term. |
linear.pred |
linear predictor: Beta'X + u_i + v_i * X_1 in the additive frailty model. |
global_chisq |
a vector with the values of each multivariate Wald test. |
dof_chisq |
a vector with the degree of freedom for each multivariate Wald test. |
global_chisq.test |
a binary variable equal to 0 when no multivariate Wald test is given, 1 otherwise. |
p.global_chisq |
a vector with the p_values for each global multivariate Wald test. |
names.factor |
Names of the "as.factor" variables. |
Xlevels |
vector of the values that factor might have taken. |
contrasts |
type of contrast for factor variable. |
beta_p.value |
p-values of the Wald test for the estimated regression coefficients. |
"kappa" and "n.knots" are the arguments that the user has to change if the fitted model does not converge. "n.knots" takes integer values between 4 and 20, but with n.knots=20 the model can take a long time to converge, so it is usually best to begin with n.knots=7 and increase it step by step until convergence. "kappa" only takes positive values: choose a value (for instance 10000) and, if the model does not converge, multiply or divide it by 10 or 5 until it converges.
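The advice above can be sketched as a simple search loop. This is illustrative only: the convergence code checked below (istop == 1) is an assumption about the fitted object and may differ across frailtypack versions.

```r
library(frailtypack)
data(dataAdditive)
fit <- NULL
# Try kappa = 10000, 100000, ... until the model converges
for (kap in 10000 * c(1, 10, 100, 1000)) {
  fit <- try(additivePenal(Surv(t1, t2, event) ~ cluster(group) + var1 + slope(var1),
                           data = dataAdditive, correlation = TRUE,
                           n.knots = 7, kappa = kap),
             silent = TRUE)
  # 'istop' == 1 is taken here to mean "converged" -- an assumed flag
  if (!inherits(fit, "try-error") && isTRUE(fit$istop == 1)) break
}
```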
V. Rondeau, Y. Mazroui and J. R. Gonzalez (2012). Frailtypack: An R package for the analysis of correlated survival data with frailty models using penalized likelihood estimation or parametric estimation. Journal of Statistical Software 47, 1-28.
V. Rondeau, S. Michiels, B. Liquet, and J. P. Pignon (2008). Investigating trial and treatment heterogeneity in an individual patient data meta-analysis of survival data by means of the maximum penalized likelihood approach. Statistics in Medicine, 27, 1894-1910.
See also: slope
###--- Additive model with 1 covariate ---###
library(frailtypack)
data(dataAdditive)
modAdd <- additivePenal(Surv(t1,t2,event)~cluster(group)+
var1+slope(var1),correlation=TRUE,data=dataAdditive,
n.knots=8,kappa=10000)
#-- var1 is a binary treatment variable