comp: compare survival curves

View source: R/comp.R


compare survival curves

Description

compare survival curves

Usage

comp(x, ...)

## S3 method for class 'ten'
comp(x, ..., p = 1, q = 1, scores = seq.int(attr(x, "ncg")), reCalc = FALSE)

Arguments

x

A ten object

...

Additional arguments (not implemented).

p

p for Fleming-Harrington test

q

q for Fleming-Harrington test

scores

scores for tests for trend

reCalc

Recalculate the values?
If reCalc=FALSE (the default) and the ten object already has the calculated values stored as an attribute, the value of the attribute is returned directly.
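
For example (a minimal sketch of this caching behavior, using the kidney data from the Examples below):

library(survMisc)
data("kidney", package="KMsurv")
t1 <- ten(Surv(time=time, event=delta) ~ type, data=kidney)
comp(t1)                # calculates the tests and stores them as attributes of t1
comp(t1)                # reCalc=FALSE (the default): returns the stored values
comp(t1, reCalc=TRUE)   # discards the stored values and recalculates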

Details

The log-rank tests are formed from the following elements, with values for each time where there is at least one event:

  • W[i], the weights, given below.

  • e[i], the number of events (per time).

  • P[i], the number of predicted events, given by predict.

  • COV[, , i], the covariance matrix for time i, given by COV.

It is calculated as:

Q = sum(W[i] * (e[i] - P[i]))^T * (sum(W[i] * COV[, , i] * W[i]))^-1 * sum(W[i] * (e[i] - P[i]))

If there are K groups, then K-1 of them are selected (the choice is arbitrary).
Likewise, the corresponding variance-covariance matrix is reduced to the appropriate (K-1) x (K-1) dimensions.
Q is distributed as chi-square with K-1 degrees of freedom.
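
As a minimal sketch (not the package's internal code), this quadratic form can be assembled in base R from hypothetical per-time inputs W, e, P and V:

## Minimal sketch only; W, e, P and V are hypothetical inputs:
##   W - numeric vector of weights, one per event time
##   e - matrix of events, times x groups
##   P - matrix of predicted events, times x groups
##   V - array of covariance matrices, dim = c(K, K, number of times)
logrankQ <- function(W, e, P, V) {
  K <- ncol(e)
  U <- colSums(W * (e - P))                          # sum of W[i] * (e[i] - P[i])
  VW <- apply(sweep(V, 3, W^2, "*"), c(1, 2), sum)   # sum of W[i] * COV[, , i] * W[i]
  keep <- seq_len(K - 1)                             # drop one (arbitrary) group
  Q <- t(U[keep]) %*% solve(VW[keep, keep]) %*% U[keep]
  ## Q is approximately chi-square with K - 1 degrees of freedom
  stats::pchisq(as.numeric(Q), df = K - 1, lower.tail = FALSE)
}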

For 2 covariate groups, we can use:

  • e[i] the number of events (per time).

  • n[i] the number at risk overall.

  • e1[i] the number of events in group 1.

  • n1[i] the number at risk in group 1.

Then:

Q = sum(W[i] * (e1[i] - n1[i] * e[i] / n[i])) / sqrt(sum(W[i]^2 * (n1[i] / n[i]) * (1 - n1[i] / n[i]) * ((n[i] - e[i]) / (n[i] - 1)) * e[i]))
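
A minimal base-R sketch of this two-group statistic (the function name and its per-time vector inputs are hypothetical, not part of the package):

## Minimal sketch only; W, n, e, n1, e1 are per-time vectors as defined above.
twoGroupQ <- function(W, n, e, n1, e1) {
  num <- sum(W * (e1 - n1 * e / n))
  den <- sqrt(sum(W^2 * (n1 / n) * (1 - n1 / n) * ((n - e) / (n - 1)) * e))
  Q <- num / den
  ## Q is approximately standard normal under the null; two-sided p-value:
  c(Q = Q, p = 2 * stats::pnorm(abs(Q), lower.tail = FALSE))
}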

Below, for the Fleming-Harrington weights, S(t) is the Kaplan-Meier (product-limit) estimator.
Note that both p and q need to be >=0.

The weights are given as follows:

1             log-rank
n[i]          Gehan-Breslow generalized Wilcoxon
sqrt(n[i])    Tarone-Ware
S1[i]         Peto-Peto's modified survival estimate: S1(t) = cumprod(1 - e / (n + 1))
S2[i]         modified Peto-Peto (by Andersen): S2(t) = S1(t) * n[i] / (n[i] + 1)
FH[i]         Fleming-Harrington: the weight at t[0] is 1 and thereafter is S(t[i - 1])^p * (1 - S(t[i - 1]))^q
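
A minimal sketch of how these weight vectors could be computed from hypothetical per-time counts (the values of n, e, p and q below are illustrative only):

## Minimal sketch only, with made-up per-time counts.
n <- c(10, 8, 6, 4, 2)            # hypothetical numbers at risk
e <- c(1, 2, 1, 1, 1)             # hypothetical numbers of events
p <- 1; q <- 1                    # Fleming-Harrington exponents; both must be >= 0
S  <- cumprod(1 - e / n)          # Kaplan-Meier (product-limit) estimator
S1 <- cumprod(1 - e / (n + 1))    # Peto-Peto's modified survival estimate
Sp <- c(1, S[-length(S)])         # S(t[i - 1]); the weight at t[0] is 1
W <- list(
  logRank           = rep(1, length(n)),
  gehanBreslow      = n,
  taroneWare        = sqrt(n),
  petoPeto          = S1,
  modPetoPeto       = S1 * n / (n + 1),
  flemingHarrington = Sp^p * (1 - Sp)^q)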

The supremum (Renyi) family of tests are designed to detect differences in survival curves which cross.
That is, an early difference in survival in favor of one group is balanced by a later reversal.
The same weights as above are used.
They are calculated by finding

Z(t[i]) = sum(k=1, ..., i) W(t[k]) * (e1[k] - n1[k] * e[k] / n[k])

(which is similar to the numerator used to find Q in the log-rank test for 2 groups above) and its variance:

sigma^2(tau) = sum(t[k] <= tau) W(t[k])^2 * n1[k] * n2[k] * (n[k] - e[k]) * e[k] / (n[k]^2 * (n[k] - 1))

where tau is the largest t where both groups have at least one subject at risk.

Then calculate:

Q = sup( |Z(t)| ) / sigma(tau), t < tau

When the null hypothesis is true, the distribution of Q is approximately

Q ~ sup( |B(x)|, 0 <= x <= 1)

And for a standard Brownian motion (Wiener) process:

Pr[sup|B(t)| > x] = 1 - (4 / pi) * sum(k=0, 1, 2, ...) (-1)^k / (2 * k + 1) * exp(-pi^2 * (2 * k + 1)^2 / (8 * x^2))
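
A minimal sketch of the supremum test for two covariate groups, assuming hypothetical per-time vectors W, n, e, n1 and e1 restricted to times up to tau (the series above is truncated at 100 terms here):

## Minimal sketch only; W, n, e, n1, e1 are per-time vectors for t <= tau.
supQ <- function(W, n, e, n1, e1) {
  n2 <- n - n1
  Z <- cumsum(W * (e1 - n1 * e / n))                 # Z(t[i])
  sigma <- sqrt(sum(W^2 * n1 * n2 * (n - e) * e / (n^2 * (n - 1))))
  Q <- max(abs(Z)) / sigma
  k <- 0:100                                         # truncate the infinite series
  p <- 1 - (4 / pi) * sum((-1)^k / (2 * k + 1) *
                            exp(-pi^2 * (2 * k + 1)^2 / (8 * Q^2)))
  c(Q = Q, p = p)
}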

Tests for trend are designed to detect ordered differences in survival curves.
That is, the ordered alternative hypothesis is:

S1(t) >= S2(t) >= ... >= SK(t) for t <= tau, with at least one strict inequality

where tau is the largest t where all groups have at least one subject at risk. The null hypothesis is that

S1(t) = S2(t) = ... = SK(t) for t <= tau

Scores used to construct the test are typically s = 1,2,...,K, but may be given as a vector representing a numeric characteristic of the group.
They are calculated by finding:

Z[j] = sum(W[i] * (ej[i] - nj[i] * e[i] / n[i])), for j = 1, ..., K,

where ej[i] and nj[i] are the number of events and the number at risk in group j at time t[i].

The test statistic is:

Z = sum(j=1, ..., K) s[j] * Z[j] / sqrt(sum(j=1, ..., K) sum(g=1, ..., K) s[j] * s[g] * sigma[jg])

where sigma[jg] is the appropriate element in the variance-covariance matrix (see COV).
If ordering is present, the statistic Z will be greater than the upper alpha-th percentile of a standard normal distribution.
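
A minimal sketch of this statistic, assuming a hypothetical vector Zj of the per-group values Z[j] and the corresponding K x K variance-covariance matrix sigma:

## Minimal sketch only; Zj and sigma are hypothetical inputs as described above.
trendZ <- function(Zj, sigma, s = seq_along(Zj)) {
  Z <- sum(s * Zj) / sqrt(as.numeric(t(s) %*% sigma %*% s))
  ## compare Z with the upper alpha-th percentile of a standard normal
  c(Z = Z, p = stats::pnorm(Z, lower.tail = FALSE))
}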

Value

The ten object is given additional attributes.
The following are always added:

lrt

The log-rank family of tests

lrw

The log-rank weights (used in calculating the tests).

An additional item depends on the number of covariate groups.
If this is =2:

sup

The supremum or Renyi family of tests

and if this is >2:

tft

Tests for trend. This is given as a list, with the statistics and the scores used.
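
For example, assuming x is a ten object on which comp has already been called, these can be read back with attr:

## assuming 'x' is a ten object on which comp() has been called
attr(x, "lrt")   # the log-rank family of tests
attr(x, "lrw")   # the weights used
attr(x, "sup")   # supremum (Renyi) tests (two covariate groups)
attr(x, "tft")   # tests for trend (more than two covariate groups)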

Note

Regarding the Fleming-Harrington weights:

  • p = q = 0 gives the log-rank test, i.e. W=1

  • p=1, q=0 gives a version of the Mann-Whitney-Wilcoxon test (tests if the population distributions are identical)

  • p=0, q>0 gives more weight to differences later on

  • p>0, q=0 gives more weight to differences early on

The example using alloauto data illustrates this. Here the log-rank statistic has a p-value of around 0.5, as the late advantage of allogeneic transplants is offset by the high early mortality. However, using Fleming-Harrington weights of p=0, q=0.5, emphasising differences later in time, gives a p-value of 0.04.
Stratified models (stratTen) are not yet supported.

References

Gehan A (1965). A Generalized Wilcoxon Test for Comparing Arbitrarily Singly-Censored Samples. Biometrika 52(1/2):203–23. http://www.jstor.org/stable/2333825

Tarone RE, Ware J (1977). On Distribution-Free Tests for Equality of Survival Distributions. Biometrika 64(1):156–60. http://www.jstor.org/stable/2335790

Peto R, Peto J (1972). Asymptotically Efficient Rank Invariant Test Procedures. Journal of the Royal Statistical Society 135(2):186–207. http://www.jstor.org/stable/2344317

Fleming TR, Harrington DP, O'Sullivan M (1987). Supremum Versions of the Log-Rank and Generalized Wilcoxon Statistics. Journal of the American Statistical Association 82(397):312–20. http://www.jstor.org/stable/2289169

Billingsley P (1999). Convergence of Probability Measures. New York: John Wiley & Sons. http://dx.doi.org/10.1002/9780470316962

Examples

## Two covariate groups
library(survival)
library(survMisc)
data("leukemia", package="survival")
f1 <- survfit(Surv(time, status) ~ x, data=leukemia)
comp(ten(f1))
## K&M 2nd ed. Example 7.2, Table 7.2, pp 209--210.
data("kidney", package="KMsurv")
t1 <- ten(Surv(time=time, event=delta) ~ type, data=kidney)
comp(t1, p=c(0, 1, 1, 0.5, 0.5), q=c(1, 0, 1, 0.5, 2))
## see the weights used
attributes(t1)$lrw
## supremum (Renyi) test; two-sided; two covariate groups
## K&M 2nd ed. Example 7.9, pp 223--226.
data("gastric", package="survMisc")
g1 <- ten(Surv(time, event) ~ group, data=gastric)
comp(g1)
## Three covariate groups
## K&M 2nd ed. Example 7.4, pp 212-214.
data("bmt", package="KMsurv")
b1 <- ten(Surv(time=t2, event=d3) ~ group, data=bmt)
comp(b1, p=c(1, 0, 1), q=c(0, 1, 1))
## Tests for trend
## K&M 2nd ed. Example 7.6, pp 217-218.
data("larynx", package="KMsurv")
l1 <- ten(Surv(time, delta) ~ stage, data=larynx)
comp(l1)
attr(l1, "tft")
### see effect of F-H test
data("alloauto", package="KMsurv")
a1 <- ten(Surv(time, delta) ~ type, data=alloauto)
comp(a1, p=c(0, 1), q=c(1, 1))

