power.glm.fixed.a0 (R Documentation)
Power/type I error calculation for generalized linear models with fixed a_0 using power priors
power.glm.fixed.a0(
data.type,
data.link = "",
data.size,
n = 1,
borrow.treat = FALSE,
treat.assign.prob = 0.5,
historical = list(),
nullspace.ineq = ">",
x.samples = matrix(),
samp.prior.beta,
samp.prior.var = 0,
lower.limits = rep(-100, 50),
upper.limits = rep(100, 50),
slice.widths = rep(1, 50),
delta = 0,
gamma = 0.95,
nMC = 10000,
nBI = 250,
N = 10000,
approximate = FALSE,
nNR = 10000,
tol = 1e-05
)
data.type
Character string specifying the type of response. The options are "Normal", "Bernoulli", "Binomial", "Poisson" and "Exponential".
data.link
Character string specifying the link function. The options are "Logistic", "Probit", "Log", "Identity-Positive", "Identity-Probability" and "Complementary Log-Log". Does not apply if data.type is "Normal".
data.size
Sample size of the simulated datasets.
n
(For binomial data only) vector of integers specifying the number of subjects who have a particular value of the covariate vector. If the data is binary and all covariates are discrete, collapsing Bernoulli data into a binomial structure can make the slice sampler much faster. The sum of n should be equal to data.size.
borrow.treat
Logical value indicating whether the historical information is used to inform the treatment effect parameter. The default value is FALSE. If TRUE, the first column of the historical covariate matrix must be the treatment indicator. If FALSE, the historical covariate matrix must NOT have the treatment indicator, since the historical data is assumed to be from the control group only.
treat.assign.prob
Probability of being assigned to the treatment group. The default value is 0.5. Only applies if borrow.treat is FALSE.
historical
(Optional) list of historical dataset(s). Each historical dataset is stored in a list which contains three named elements: y0 (vector of responses), x0 (matrix of covariates), and a0 (a number between 0 and 1 indicating the discounting parameter value). For binomial data, an additional element giving the number of subjects with each value of the covariate vector is required.
nullspace.ineq
Character string specifying the inequality of the null hypothesis. The options are ">" and "<". If ">" is specified, the null hypothesis is H_0: \beta_1 \ge \delta; if "<" is specified, the null hypothesis is H_0: \beta_1 \le \delta. The default is ">".
x.samples
(Only applies when there is no historical dataset) matrix of possible values of covariates from which covariate vectors are sampled with replacement.
samp.prior.beta
Matrix of possible values of \beta to sample (with replacement) from. Each row is a possible \beta vector (a realization from the sampling prior for \beta).
samp.prior.var
Vector of possible values of \sigma^2 to sample (with replacement) from. Only applies if data.type is "Normal".
lower.limits
Vector of lower limits for parameters to be used by the slice sampler. The length of the vector should be equal to the total number of parameters, i.e. P+1 where P is the number of covariates. The default is -100 for all parameters (may not be appropriate for all situations). Does not apply if data.type is "Normal".
upper.limits
Vector of upper limits for parameters to be used by the slice sampler. The length of the vector should be equal to the total number of parameters, i.e. P+1 where P is the number of covariates. The default is 100 for all parameters (may not be appropriate for all situations). Does not apply if data.type is "Normal".
slice.widths
Vector of initial slice widths for parameters to be used by the slice sampler. The length of the vector should be equal to the total number of parameters, i.e. P+1 where P is the number of covariates. The default is 1 for all parameters (may not be appropriate for all situations). Does not apply if data.type is "Normal".
delta
Prespecified constant that defines the boundary of the null hypothesis. The default is zero.
gamma
Posterior probability threshold for rejecting the null. The null hypothesis is rejected if the posterior probability of the alternative hypothesis is greater than gamma. The default is 0.95. A minimal sketch of this decision rule follows the argument list.
nMC
Number of iterations (excluding burn-in samples) for the slice sampler or Gibbs sampler. The default is 10,000.
nBI
Number of burn-in samples for the slice sampler or Gibbs sampler. The default is 250.
N
Number of simulated datasets to generate. The default is 10,000.
approximate
Logical value indicating whether the approximation method based on asymptotic theory is used. The default is FALSE. If TRUE, an approximation method based on the Newton-Raphson algorithm (assuming canonical links) is used. This feature helps users quickly obtain a rough estimate of the sample size required for the desired level of power or type I error rate.
nNR
(Only applies if approximate = TRUE) maximum number of iterations of the Newton-Raphson algorithm. The default value is 10,000.
tol
(Only applies if approximate = TRUE) absolute tolerance of the Newton-Raphson algorithm. The default value is 1e-05.
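The decision rule implied by delta, gamma and nullspace.ineq can be illustrated with a minimal R sketch. This is not part of the function's interface; post.beta1 below is a hypothetical vector standing in for posterior samples of \beta_1 from a single simulated dataset, and the ">" null hypothesis H_0: \beta_1 \ge \delta is assumed.

# Hypothetical posterior draws of beta_1 for one simulated dataset
post.beta1 <- rnorm(10000, mean = -1, sd = 0.5)
delta <- 0
gamma <- 0.95
# Reject H0 when the posterior probability of the alternative (beta_1 < delta)
# exceeds gamma; the power/type I error estimate is the proportion of the N
# simulated datasets for which H0 is rejected.
reject <- mean(post.beta1 < delta) > gamma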
If historical datasets are provided, the algorithm samples with replacement from the historical covariates to construct the simulated datasets. Otherwise, the algorithm samples with replacement from x.samples. One of the arguments historical and x.samples must be provided.
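The following is a minimal sketch of an x.samples matrix when no historical dataset is available. The covariate values and distributions are hypothetical, and it is assumed here (not stated on this page) that the intercept and treatment indicator are handled by the function and therefore not included in x.samples.

# Hypothetical pool of covariate vectors to be sampled with replacement:
# one continuous covariate and one binary covariate, 200 candidate rows
x.samples <- cbind(rnorm(200, mean = 50, sd = 10),
                   rbinom(200, size = 1, prob = 0.3))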
The sampling prior for the treatment parameter can be generated from a normal distribution (see examples). For example, suppose one wants to compute the power for the hypotheses H_0: \beta_1 \ge 0 and H_1: \beta_1 < 0. To approximate the sampling prior for \beta_1, one can simply sample from a normal distribution with negative mean, so that the mass of the prior falls in the alternative space. Conversely, to compute the type I error rate, one can sample from a normal distribution with positive mean, so that the mass of the prior falls in the null space.
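As a minimal sketch of this idea (the means and standard deviation are illustrative, mirroring the example at the bottom of this page):

# Mass in the alternative space (beta_1 < 0): the returned value is a power
samp.prior.beta1.power <- rnorm(100, mean = -3, sd = 1)
# Mass in the null space (beta_1 >= 0): the returned value is a type I error rate
samp.prior.beta1.t1e <- rnorm(100, mean = 3, sd = 1)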
The sampling prior for the other parameters can be generated by using the glm.fixed.a0 function with current.data set to FALSE. The posterior samples based on only historical data can be used as a discrete approximation to the sampling prior.
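A sketch of this approach is given below. It assumes that glm.fixed.a0 (documented separately) accepts the arguments shown and returns posterior samples of \beta; consult its help page for the exact interface and return structure. Here historical refers to a list of historical datasets such as the one constructed in the example at the bottom of this page.

# Fit the model to the historical data only (current.data = FALSE), so the
# resulting posterior samples of beta can serve as a discrete approximation to
# the sampling prior for the parameters other than the treatment effect.
hist.fit <- glm.fixed.a0(data.type = "Bernoulli", data.link = "Logistic",
                         historical = historical,
                         nMC = 10000, nBI = 250, current.data = FALSE)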
samp.prior.var is necessary for generating normally distributed data.
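For example, a sampling prior for the variance could be represented by a vector of draws. This is a minimal sketch: the distribution and its parameters are illustrative assumptions, and it assumes (as described in the argument list above) that samp.prior.var holds possible values of \sigma^2.

# Hypothetical sampling prior for sigma^2 when data.type is "Normal":
# possible variance values that are sampled with replacement when generating the data
samp.prior.var <- 1 / rgamma(100, shape = 2, rate = 2)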
If data.type is "Normal", the response y_i is assumed to follow N(x_i'\beta, \tau^{-1}), where x_i is the vector of covariates for subject i. Each historical dataset D_{0k} is assumed to have a different precision parameter \tau_k. The initial prior for \tau is the Jeffreys prior, \tau^{-1}, and the initial prior for \tau_k is \tau_k^{-1}. The initial prior for \beta is the uniform improper prior. Posterior samples are obtained through Gibbs sampling.
For all other data types, posterior samples are obtained through slice sampling. The default lower limits for the parameters are -100. The default upper limits for the parameters are 100. The default slice widths for the parameters are 1. The defaults may not be appropriate for all situations, and the user can specify the appropriate limits and slice width for each parameter.
If a sampling prior with support in the null space is used, the value returned is a Bayesian type I error rate. If a sampling prior with support in the alternative space is used, the value returned is a Bayesian power.
Because running power.glm.fixed.a0() and power.glm.random.a0() is potentially time-consuming, an approximation method based on asymptotic theory has been implemented for the model with fixed a_0. To determine the sample size needed for the desired power, the user can start with the approximation to obtain a rough estimate, using power.glm.fixed.a0() with approximate=TRUE, and then run the full simulation-based calculation near that estimate.
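For instance, the approximation can be run as a quick first pass. This is a sketch: it reuses the historical and samp.prior.beta objects constructed in the example at the bottom of this page, and the remaining values are illustrative.

# Rough power estimate from the asymptotic approximation
# (Newton-Raphson based, assuming canonical links; no MCMC is run)
rough <- power.glm.fixed.a0(data.type = "Bernoulli", data.link = "Logistic",
                            data.size = 100, historical = historical,
                            samp.prior.beta = samp.prior.beta,
                            delta = 0, approximate = TRUE, N = 10000)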
The function returns an S3 object with a summary method. Power or type I error is returned, depending on the sampling prior used. The posterior probabilities of the alternative hypothesis are returned. The average posterior mean of \beta and its corresponding bias are returned. If data.type is "Normal", the average posterior means of \tau and the \tau_k's (if historical data is given) are also returned. The first column of \beta contains posterior samples of the intercept. The second column contains posterior samples of \beta_1, the parameter for the treatment indicator.
Chen, Ming-Hui, et al. "Bayesian design of noninferiority trials for medical devices using historical data." Biometrics 67.3 (2011): 1163-1170.
Neal, Radford M. "Slice sampling." The Annals of Statistics 31.3 (2003): 705-767.
glm.fixed.a0
library(BayesPPD)  # package providing power.glm.fixed.a0
set.seed(1)        # for reproducibility of the simulated historical data

data.type <- "Bernoulli"
data.link <- "Logistic"
data.size <- 100

# Simulate two historical datasets
p <- 3
historical <- list(list(y0=rbinom(data.size, size=1, prob=0.2),
                        x0=matrix(rnorm(p*data.size), ncol=p, nrow=data.size), a0=0.2),
                   list(y0=rbinom(data.size, size=1, prob=0.5),
                        x0=matrix(rnorm(p*data.size), ncol=p, nrow=data.size), a0=0.3))
# Generate sampling priors
# The null hypothesis here is H0: beta_1 >= 0. To calculate power,
# we can provide samples of beta_1 such that the mass of beta_1 < 0.
# To calculate type I error, we can provide samples of beta_1 such that
# the mass of beta_1 >= 0.
samp.prior.beta1 <- rnorm(100, mean=-3, sd=1)
# Here, mass is put on the alternative region, so power is calculated.
samp.prior.beta <- cbind(rnorm(100), samp.prior.beta1, matrix(rnorm(100*p), 100, p))
nMC <- 100 # nMC should be larger in practice
nBI <- 50
N <- 5 # N should be larger in practice
result <- power.glm.fixed.a0(data.type=data.type, data.link=data.link,
data.size=data.size, historical=historical,
samp.prior.beta=samp.prior.beta,
delta=0, nMC=nMC, nBI=nBI, N=N)
summary(result)
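To estimate the Bayesian type I error rate instead, the same call can be repeated with a sampling prior for beta_1 whose mass lies in the null region (beta_1 >= 0), for example:

# Mass on the null region, so the returned value is a type I error rate
samp.prior.beta1 <- rnorm(100, mean=3, sd=1)
samp.prior.beta <- cbind(rnorm(100), samp.prior.beta1, matrix(rnorm(100*p), 100, p))
result.t1e <- power.glm.fixed.a0(data.type=data.type, data.link=data.link,
                                 data.size=data.size, historical=historical,
                                 samp.prior.beta=samp.prior.beta,
                                 delta=0, nMC=nMC, nBI=nBI, N=N)
summary(result.t1e)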