power: Simulated power

View source: R/power.r

power {depower}  R Documentation

Simulated power

Description

A method to calculate power for objects returned by sim_log_lognormal(), sim_nb(), and sim_bnb().

Usage

power(data, ..., alpha = 0.05, list_column = FALSE, ncores = 1L)

Arguments

data

(depower)
The simulated data returned by sim_log_lognormal(), sim_nb(), or sim_bnb().

...

(functions)
The function(s) used to perform the test. If empty, a default test function is selected based on class(data). Names are used for labeling and should be unique. If a name is empty, the call coerced to character is used as the name. See 'Details'.

alpha

(Scalar numeric: 0.05; (0, 1))
The expected probability of rejecting a single null hypothesis when it is actually true. See 'Details'.

list_column

(Scalar logical: FALSE)
If TRUE, the data and result list-columns are included in the returned data frame. If FALSE (default), the data and result list-columns are not included in the returned data frame.

ncores

(Scalar integer: 1L; [1, Inf))
The number of cores (number of worker processes) to use. Do not set greater than the value returned by parallel::detectCores(). May be helpful when the number of parameter combinations is large and nsims is large.

Details

Power is calculated as the proportion of hypothesis tests that result in a p-value less than or equal to alpha, e.g.

sum(p <= alpha, na.rm = TRUE) / nsims

Power is defined as the expected probability of rejecting the null hypothesis for a chosen value of the unknown effect. In a multiple comparisons scenario, power is defined as the marginal power, which is the expected power of the test for each individual null hypothesis assumed to be false.

Other forms of power under the multiple comparisons scenario include disjunctive or conjunctive power. Disjunctive power is defined as the expected probability of correctly rejecting one or more null hypotheses. Conjunctive power is defined as the expected probability of correctly rejecting all null hypotheses. In the simplest case, and where all hypotheses are independent, if the marginal power is defined as \pi and m is the number of null hypotheses assumed to be false, then disjunctive power may be calculated as 1 - (1 - \pi)^m and conjunctive power may be calculated as \pi^m. Disjunctive power tends to decrease with increasingly correlated hypotheses and conjunctive power tends to increase with increasingly correlated hypotheses.
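Under the independence assumption above, both conversions are straightforward to compute from the marginal power (a minimal base-R sketch with illustrative values; this is not output from power()):

```r
# Convert marginal power to disjunctive and conjunctive power,
# assuming m independent hypotheses each tested with marginal power `marginal`.
marginal <- 0.80  # marginal power of each individual test
m <- 3            # number of null hypotheses assumed to be false

# Probability of correctly rejecting one or more null hypotheses
disjunctive <- 1 - (1 - marginal)^m  # 0.992

# Probability of correctly rejecting all null hypotheses
conjunctive <- marginal^m            # 0.512
```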

Argument ...

... are the name-value pairs for the functions used to perform the tests. If not named, the functions coerced to character will be used for the name-value pairs. As is typical in non-standard evaluation, ... accepts bare functions and converts them to a list of expressions. Each element in this list is validated as a call and then evaluated on the simulated data. A base::call() is simply an unevaluated function call. Below are some examples of specifying ... in power().

# Examples of specifying ... in power()
data <- sim_nb(
  n1 = 10,
  mean1 = 10,
  ratio = c(1.6, 2),
  dispersion1 = 2,
  dispersion2 = 2,
  nsims = 200
)

# ... is empty, so an appropriate default function will be provided
power(data)

# This is equivalent to leaving ... empty
power(data, "NB Wald test" = wald_test_nb())

# If not named, "wald_test_nb()" will be used to label the function
power(data, wald_test_nb())

# You can specify any parameters in the call. The data argument
# will automatically be inserted or overwritten.
data |>
  power("NB Wald test" = wald_test_nb(equal_dispersion = TRUE, link = "log"))

# Multiple functions may be used.
data |>
  power(
    wald_test_nb(link = "log"),
    wald_test_nb(link = "sqrt"),
    wald_test_nb(link = "squared"),
    wald_test_nb(link = "identity")
  )

# Just like functions in a pipe, the parentheses are required.
# This will error because wald_test_nb is missing parentheses.
try(power(data, wald_test_nb))

In most cases*, any user-created test function may be utilized in ... if the following conditions are satisfied:

  1. The function has an argument named data, which accepts a list whose first and second elements contain the simulated data.

  2. The return object is a list with element p for the p-value of the hypothesis test.

Validate with test cases beforehand.
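As a sketch of both conditions, a user-created test function might look like the following (my_wilcoxon_test is a hypothetical name, not part of depower; stats::wilcox.test is used purely for illustration):

```r
# Hypothetical user-created test function for use in `...` of power().
# Condition 1: accepts `data`, a list whose first and second elements
#              hold the two simulated samples.
# Condition 2: returns a list with element `p` for the p-value.
my_wilcoxon_test <- function(data) {
  result <- stats::wilcox.test(x = data[[1]], y = data[[2]])
  list(p = result$p.value)
}

# Quick validation on toy data before use
my_wilcoxon_test(list(rnorm(10), rnorm(10, mean = 2)))
```

It could then be supplied as power(data, "Wilcoxon" = my_wilcoxon_test()), with the parentheses included as shown in the examples above.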

*Simulated data of class log_lognormal_mixed_two_sample has both independent and dependent data. To ensure the appropriate test function is used, power.log_lognormal_mixed_two_sample() allows only t_test_welch() and t_test_paired() in .... Each will be evaluated on the simulated data according to column data$cor. If one or both of these functions are not included in ..., the corresponding default function will be used automatically. If any other test function is included, an error will be returned.

Argument alpha

\alpha is known as the type I assertion probability and is defined as the expected probability of rejecting a null hypothesis when it is actually true. \alpha is compared with the p-value and used as the decision boundary for rejecting or not rejecting the null hypothesis.

The family-wise error rate is the expected probability of making one or more type I assertions among a family of hypotheses. Using Bonferroni's method, \alpha is chosen for the family of hypotheses and then divided by the number of tests performed (m), so each individual hypothesis is tested at \alpha / m. For example, if you plan to conduct 30 hypothesis tests and want to control the family-wise error rate to no greater than \alpha = 0.05, you would set alpha = 0.05/30.

The per-family error rate is the expected number of type I assertions among a family of hypotheses. If you calculate power for the scenario where you perform 1,000 hypothesis tests and want to control the per-family error rate to no greater than 10 type I assertions, you would choose alpha = 10/1000. This implicitly assumes all 1,000 hypotheses are truly null. Alternatively, if you assume 800 of these hypotheses are truly null and 200 are not, alpha = 10/1000 would control the per-family error rate to no greater than 8 type I assertions. If it is acceptable to keep the per-family error rate at 10, setting alpha = 10/800 would provide greater marginal power than the previous scenario.
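The alpha values in these two scenarios reduce to simple arithmetic (illustrative numbers taken from the scenarios described here, to be passed as the alpha argument of power()):

```r
# Bonferroni: control the family-wise error rate at 0.05 across m = 30 tests;
# each individual test is performed at this level.
alpha_fwer <- 0.05 / 30

# Per-family error rate: allow up to 10 expected type I assertions
# among 1,000 tests assumed truly null.
alpha_pfer <- 10 / 1000  # 0.01
```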

These two methods assume that the distribution of p-values for the truly null hypotheses is uniform(0,1), but they remain valid under various other testing scenarios (such as dependent tests). Other multiple-comparison methods, such as FDR control, are common in practice but do not fit directly into this power simulation framework.

Column nsims

The final number of valid simulations per unique set of simulation parameters may be less than the original number requested. This may occur when the test results in a missing p-value. For wald_test_bnb(), pathological MLE estimates, generally from small sample sizes and very small dispersions, may result in a negative estimated standard deviation of the ratio. Thus the test statistic and p-value would not be calculated. Note that simulated data from sim_nb() and sim_bnb() may also reduce nsims during the data simulation phase.

The nsims column in the return data frame is the effective number of simulations for power results.

Value

A data frame with the following columns appended to the data object:

Column Name Description
alpha Type I assertion probability.
test Name-value pair of the function and statistical test: c(names(...) = as.character(...)).
data List-column of simulated data.
result List-column of test results.
power Power of the test for corresponding parameters.

For power(list_column = FALSE), columns data and result are excluded from the data frame.


See Also

plot.depower()

Examples

#----------------------------------------------------------------------------
# power() examples
#----------------------------------------------------------------------------
library(depower)

# Power for independent two-sample t-Test
set.seed(1234)
data <- sim_log_lognormal(
  n1 = 20,
  n2 = 20,
  ratio = c(1.2, 1.4),
  cv1 = 0.4,
  cv2 = 0.4,
  cor = 0,
  nsims = 1000
)

# Welch's t-test is used by default
power(data)

# But you can specify anything else that is needed
power(
  data = data,
  "Welch's t-Test" = t_test_welch(alternative = "greater"),
  alpha = 0.01
)

# Power for dependent two-sample t-Test
set.seed(1234)
sim_log_lognormal(
  n1 = 20,
  n2 = 20,
  ratio = c(1.2, 1.4),
  cv1 = 0.4,
  cv2 = 0.4,
  cor = 0.5,
  nsims = 1000
) |>
  power()

# Power for mixed-type two-sample t-Test
set.seed(1234)
sim_log_lognormal(
  n1 = 20,
  n2 = 20,
  ratio = c(1.2, 1.4),
  cv1 = 0.4,
  cv2 = 0.4,
  cor = c(0, 0.5),
  nsims = 1000
) |>
  power()

# Power for one-sample t-Test
set.seed(1234)
sim_log_lognormal(
  n1 = 20,
  ratio = c(1.2, 1.4),
  cv1 = 0.4,
  nsims = 1000
) |>
  power()

# Power for independent two-sample NB test
set.seed(1234)
sim_nb(
  n1 = 10,
  mean1 = 10,
  ratio = c(1.6, 2),
  dispersion1 = 2,
  dispersion2 = 2,
  nsims = 200
) |>
  power()

# Power for BNB test
set.seed(1234)
sim_bnb(
  n = 10,
  mean1 = 10,
  ratio = c(1.4, 1.6),
  dispersion = 10,
  nsims = 200
) |>
  power()


depower documentation built on April 3, 2025, 9:23 p.m.