adjust_power: Adjusting power to assure actual size is within significance...



Description

Monte Carlo experiments are commonly used to evaluate the performance of hypothesis tests and to compare the empirical power of competing tests. High power is desirable, but a difficulty arises when the actual sizes of the competing tests are not comparable. One way to tackle this issue is to adjust the empirical power according to the actual size. This function implements three power adjustment methods.
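
As a rough illustration of the idea, the sketch below applies a simple probit-style adjustment, loosely in the spirit of the probit analysis in Lloyd (2005): the probit of the empirical power is shifted by the gap between the probit of the empirical size and the probit of the nominal level. The unit-slope formula and the nominal level of 0.05 are assumptions of this sketch, and the implementation inside adjust_power() may differ.

# Assumed probit-style size adjustment (illustrative only, not the
# package's exact formula): shift the probit of the empirical power by
# the difference between the probits of the empirical size and the
# nominal level, then map back to the probability scale with pnorm().
probit_adjust <- function(size, power, alpha = 0.05) {
  pnorm(qnorm(power) - qnorm(size) + qnorm(alpha))
}
probit_adjust(size = 0.06, power = 0.8)  # below 0.8, since the test is oversized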

Usage

adjust_power(size, power, method = "ZW")

Arguments

size

the empirical size of a test, i.e., its Type I error rate estimated from the Monte Carlo experiment.

power

the empirical power of a test.

method

the power adjustment method. 'ZW' is the method proposed by Zhang and Wang (2020), 'CYS' is the method proposed by Cavus et al. (2019), and 'probit' is the "method 1: probit analysis" in Lloyd (2005).
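
All three choices of method can be applied to the same empirical size and power and compared side by side. A usage sketch, assuming the tcftt package is installed and using hypothetical size and power values:

library(tcftt)
# Apply each adjustment method to the same hypothetical empirical size
# and power; sapply() returns a named vector of adjusted power values.
sapply(c('ZW', 'CYS', 'probit'),
       function(m) adjust_power(size = 0.06, power = 0.8, method = m))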

Value

the power value after adjustment.

References

Lloyd, C. J. (2005). Estimating test power adjusted for size. Journal of Statistical Computation and Simulation, 75(11), 921-933.

Cavus, M., Yazici, B., and Sezer, A. (2019). Penalized power approach to compare the power of the tests when Type I error probabilities are different. Communications in Statistics - Simulation and Computation, 1-15.

Zhang, H. and Wang, H. (2020). Transformation tests and their asymptotic power in two-sample comparisons. Manuscript in review.

Examples

adjust_power(size = 0.06, power = 0.8, method = 'ZW')
adjust_power(size = 0.06, power = 0.8, method = 'CYS')
adjust_power(size = 0.06, power = 0.8, method = 'probit')
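
As a further hypothetical sketch, adjusted power can be used to compare two competing tests whose actual sizes differ (the size and power values below are made up):

# Two hypothetical competing tests with different empirical sizes:
# adjusting their powers puts them on a comparable footing.
adj_A <- adjust_power(size = 0.07, power = 0.85, method = 'ZW')
adj_B <- adjust_power(size = 0.04, power = 0.80, method = 'ZW')
c(A = adj_A, B = adj_B)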
