# powerRM: Power analysis of tests of invariance of item parameters between two groups of persons in binary Rasch model. In tcl: Testing in Conditional Likelihood Context


## Power analysis of tests of invariance of item parameters between two groups of persons in binary Rasch model

### Description

Returns the power of the Wald (W), likelihood ratio (LR), Rao score (RS), and gradient (GR) test given the probability of the error of the first kind \alpha, the sample size, and a deviation from the hypothesis to be tested. The hypothesis assumes equality of the item parameters in the Rasch model between two predetermined groups of persons. The alternative states that at least one of the parameters differs between the two groups.

### Usage

powerRM(
  alpha = 0.05,
  n_total,
  persons1 = rnorm(10^6),
  persons2 = rnorm(10^6),
  local_dev
)


### Arguments

- `alpha`: Probability of the error of the first kind.
- `n_total`: Total sample size for which power shall be determined.
- `persons1`: A vector of person parameters in group 1 (drawn from a specified distribution). By default, 10^6 parameters are drawn at random from the standard normal distribution. The larger this number, the more accurate are the computations. See Details.
- `persons2`: A vector of person parameters in group 2 (drawn from a specified distribution). By default, 10^6 parameters are drawn at random from the standard normal distribution. The larger this number, the more accurate are the computations. See Details.
- `local_dev`: A list of two vectors containing the item parameters for the two person groups, representing a deviation from the hypothesis to be tested locally per item.

### Details

In general, the power of the tests is determined from the assumption that the approximate distributions of the four test statistics are from the family of noncentral \chi^2 distributions with df equal to the number of items minus 1 and noncentrality parameter \lambda. The latter depends on a scenario of deviation from the hypothesis to be tested and a specified sample size. Given the probability of the error of the first kind \alpha the power of the tests can be determined from \lambda. More details about the distributions of the test statistics and the relationship between \lambda, power, and sample size can be found in Draxler and Alexandrowicz (2015).

As regards the concept of sample size a distinction between informative and total sample size has to be made since the power of the tests depends only on the informative sample size. In the conditional maximum likelihood context, the responses of persons with minimum or maximum person score are completely uninformative. They do not contribute to the value of the test statistic. Thus, the informative sample size does not include these persons. The total sample size is composed of all persons.
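To illustrate the distinction, the informative sample size simply excludes persons with the minimum or maximum raw score. A minimal Python sketch (the package itself works in R; the score frequencies below are made up for illustration, assuming a 5-item test):

```python
# Hypothetical frequency table of person raw scores on a 5-item test;
# 0 and 5 are the minimum and maximum possible scores.
score_counts = {0: 40, 1: 210, 2: 260, 3: 240, 4: 170, 5: 80}

n_total = sum(score_counts.values())

# Persons with minimum or maximum score contribute nothing to the
# conditional likelihood, so they are excluded from the informative count.
n_informative = n_total - score_counts[0] - score_counts[5]
proportion_informative = n_informative / n_total  # here 880 / 1000 = 0.88
```

Only the 880 informative persons drive the value of the test statistic; the remaining 120 enter the total sample size but not the power.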

In particular, the determination of \lambda and thus of the power of the tests is based on a simple Monte Carlo approach. Data (responses of a large number of persons to a number of items) are generated given a user-specified scenario of a deviation from the hypothesis to be tested. A scenario of a deviation is given by a choice of the item parameters and the person parameters (to be drawn randomly from a specified distribution) for each of the two groups. Such a scenario may be called a local deviation since deviations can be specified locally for each item.

The relative group sizes are determined by the choice of the number of person parameters for each of the two groups. For instance, by default 10^6 person parameters are selected randomly for each group. In this case, it is implicitly assumed that the two groups of persons are of equal size. The user can specify the relative group sizes by choosing the lengths of the arguments persons1 and persons2 appropriately. Note that the relative group sizes do have an impact on the power and sample size of the tests.

The next step is to compute a test statistic T (Wald, LR, score, or gradient) from the simulated data. The observed value t of the test statistic is then divided by the informative sample size n_{infsim} observed in the simulated data. This yields the so-called global deviation e = t / n_{infsim}, i.e., the chosen scenario of a deviation from the hypothesis to be tested represented by a single number.

The power of the tests can be determined given a user-specified total sample size denoted by n_total. The noncentrality parameter \lambda can then be expressed by \lambda = n_{total} * (n_{infsim} / n_{totalsim}) * e, where n_{totalsim} denotes the total number of persons in the simulated data and n_{infsim} / n_{totalsim} is the proportion of informative persons in the simulated data. Let q_{\alpha} be the 1 - \alpha quantile of the central \chi^2 distribution with df equal to the number of items minus 1. Then,

power = 1 - F_{df, \lambda} (q_{\alpha}),

where F_{df, \lambda} is the cumulative distribution function of the noncentral \chi^2 distribution with df equal to the number of items minus 1 and \lambda = n_{total} * (n_{infsim} / n_{totalsim}) * e. Thereby, it is assumed that n_{total} is composed of a frequency distribution of person scores that is proportional to the observed distribution of person scores in the simulated data. The same holds true with respect to the relative group sizes, i.e., the relative frequencies of the two person groups in a sample of size n_{total} are assumed to be equal to the relative frequencies of the two groups in the simulated data.
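The power formula can be evaluated directly once df and \lambda are known. The following Python sketch (illustrative only; the package performs these computations in R) implements the central and noncentral \chi^2 CDFs with standard textbook series and reproduces the Wald-test power reported in the Examples section below, where df = 4 and \lambda = 12.619 with \alpha = 0.05:

```python
import math

def chi2_cdf(x, df):
    """CDF of the central chi-square: regularized lower incomplete
    gamma P(df/2, x/2), computed by its standard series expansion."""
    if x <= 0:
        return 0.0
    a, z = df / 2.0, x / 2.0
    term = 1.0 / a
    s = term
    n = 0
    while term > 1e-16 * s:
        n += 1
        term *= z / (a + n)
        s += term
    return s * math.exp(-z + a * math.log(z) - math.lgamma(a))

def ncx2_cdf(x, df, lam):
    """CDF of the noncentral chi-square, written as a Poisson(lam/2)
    mixture of central chi-square CDFs with df, df+2, df+4, ... degrees."""
    s = 0.0
    for j in range(1000):
        log_w = -lam / 2.0 + j * math.log(lam / 2.0) - math.lgamma(j + 1)
        w = math.exp(log_w)
        s += w * chi2_cdf(x, df + 2 * j)
        if j > lam and w < 1e-18:   # Poisson tail has become negligible
            break
    return s

def chi2_quantile(p, df):
    """p-quantile of the central chi-square, by bisection on the CDF."""
    lo, hi = 0.0, 1e3
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if chi2_cdf(mid, df) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

df = 4           # number of items minus 1 (5 items in the example)
lam = 12.619     # noncentrality parameter reported for the Wald test
alpha = 0.05

q = chi2_quantile(1 - alpha, df)     # q_alpha, roughly 9.488
power = 1.0 - ncx2_cdf(q, df, lam)   # roughly 0.824, as in the example
```

In R, the same quantities are available directly as `qchisq(1 - alpha, df)` and `1 - pchisq(q, df, ncp = lam)`.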

Note that in this approach the data have to be generated only once. There are no replications needed. Thus, the procedure is computationally not very time-consuming.

Since e is determined from the value of the test statistic observed in the simulated data, it has to be treated as a realized value of a random variable E. The same holds true for \lambda as well as for the power of the tests. Thus, the power is a realized value of a random variable that shall be denoted by P. Consequently, the (realized) value of the power of the tests need not be equal to the exact power that follows from the user-specified n_{total}, \alpha, and the chosen item parameters used for the simulation of the data. If the CML estimates of these parameters computed from the simulated data are close to the predetermined parameters, the power of the tests will be close to the exact value. This will generally be the case if the number of person parameters used for simulating the data is large, e.g., 10^5 or even 10^6 persons. In such cases, the possible random error of the computation procedure based on the simulated data may no longer be of practical relevance. That is why a large number of persons (for the simulation process) is generally recommended.

For theoretical reasons, the random error involved in computing the power of the tests can be approximated quite well. A suitable approach is the well-known delta method. Basically, it is a Taylor polynomial of first order, i.e., a linear approximation of a function. According to it, the variance of a function of a random variable can be linearly approximated by multiplying the variance of this random variable with the square of the first derivative of the respective function. In the present problem, the variance of the test statistic T is (approximately) given by the variance of a noncentral \chi^2 distribution with df equal to the number of free item parameters and noncentrality parameter \lambda. Thus, Var(T) = 2 (df + 2 \lambda), with \lambda = t. Since the global deviation e = (1 / n_{infsim}) * t, it follows for the variance of the corresponding random variable E that Var(E) = (1 / n_{infsim})^2 * Var(T). The power of the tests is a function of e, given by 1 - F_{df, \lambda} (q_{\alpha}), where \lambda = n_{total} * (n_{infsim} / n_{totalsim}) * e and df equals the number of free item parameters. Then, by the delta method, one obtains for the variance of P

Var(P) = Var(E) * (F'_{df, \lambda} (q_{\alpha}))^2,

where F'_{df, \lambda} denotes the derivative of F_{df, \lambda} with respect to e. This derivative is determined numerically and evaluated at e using the package numDeriv. The square root of Var(P) is then used to quantify the random error of the suggested Monte Carlo computation procedure. It is referred to as the Monte Carlo error of power.
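The delta-method step itself can be checked on a toy example independent of the package internals: for X ~ N(\mu, \sigma^2) and g(x) = x^2 (both made up for illustration), the first-order approximation Var(g(X)) ≈ g'(\mu)^2 * Var(X) agrees closely with a Monte Carlo estimate when \sigma is small. A Python sketch:

```python
import random

random.seed(1)

# Toy illustration of the delta method (not the package's computation):
# X ~ N(mu, sigma^2) and g(x) = x^2, with mu and sigma chosen arbitrarily.
mu, sigma = 2.0, 0.05

def g(x):
    return x * x

def g_prime(x):
    # First derivative of g, used by the linear (delta-method) approximation
    return 2.0 * x

# Delta-method approximation: Var(g(X)) is about g'(mu)^2 * Var(X).
var_delta = g_prime(mu) ** 2 * sigma ** 2   # 16 * 0.0025 = 0.04

# Monte Carlo estimate of Var(g(X)) for comparison.
draws = [g(random.gauss(mu, sigma)) for _ in range(100_000)]
mean_g = sum(draws) / len(draws)
var_mc = sum((d - mean_g) ** 2 for d in draws) / (len(draws) - 1)
```

The two variance figures agree to within sampling error, which is exactly the reasoning used above with E in place of X and the power curve in place of g.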

### Value

A list of results.

- `power`: Power value for each test.
- `MC error of power`: Monte Carlo error of the power computation for each test.
- `global deviation`: Global deviation computed from the simulated data for each test. See Details.
- `local deviation`: CML estimates of the item parameters in both groups of persons obtained from the simulated data, expressing a deviation from the hypothesis to be tested locally per item.
- `person score distribution in group 1`: Relative frequencies of person scores in group 1 observed in the simulated data. Uninformative scores, i.e., the minimum and maximum score, are omitted. Note that the person score distribution also has an influence on the power of the tests.
- `person score distribution in group 2`: Relative frequencies of person scores in group 2 observed in the simulated data. Uninformative scores, i.e., the minimum and maximum score, are omitted. Note that the person score distribution also has an influence on the power of the tests.
- `degrees of freedom`: Degrees of freedom df.
- `noncentrality parameter`: Noncentrality parameter \lambda of the \chi^2 distribution from which the power is determined.
- `call`: The matched call.

### References

Draxler, C. (2010). Sample Size Determination for Rasch Model Tests. Psychometrika, 75(4), 708–724.

Draxler, C., & Alexandrowicz, R. W. (2015). Sample Size Determination Within the Scope of Conditional Maximum Likelihood Estimation with Special Focus on Testing the Rasch Model. Psychometrika, 80(4), 897–919.

Draxler, C., Kurz, A., & Lemonte, A. J. (2020). The Gradient Test and its Finite Sample Size Properties in a Conditional Maximum Likelihood and Psychometric Modeling Context. Communications in Statistics - Simulation and Computation, 1–19.

Glas, C. A. W., & Verhelst, N. D. (1995a). Testing the Rasch Model. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch Models: Foundations, Recent Developments, and Applications (pp. 69–95). New York: Springer.

Glas, C. A. W., & Verhelst, N. D. (1995b). Tests of Fit for Polytomous Rasch Models. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch Models: Foundations, Recent Developments, and Applications (pp. 325-352). New York: Springer.

### See Also

sa_sizeRM and post_hocRM.

### Examples

## Not run:
# Numerical example

res <- powerRM(n_total = 130, local_dev = list(c(0, -0.5, 0, 0.5, 1),
                                               c(0, 0.5, 0, -0.5, 1)))

# > res
# $power
#     W    LR    RS    GR
# 0.824 0.840 0.835 0.845
#
# $`MC error of power`
#     W    LR    RS    GR
# 0.002 0.002 0.002 0.002
#
# $`global deviation`
#     W    LR    RS    GR
# 0.118 0.122 0.121 0.124
#
# $`local deviation`
#         Item2 Item3  Item4 Item5
# group1 -0.499 0.005  0.500 1.001
# group2  0.501 0.003 -0.499 1.003
#
# $`person score distribution in group 1`
#     1     2     3     4
# 0.249 0.295 0.269 0.187
#
# $`person score distribution in group 2`
#     1     2     3     4
# 0.249 0.295 0.270 0.186
#
# $`degrees of freedom`
# [1] 4
#
# $`noncentrality parameter`
#      W     LR     RS     GR
# 12.619 13.098 12.937 13.264
#
# $call
# powerRM(n_total = 130, local_dev = list(c(0, -0.5, 0, 0.5, 1),
#                                         c(0, 0.5, 0, -0.5, 1)))

## End(Not run)


tcl documentation built on May 3, 2023, 1:17 a.m.