powerPCM: Power analysis of tests of invariance of item parameters between two groups of persons in partial credit model

View source: R/powerPCM.R

powerPCM R Documentation

Power analysis of tests of invariance of item parameters between two groups of persons in partial credit model

Description

Returns the power of the Wald (W), likelihood ratio (LR), Rao score (RS), and gradient (GR) tests given the probability of the error of the first kind \alpha, the sample size, and a deviation from the hypothesis to be tested. The hypothesis to be tested assumes equal item-category parameters of the partial credit model between two predetermined groups of persons. The alternative states that at least one of the parameters differs between the two groups.

Usage

powerPCM(
  alpha = 0.05,
  n_total,
  persons1 = rnorm(10^6),
  persons2 = rnorm(10^6),
  local_dev
)

Arguments

alpha

Probability of the error of the first kind.

n_total

Total sample size for which power shall be determined.

persons1

A vector of person parameters in group 1 (drawn from a specified distribution). By default, 10^6 parameters are drawn at random from the standard normal distribution. The larger this number, the more accurate the computations. See Details.

persons2

A vector of person parameters in group 2 (drawn from a specified distribution). By default, 10^6 parameters are drawn at random from the standard normal distribution. The larger this number, the more accurate the computations. See Details.

local_dev

A list consisting of two lists, one referring to group 1 and the other to group 2. Each of the two lists contains one numeric vector per item, i.e., as many vectors as there are items. Each vector contains the free item-category parameters of the respective item. The number of free item-category parameters per item equals the number of categories of the item minus 1.
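For illustration only (a hypothetical specification, smaller than the one in the Examples section), two items with three categories each require one vector of length 2 per item within each group list:

# hypothetical local_dev: two groups, two items, three categories per item
# (hence two free item-category parameters per item)
local_dev <- list(list(c(0, 0), c(-1, 0.5)),   # group 1: item 1, item 2
                  list(c(0, 0), c( 1, 0.5)))   # group 2: item 1, item 2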

Details

In general, the power of the tests is determined from the assumption that the approximate distributions of the four test statistics belong to the family of noncentral \chi^2 distributions with df equal to the number of free item-category parameters and noncentrality parameter \lambda. The latter depends on a scenario of deviation from the hypothesis to be tested and on a specified sample size. Given the probability of the error of the first kind \alpha, the power of the tests can be determined from \lambda. More details about the distributions of the test statistics and the relationship between \lambda, power, and sample size can be found in Draxler and Alexandrowicz (2015).

As regards the concept of sample size, a distinction has to be made between the informative and the total sample size, since the power of the tests depends only on the informative sample size. In the conditional maximum likelihood context, the responses of persons with a minimum or maximum person score are completely uninformative; they do not contribute to the value of the test statistic. Thus, the informative sample size does not include these persons, whereas the total sample size comprises all persons.
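The following toy sketch (not the internal code of powerPCM) illustrates the distinction, assuming a persons-by-items response matrix X with categories coded 0, 1, 2:

set.seed(1)
X <- matrix(sample(0:2, 50 * 5, replace = TRUE), nrow = 50)   # 50 persons, 5 items
person_score <- rowSums(X)
max_score <- 2 * ncol(X)                          # all items in the highest category
informative <- person_score > 0 & person_score < max_score
n_total_obs <- nrow(X)                            # total sample size
n_informative <- sum(informative)                 # persons contributing to the tests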

In particular, the determination of \lambda and of the power of the tests is based on a simple Monte Carlo approach. Data (responses of a large number of persons to a number of items) are generated given a user-specified scenario of a deviation from the hypothesis to be tested. A scenario of a deviation is given by a choice of the item-category parameters and of the person parameters (to be drawn randomly from a specified distribution) for each of the two groups. Such a scenario may be called a local deviation, since deviations can be specified locally for each item-category. The relative group sizes are determined by the number of person parameters chosen for each of the two groups. By default, 10^6 person parameters are drawn at random for each group, which implicitly assumes that the two groups of persons are of equal size. The user can specify the relative group sizes by choosing the lengths of the arguments persons1 and persons2 appropriately (an illustrative call with unequal group sizes is given at the end of the Examples). Note that the relative group sizes do have an impact on the power and sample size of the tests.

The next step is to compute a test statistic T (Wald, LR, score, or gradient) from the simulated data. The observed value t of the test statistic is then divided by the informative sample size n_{infsim} observed in the simulated data. This yields the so-called global deviation e = t / n_{infsim}, i.e., the chosen scenario of a deviation from the hypothesis to be tested represented by a single number. The power of the tests can then be determined for a user-specified total sample size denoted by n_total. The noncentrality parameter \lambda can be expressed as \lambda = n_{total} * (n_{infsim} / n_{totalsim}) * e, where n_{totalsim} denotes the total number of persons in the simulated data and n_{infsim} / n_{totalsim} is the proportion of informative persons in the simulated data. Let q_{\alpha} be the 1 - \alpha quantile of the central \chi^2 distribution with df equal to the number of free item-category parameters. Then,

power = 1 - F_{df, \lambda} (q_{\alpha}),

where F_{df, \lambda} is the cumulative distribution function of the noncentral \chi^2 distribution with df equal to the number of free item-category parameters and \lambda = n_{total} * (n_{infsim} / n_{totalsim}) * e. Thereby, it is assumed that n_{total} is composed of a frequency distribution of person scores that is proportional to the distribution of person scores observed in the simulated data. The same holds true with respect to the relative group sizes, i.e., the relative frequencies of the two person groups in a sample of size n_{total} are assumed to equal the relative frequencies of the two groups in the simulated data.
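A minimal sketch of this final step (illustrative only, not the internal code of powerPCM), using the degrees of freedom and the Wald noncentrality parameter reported in the example below:

alpha <- 0.05
df <- 9                                  # number of free item-category parameters
lambda <- 18.003                         # noncentrality parameter of the Wald test
q_alpha <- qchisq(1 - alpha, df = df)    # critical value under the hypothesis
power <- 1 - pchisq(q_alpha, df = df, ncp = lambda)
round(power, 3)                          # approximately 0.863, cf. the example below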

Note that in this approach the data have to be generated only once; no replications are needed. Thus, the procedure is not very time-consuming computationally.

Since e is determined from the value of the test statistic observed in the simulated data, it has to be treated as a realized value of a random variable E. The same holds true for \lambda as well as for the power of the tests. Thus, the power is a realized value of a random variable that shall be denoted by P. Consequently, the (realized) value of the power of the tests need not be equal to the exact power that follows from the user-specified n_{total}, \alpha, and the chosen item-category parameters used for the simulation of the data. If the CML estimates of these parameters computed from the simulated data are close to the predetermined parameters, the power of the tests will be close to the exact value. This will generally be the case if the number of person parameters used for simulating the data is large, e.g., 10^5 or even 10^6 persons. In such cases, the possible random error of the computation procedure based on the simulated data may no longer be of practical relevance. That is why a large number of persons for the simulation process is generally recommended.

For theoretical reasons, the random error involved in computing the power of the tests can be approximated quite well using the well-known delta method. Basically, it is a first-order Taylor approximation, i.e., a linear approximation of a function. According to it, the variance of a function of a random variable can be approximated by multiplying the variance of that random variable with the square of the first derivative of the function. In the present problem, the variance of the test statistic T is (approximately) given by the variance of a noncentral \chi^2 distribution with df equal to the number of free item-category parameters and noncentrality parameter \lambda. Thus, Var(T) = 2 (df + 2 \lambda), with \lambda = t. Since the global deviation e = (1 / n_{infsim}) * t, it follows for the variance of the corresponding random variable E that Var(E) = (1 / n_{infsim})^2 * Var(T). The power of the tests is a function of e, given by 1 - F_{df, \lambda}(q_{\alpha}) with \lambda = n_{total} * (n_{infsim} / n_{totalsim}) * e and df equal to the number of free item-category parameters. By the delta method, one then obtains for the variance of P

Var(P) = Var(E) * (F'_{df, \lambda} (q_{\alpha}))^2,

where F'_{df, \lambda} is the derivative of F_{df, \lambda} with respect to e. This derivative is determined numerically and evaluated at e using the package numDeriv. The square root of Var(P) is then used to quantify the random error of the suggested Monte Carlo computation procedure. It is called Monte Carlo error of power.
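The following sketch illustrates this delta-method approximation; the values chosen for e, n_infsim, and n_totalsim are hypothetical, and only the use of numDeriv is taken from the description above:

library(numDeriv)

# power as a function of the global deviation e
power_fun <- function(e, n_total, n_infsim, n_totalsim, df, alpha) {
  lambda <- n_total * (n_infsim / n_totalsim) * e
  q_alpha <- qchisq(1 - alpha, df = df)
  1 - pchisq(q_alpha, df = df, ncp = lambda)
}

df <- 9; alpha <- 0.05; n_total <- 200    # as in the example below
e <- 0.10                                 # hypothetical global deviation
n_totalsim <- 2 * 10^6                    # hypothetical total number of simulated persons
n_infsim <- 1.8 * 10^6                    # hypothetical number of informative persons

t_obs <- e * n_infsim                     # observed test statistic, lambda = t
var_T <- 2 * (df + 2 * t_obs)             # variance of the noncentral chi^2 distribution
var_E <- (1 / n_infsim)^2 * var_T
d_power <- grad(power_fun, x = e, n_total = n_total, n_infsim = n_infsim,
                n_totalsim = n_totalsim, df = df, alpha = alpha)
sqrt(var_E * d_power^2)                   # Monte Carlo error of power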

Value

A list of results.

power

Power value for each test.

MC error of power

Monte Carlo error of power computation for each test.

global deviation

Global deviation computed from simulated data for each test. See Details.

local deviation

CML estimates of the free item-category parameters in both groups of persons obtained from the simulated data, expressing the deviation from the hypothesis to be tested locally per item and response category.

person score distribution in group 1

Relative frequencies of person scores in group 1 observed in the simulated data. Uninformative scores, i.e., the minimum and maximum score, are omitted. Note that the person score distribution also has an influence on the power of the tests.

person score distribution in group 2

Relative frequencies of person scores in group 2 observed in the simulated data. Uninformative scores, i.e., the minimum and maximum score, are omitted. Note that the person score distribution also has an influence on the power of the tests.

degrees of freedom

Degrees of freedom df.

noncentrality parameter

Noncentrality parameter \lambda of \chi^2 distribution from which power is determined.

call

The matched call.

References

Draxler, C. (2010). Sample Size Determination for Rasch Model Tests. Psychometrika, 75(4), 708–724.

Draxler, C., & Alexandrowicz, R. W. (2015). Sample Size Determination Within the Scope of Conditional Maximum Likelihood Estimation with Special Focus on Testing the Rasch Model. Psychometrika, 80(4), 897–919.

Draxler, C., Kurz, A., & Lemonte, A. J. (2020). The Gradient Test and its Finite Sample Size Properties in a Conditional Maximum Likelihood and Psychometric Modeling Context. Communications in Statistics - Simulation and Computation, 1–19.

Glas, C. A. W., & Verhelst, N. D. (1995a). Testing the Rasch Model. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch Models: Foundations, Recent Developments, and Applications (pp. 69–95). New York: Springer.

Glas, C. A. W., & Verhelst, N. D. (1995b). Tests of Fit for Polytomous Rasch Models. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch Models: Foundations, Recent Developments, and Applications (pp. 325–352). New York: Springer.

See Also

sa_sizePCM and post_hocPCM.

Examples

## Not run: 
# Numerical example

# free item-category parameters for groups 1 and 2; 5 items with 3 categories each
local_dev <- list(list(c(0, 0), c(-1, 0), c(0, 0), c(1, 0), c(1, 0.5)),
                  list(c(0, 0), c(-1, 0), c(0, 0), c(1, 0), c(0, -0.5)))

res <- powerPCM(alpha = 0.05, n_total = 200, persons1 = rnorm(10^6),
                persons2 = rnorm(10^6), local_dev = local_dev)

# > res
# $power
#     W    LR    RS    GR
# 0.863 0.885 0.876 0.892
#
# $`MC error of power`
#     W    LR    RS    GR
# 0.002 0.002 0.002 0.002
#
# $`global deviation`
#     W    LR    RS    GR
# 0.102 0.107 0.105 0.109
#
# $`local deviation`
#         I1-C2  I2-C1  I2-C2  I3-C1  I3-C2 I4-C1 I4-C2  I5-C1  I5-C2
# group1  0.002 -0.997 -0.993  0.006  0.012 1.002 1.007  1.006  1.508
# group2 -0.007 -1.005 -1.007 -0.006 -0.009 0.993 0.984 -0.006 -0.510
#
# $`person score distribution in group 1`
#
#     1     2     3     4     5     6     7     8     9
# 0.112 0.130 0.131 0.129 0.122 0.114 0.101 0.091 0.070
#
# $`person score distribution in group 2`
#
#     1     2     3     4     5     6     7     8     9
# 0.091 0.108 0.117 0.122 0.122 0.121 0.115 0.110 0.093
#
# $`degrees of freedom`
# [1] 9
#
# $`noncentrality parameter`
#      W     LR     RS     GR
# 18.003 19.024 18.596 19.403
#
# $call
# powerPCM(alpha = 0.05, n_total = 200, persons1 = rnorm(10^6),
#          persons2 = rnorm(10^6), local_dev = local_dev)

## End(Not run)
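A hedged variant of the call above (not part of the shipped example), illustrating unequal relative group sizes set implicitly via the lengths of persons1 and persons2:

## Not run:
# 60:40 split of the two person groups
res_unequal <- powerPCM(alpha = 0.05, n_total = 200,
                        persons1 = rnorm(6 * 10^5),
                        persons2 = rnorm(4 * 10^5),
                        local_dev = local_dev)

## End(Not run)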
