# N.cohen.kappa: Sample Size Calculation for Cohen's Kappa Statistic In irr: Various Coefficients of Interrater Reliability and Agreement

## Description

This function estimates the required sample size for a test of Cohen's kappa statistic with a binary outcome. Note that any value of "kappa under null" in the interval [0,1] is acceptable (i.e., k0=0 is a valid null hypothesis).

## Usage

```r
N.cohen.kappa(rate1, rate2, k1, k0, alpha=0.05, power=0.8, twosided=FALSE)
```

## Arguments

- `rate1`: the probability that the first rater will record a positive diagnosis
- `rate2`: the probability that the second rater will record a positive diagnosis
- `k1`: the true value of Cohen's kappa statistic
- `k0`: the value of kappa under the null hypothesis
- `alpha`: the type I error rate of the test
- `power`: the desired power to detect the difference between the true kappa and the hypothesized kappa
- `twosided`: TRUE if the test is two-sided

## Value

Returns the required sample size.

## Author(s)

Ian Fellows

## References

Cantor, A. B. (1996) Sample-size calculation for Cohen's kappa. Psychological Methods, 1, 150-153.

## See Also

`kappa2`

## Examples

```r
# Testing H0: kappa = 0.7 vs. HA: kappa > 0.7 given that
# kappa = 0.85 and both raters classify 50% of subjects as positive.
N.cohen.kappa(0.5, 0.5, 0.7, 0.85)
```

### Example output

```
Loading required package: lpSolve
[1] 96
```
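As a variation on the example above, a sketch of a two-sided test at higher power follows; it uses only the arguments shown in the Usage section. The particular choice of `power=0.9` here is illustrative, and the resulting sample size (not shown) will be larger than the 96 subjects required by the one-sided test above, since both the two-sided alternative and the higher power increase the requirement.

```r
library(irr)

# Two-sided test of H0: kappa = 0.7 vs. HA: kappa != 0.7,
# assuming the true kappa is 0.85 and both raters classify
# 50% of subjects as positive, with alpha = 0.05 and 90% power.
N.cohen.kappa(0.5, 0.5, 0.7, 0.85, alpha=0.05, power=0.9, twosided=TRUE)
```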

irr documentation built on May 30, 2017, 3:13 a.m.