ci.kappa: Confidence interval for two kappa reliability coefficients

View source: R/statpsych3.R


Confidence interval for two kappa reliability coefficients

Description

Computes confidence intervals for the intraclass kappa coefficient and Cohen's kappa coefficient with two dichotomous ratings. The G-index of agreement (see ci.agree) is arguably a better measure of agreement.

For more details, see Section 3.5 of Bonett (2021, Volume 3).
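
As an illustrative sketch only (not the package source), the two point estimates that these intervals are built around can be computed directly from the four cell counts. The helper name kappa.estimates is made up for this example; the formulas follow the standard two-rater, dichotomous-rating formulation in Fleiss et al. (2003) and reproduce the estimates in the Examples output below.

kappa.estimates <- function(f00, f01, f10, f11) {
  n <- f00 + f01 + f10 + f11
  po <- (f00 + f11)/n                  # observed proportion of agreement
  p1 <- (f10 + f11)/n                  # proportion of 1 ratings by Rater 1
  p2 <- (f01 + f11)/n                  # proportion of 1 ratings by Rater 2
  pbar <- (p1 + p2)/2                  # pooled proportion of 1 ratings
  pe.c <- p1*p2 + (1 - p1)*(1 - p2)    # chance agreement for Cohen's kappa
  pe.i <- pbar^2 + (1 - pbar)^2        # chance agreement for intraclass kappa
  c(intraclass = (po - pe.i)/(1 - pe.i),
    cohen = (po - pe.c)/(1 - pe.c))
}
kappa.estimates(31, 12, 4, 58)
# Estimates match the Estimate column in the Examples output below
# (intraclass ~ 0.674, Cohen ~ 0.676)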

Usage

ci.kappa(alpha, f00, f01, f10, f11)

Arguments

alpha

alpha level for 1-alpha confidence

f00

number of objects rated 0 by both Rater 1 and Rater 2

f01

number of objects rated 0 by Rater 1 and 1 by Rater 2

f10

number of objects rated 1 by Rater 1 and 0 by Rater 2

f11

number of objects rated 1 by both Rater 1 and Rater 2
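
If the ratings are available as raw 0/1 vectors rather than as cell counts, the four frequencies can be obtained from a cross-tabulation. The vectors rater1 and rater2 below are made-up illustration data, not part of the package:

rater1 <- c(0, 0, 1, 1, 0, 1, 1, 0)
rater2 <- c(0, 1, 1, 1, 0, 0, 1, 0)
tab <- table(rater1, rater2)           # 2 x 2 table of joint ratings
f00 <- tab["0", "0"]; f01 <- tab["0", "1"]
f10 <- tab["1", "0"]; f11 <- tab["1", "1"]
ci.kappa(.05, f00, f01, f10, f11)      # interval will be wide with so few objects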

Value

Returns a 2-row matrix. The results in row 1 are for the intraclass kappa. The results in row 2 are for Cohen's kappa. The columns are:

  • Estimate - estimate of interrater reliability

  • SE - standard error

  • LL - lower limit of the confidence interval

  • UL - upper limit of the confidence interval
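
For example, assuming the returned matrix carries the column names listed above (as in the Examples output), individual results can be extracted by name; position-based indexing such as res[2, 3:4] works as well:

res <- ci.kappa(.05, 31, 12, 4, 58)
res[2, "Estimate"]      # Cohen's kappa point estimate
res[2, c("LL", "UL")]   # confidence limits for Cohen's kappa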

References

Fleiss, J. L., Levin, B., & Paik, M. C. (2003). Statistical Methods for Rates and Proportions (3rd ed.). Wiley.

Bonett, D. G. (2021). Statistical Methods for Psychologists, Volumes 1-4.

Examples

ci.kappa(.05, 31, 12, 4, 58)

# Should return:
#              Estimate     SE    LL    UL
# IC kappa:       0.674 0.0748 0.527 0.821
# Cohen kappa:    0.676 0.0734 0.532 0.820
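
A different confidence level is obtained by changing alpha; for example, a 99% interval for the same frequency counts:

ci.kappa(.01, 31, 12, 4, 58)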


