Computes the kappa statistic for agreement between two raters, performs hypothesis tests, and calculates confidence intervals.
C: An n x n classification matrix, or a matrix of proportions.

k0: The null hypothesis value of kappa, i.e. H0: kappa = k0.

alpha: The desired type I error rate for hypothesis tests and confidence intervals.

digits: The number of digits to which calculations are rounded.
The kappa statistic measures agreement between two raters. For simplicity, consider the case where each rater classifies an object as either Type I or Type II. The diagonal elements of the resulting 2x2 matrix are then the agreeing classifications, that is, the objects that both raters classify as Type I or both classify as Type II; the discordant observations lie on the off-diagonal. Note that the alternative hypothesis is always "greater than", since interest lies in whether kappa exceeds a given threshold, such as 0.4 for fair agreement.
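As a rough illustration of the calculation (a sketch of the standard formula, not the package's internal code), Cohen's kappa for a 2x2 table can be computed directly from the observed agreement and the agreement expected by chance; the table below is the one used in the Examples section:

```r
# 2x2 agreement table: rows = Rater II, columns = Rater I
C <- matrix(c(28, 4,
              5, 61), nrow = 2, byrow = TRUE)

n  <- sum(C)
po <- sum(diag(C)) / n                    # observed proportion of agreement
pe <- sum(rowSums(C) * colSums(C)) / n^2  # agreement expected by chance
kappa <- (po - pe) / (1 - pe)

round(kappa, 3)  # 0.793
```

With both diagonal cells large relative to the off-diagonal cells, observed agreement (about 0.908) far exceeds chance agreement (about 0.557), giving kappa near 0.79.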
kappa: The estimated kappa statistic.

seh: The standard error of kappa computed under H0.

seC: The standard error of kappa as used for confidence intervals.

CIL: The lower confidence limit for kappa.

CIU: The upper confidence limit for kappa.

Z: The test statistic for H0: kappa = k0 vs. HA: kappa > k0.

p.value: The p-value for the hypothesis test.

Data: The original matrix of agreement.

k0: The null hypothesis value, kappa = k0.

alpha: The desired type I error rate for hypothesis tests and confidence intervals.

digits: The number of digits to which calculations are rounded.
Michael Rotondi, mrotondi@yorku.ca
Szklo M and Nieto FJ. Epidemiology: Beyond the Basics. Boston: Jones and Bartlett; 2007.
Fleiss J. Statistical Methods for Rates and Proportions, 2nd ed. New York: John Wiley and Sons; 1981.
Kappa Analysis of Agreement
Rater I: Type 1 Rater I: Type 2
Rater II: Type 1 28 4
Rater II: Type 2 5 61
Cohen's Kappa is: 0.793
According to Fleiss (1981), the point estimate of kappa suggests excellent agreement.
95% Confidence Limits for the true Kappa Statistic are: [0.664, 0.921]
Z Test for H0: kappa = 0.6 vs. HA: kappa > 0.6 is 2.108 with a p.value of 0.018
The associated standard error under H0 is: 0.091
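The printed point estimate can be reproduced by hand, and a confidence interval close to the one shown above follows from one common large-sample standard error for kappa, sqrt(po(1-po)/n)/(1-pe). This is an approximation for illustration only; the function's internal variance formulas (and hence its seh, seC, and interval endpoints) may differ slightly:

```r
C  <- matrix(c(28, 4,
               5, 61), nrow = 2, byrow = TRUE)
n  <- sum(C)
po <- sum(diag(C)) / n                    # observed agreement
pe <- sum(rowSums(C) * colSums(C)) / n^2  # chance-expected agreement
kappa <- (po - pe) / (1 - pe)

# One common large-sample SE for kappa (an approximation; the package's
# internal formula may differ in the last decimal place)
se <- sqrt(po * (1 - po) / n) / (1 - pe)
ci <- kappa + c(-1, 1) * qnorm(1 - 0.05 / 2) * se

round(c(kappa = kappa, lower = ci[1], upper = ci[2]), 3)
# approximately: kappa 0.793, lower 0.664, upper 0.922
```

The lower limit agrees with the printed output to three decimals; the upper limit differs by about 0.001, which is the kind of small discrepancy to expect when the exact variance formula differs.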