epiKappa: Computation of the Kappa Statistic for Agreement Between Two Raters


View source: R/epiKappa.R

Description

Computes the kappa statistic for agreement between two raters, performs hypothesis tests, and calculates confidence intervals.

Usage

epiKappa(C, alpha=0.05, k0=0.4, digits=3)

Arguments

C

An n x n classification matrix or matrix of proportions.

k0

The null hypothesis value of kappa, i.e. H0: kappa = k0.

alpha

The desired type I error rate for hypothesis tests and confidence intervals.

digits

Number of digits to which calculations are rounded.

Details

The kappa statistic is used to measure agreement between two raters. For simplicity, consider the case where each rater can classify an object as Type I or Type II. The diagonal elements of the resulting 2x2 classification matrix are the agreeing observations, that is, those where both raters classify an object as Type I or both as Type II; the discordant observations lie on the off-diagonal. Note that the alternative hypothesis is always one-sided (greater than), as interest lies in whether kappa exceeds a given threshold, such as 0.4 for fair agreement.
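
For illustration, the following is a minimal sketch of the standard Cohen's kappa calculation from a classification matrix of counts (the helper name kappa_hat is ours, not part of epibasix). It reproduces the point estimate printed in the example below, although the package's internal implementation may differ in detail.

kappa_hat <- function(C) {
  C  <- C / sum(C)                    # convert counts to proportions
  po <- sum(diag(C))                  # observed agreement (diagonal)
  pe <- sum(rowSums(C) * colSums(C))  # agreement expected by chance
  (po - pe) / (1 - pe)
}
X <- cbind(c(28, 5), c(4, 61))
kappa_hat(X)   # approximately 0.793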

Value

kappa

The computed value of the kappa statistic.

seh

The standard error computed under H0

seC

The standard error used to construct confidence intervals.

CIL

Lower confidence limit for kappa.

CIU

Upper confidence limit for kappa.

Z

Hypothesis test statistic for H0: kappa = k0 vs. HA: kappa > k0 (see the sketch following this list).

p.value

P-value for the hypothesis test.

Data

The original matrix of agreement.

k0

The null hypothesis value, kappa = k0.

alpha

The desired type I error rate for hypothesis tests and confidence intervals.

digits

Number of digits to which calculations are rounded.
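
The sketch below shows how these components typically fit together under the usual normal approximation. The numbers are the rounded values from the example output (seC is back-calculated from the printed confidence limits and is an assumed value), so it reproduces the printed results only approximately; it is an illustration, not the package's source code.

kappa_hat <- 0.793   # point estimate (kappa)
seh       <- 0.091   # standard error under H0 (seh)
seC       <- 0.066   # standard error for confidence limits (seC, assumed)
alpha <- 0.05; k0 <- 0.6
# Confidence limits: kappa_hat +/- z_(1 - alpha/2) * seC
CIL <- kappa_hat - qnorm(1 - alpha/2) * seC
CIU <- kappa_hat + qnorm(1 - alpha/2) * seC
# One-sided Z test of H0: kappa = k0 vs. HA: kappa > k0
Z <- (kappa_hat - k0) / seh
p.value <- 1 - pnorm(Z)
round(c(CIL = CIL, CIU = CIU, Z = Z, p.value = p.value), 3)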

Author(s)

Michael Rotondi, mrotondi@yorku.ca

References

Szklo M, Nieto FJ. Epidemiology: Beyond the Basics. Boston: Jones and Bartlett; 2007.

Fleiss J. Statistical Methods for Rates and Proportions, 2nd ed. New York: John Wiley and Sons; 1981.

See Also

sensSpec

Examples

X <- cbind(c(28,5), c(4,61));
summary(epiKappa(X, alpha=0.05, k0 = 0.6));

Example output

Kappa Analysis of Agreement 
 
                 Rater I: Type 1 Rater I: Type 2
Rater II: Type 1              28               4
Rater II: Type 2               5              61

Cohen's Kappa is:  0.793 
According to Fleiss (1981), the point estimate of kappa suggests excellent agreement.
 
95% Confidence Limits for the true Kappa Statistic are: [0.664, 0.921]
 
Z Test for H0: kappa = 0.6 vs. HA: kappa >= 0.6 is 2.108 with a p.value of 0.018
 
The associated standard error under H0 is: 0.091
