kappam.fleiss: Fleiss' Kappa for m raters


Description

Computes Fleiss' Kappa as an index of interrater agreement between m raters on categorical data. Additionally, category-wise Kappas can be computed.

Usage

kappam.fleiss(ratings, exact = FALSE, detail = FALSE)

Arguments

ratings

an n x m matrix or data frame with n subjects in rows and m raters in columns (see the sketch below).

exact

a logical indicating whether the exact Kappa (Conger, 1980) or the Kappa described by Fleiss (1971) should be computed.

detail

a logical indicating whether category-wise Kappas should be computed.
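As an illustration of the expected input layout, a minimal sketch with hypothetical toy data (categories may be coded as characters or factors):

library(irr)
# rows are subjects, columns are raters, cells hold categorical ratings
toy <- matrix(c("yes", "yes", "no",
                "no",  "yes", "no",
                "yes", "yes", "yes",
                "no",  "no",  "no"),
              nrow = 4, byrow = TRUE,
              dimnames = list(paste0("subject", 1:4),
                              paste0("rater", 1:3)))
kappam.fleiss(toy)  # Fleiss' Kappa for 4 subjects and 3 raters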

Details

Missing data are omitted in a listwise way, i.e. subjects with at least one missing rating are dropped.
The coefficient described by Fleiss (1971) does not reduce to Cohen's Kappa (unweighted) for m=2 raters. Conger (1980) therefore proposed the exact Kappa coefficient, which is slightly higher in most cases.
The null hypothesis Kappa=0 can only be tested using Fleiss' formulation of Kappa.
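
For illustration, here is a minimal sketch of Fleiss' (1971) formulation. It assumes a complete n x m matrix of categorical ratings with no missing values; fleiss_sketch is a hypothetical helper, not part of irr, and kappam.fleiss additionally handles listwise deletion and the z test.

fleiss_sketch <- function(ratings) {
  ratings <- as.matrix(ratings)
  n <- nrow(ratings)                        # number of subjects
  m <- ncol(ratings)                        # number of raters
  cats <- sort(unique(as.vector(ratings)))  # observed categories
  # counts[i, j]: number of raters who assigned subject i to category j
  counts <- t(apply(ratings, 1, function(r) table(factor(r, levels = cats))))
  p.j   <- colSums(counts) / (n * m)                # marginal proportions
  P.i   <- (rowSums(counts^2) - m) / (m * (m - 1))  # per-subject agreement
  P.bar <- mean(P.i)                                # mean observed agreement
  P.e   <- sum(p.j^2)                               # chance agreement
  (P.bar - P.e) / (1 - P.e)
}

Applied to the diagnoses data from the examples below, this should reproduce the point estimate reported by kappam.fleiss (Kappa = 0.43).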

Value

A list with class "irrlist" containing the following components:

$method

a character string describing the method applied for the computation of interrater reliability.

$subjects

the number of subjects examined.

$raters

the number of raters.

$irr.name

a character string specifying the name of the coefficient.

$value

the value of Kappa.

$stat.name

a character string specifying the name of the corresponding test statistic.

$statistic

the value of the test statistic.

$p.value

the p-value for the test.

$detail

a table with category-wise kappas and the corresponding test statistics.
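
The result is a plain list, so individual components can be extracted directly. A short sketch using the diagnoses data from the examples below:

library(irr)
data(diagnoses)
res <- kappam.fleiss(diagnoses, detail = TRUE)
res$value      # point estimate of Kappa
res$statistic  # z statistic
res$p.value    # p-value for the test of Kappa = 0
res$detail     # table of category-wise Kappas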

Author(s)

Matthias Gamer

References

Conger, A.J. (1980). Integration and generalization of Kappas for multiple raters. Psychological Bulletin, 88, 322-328.

Fleiss, J.L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76, 378-382.

Fleiss, J.L., Levin, B., & Paik, M.C. (2003). Statistical Methods for Rates and Proportions, 3rd Edition. New York: John Wiley & Sons.

See Also

kappa2, kappam.light

Examples

data(diagnoses)
kappam.fleiss(diagnoses)               # Fleiss' Kappa
kappam.fleiss(diagnoses, exact=TRUE)   # Exact Kappa
kappam.fleiss(diagnoses, detail=TRUE)  # Fleiss' and category-wise Kappa

kappam.fleiss(diagnoses[,1:4])         # Fleiss' Kappa of raters 1 to 4

Example output

Loading required package: lpSolve
 Fleiss' Kappa for m Raters

 Subjects = 30 
   Raters = 6 
    Kappa = 0.43 

        z = 17.7 
  p-value = 0 

 Fleiss' Kappa for m Raters (exact value)

 Subjects = 30 
   Raters = 6 
    Kappa = 0.442 

 Fleiss' Kappa for m Raters

 Subjects = 30 
   Raters = 6 
    Kappa = 0.43 

        z = 17.7 
  p-value = 0 

                         Kappa      z p.value
1. Depression            0.245  5.192   0.000
2. Personality Disorder  0.245  5.192   0.000
3. Schizophrenia         0.520 11.031   0.000
4. Neurosis              0.471  9.994   0.000
5. Other                 0.566 12.009   0.000

 Fleiss' Kappa for m Raters

 Subjects = 30 
   Raters = 4 
    Kappa = 0.489 

        z = 13 
  p-value = 0 
