kappam.light: Light's Kappa for m raters

Description

Computes Light's Kappa as an index of interrater agreement between m raters on categorical data.

Usage

kappam.light(ratings)

Arguments

ratings

n*m matrix or data frame of ratings; n subjects, m raters.
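
For illustration, a suitable ratings object could be built like this (a hypothetical 4*3 matrix: 4 subjects rated by 3 raters on a categorical scale):

ratings <- matrix(c("a", "a", "b",
                    "b", "b", "b",
                    "a", "b", "a",
                    "c", "c", "c"),
                  nrow = 4, byrow = TRUE)  # rows: subjects, columns: raters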

Details

Missing data are omitted listwise.
Light's Kappa is the average of Cohen's Kappa computed over all possible pairs of raters.
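
Because the coefficient is defined as this average, it can be reproduced by hand with kappa2 over all rater pairs. A minimal sketch using the diagnoses data set shipped with irr (this mirrors the definition, not necessarily the internal implementation):

library(irr)
data(diagnoses)
rater.pairs <- combn(ncol(diagnoses), 2)   # all pairs of rater columns
pair.kappas <- apply(rater.pairs, 2, function(p)
  kappa2(diagnoses[, p])$value)            # Cohen's Kappa for each pair
mean(pair.kappas)                          # equals kappam.light(diagnoses)$value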

Value

A list with class "irrlist" containing the following components:

$method

a character string describing the method applied for the computation of interrater reliability.

$subjects

the number of subjects examined.

$raters

the number of raters.

$irr.name

a character string specifying the name of the coefficient.

$value

the value of Kappa.

$stat.name

a character string specifying the name of the corresponding test statistic.

$statistic

the value of the test statistic.

$p.value

the p-value for the test.
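
The components of the returned list can be accessed by name, for example:

fit <- kappam.light(diagnoses)   # assumes the diagnoses data set is loaded
fit$value                        # the Kappa estimate
fit$statistic                    # the z statistic
fit$p.value                      # the p-value of the test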

Author(s)

Matthias Gamer

References

Conger, A.J. (1980). Integration and generalisation of Kappas for multiple raters. Psychological Bulletin, 88, 322-328.

Light, R.J. (1971). Measures of response agreement for qualitative data: Some generalizations and alternatives. Psychological Bulletin, 76, 365-377.

See Also

kappa2, kappam.fleiss

Examples

data(diagnoses)
kappam.light(diagnoses)

Example output

Loading required package: lpSolve
 Light's Kappa for m Raters

 Subjects = 30 
   Raters = 6 
    Kappa = 0.459 

        z = 2.31 
  p-value = 0.0211 
