# Light's Kappa for m raters

### Description

Computes Light's Kappa as an index of interrater agreement between m raters on categorical data.

### Usage

```
kappam.light(ratings)
```

### Arguments

`ratings`: n*m matrix or data frame; n subjects, m raters.

### Details

Missing data are omitted listwise.

Light's Kappa is the mean of the bivariate (Cohen's) Kappa values computed for all possible pairs of raters.
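The averaging idea above can be sketched in base R. This is an illustrative implementation, not the irr package's code; the function names `cohen_kappa` and `light_kappa` are hypothetical, and the sketch assumes unweighted Cohen's Kappa for each rater pair.

```r
# Sketch: Light's Kappa as the mean of Cohen's Kappa over all rater pairs.
# Illustrative only -- not the irr package implementation.

cohen_kappa <- function(r1, r2) {
  lev <- sort(unique(c(r1, r2)))             # common set of categories
  tab <- table(factor(r1, lev), factor(r2, lev))
  p   <- tab / sum(tab)                      # joint proportions
  po  <- sum(diag(p))                        # observed agreement
  pe  <- sum(rowSums(p) * colSums(p))        # chance-expected agreement
  (po - pe) / (1 - pe)
}

light_kappa <- function(ratings) {
  ratings <- na.omit(as.matrix(ratings))     # listwise deletion, as in Details
  pairs   <- combn(ncol(ratings), 2)         # all rater pairs
  mean(apply(pairs, 2, function(ij)
    cohen_kappa(ratings[, ij[1]], ratings[, ij[2]])))
}

# Three raters in perfect agreement give Kappa = 1
m <- cbind(c(1, 2, 3, 1), c(1, 2, 3, 1), c(1, 2, 3, 1))
light_kappa(m)  # 1
```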

### Value

A list with class '"irrlist"' containing the following components:

`$method`: a character string describing the method applied for the computation of interrater reliability.

`$subjects`: the number of subjects examined.

`$raters`: the number of raters.

`$irr.name`: a character string specifying the name of the coefficient.

`$value`: the value of Kappa.

`$stat.name`: a character string specifying the name of the corresponding test statistic.

`$statistic`: the value of the test statistic.

`$p.value`: the p-value for the test.

### Author(s)

Matthias Gamer

### References

Conger, A.J. (1980). Integration and generalisation of Kappas for multiple raters. Psychological Bulletin, 88, 322-328.

Light, R.J. (1971). Measures of response agreement for qualitative data: Some generalizations and alternatives. Psychological Bulletin, 76, 365-377.

### See Also

`kappa2`, `kappam.fleiss`

### Examples

```
data(diagnoses)
kappam.light(diagnoses) # Light's Kappa
```