Computes Light's Kappa as an index of interrater agreement between m raters on categorical data.

```
kappam.light(ratings)
```

`ratings`: n*m matrix or data frame, with n subjects and m raters.

Missing data are omitted listwise.

Light's Kappa is the average of Cohen's Kappa computed over all possible pairs of raters.
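That averaging can be sketched in plain R. This is an illustrative re-implementation under stated assumptions (unweighted Cohen's Kappa per pair), not the irr package's internal code; `cohen_kappa` and `kappam_light_sketch` are hypothetical names:

```
# Unweighted Cohen's Kappa for two raters (illustrative, not irr internals)
cohen_kappa <- function(r1, r2) {
  lev <- sort(unique(c(r1, r2)))          # shared category levels
  tab <- table(factor(r1, lev), factor(r2, lev)) / length(r1)
  po <- sum(diag(tab))                    # observed agreement
  pe <- sum(rowSums(tab) * colSums(tab))  # chance agreement from marginals
  (po - pe) / (1 - pe)
}

# Light's Kappa: mean of Cohen's Kappa over all rater pairs
kappam_light_sketch <- function(ratings) {
  ratings <- as.matrix(ratings)
  pairs <- combn(ncol(ratings), 2)        # all pairs of rater columns
  kappas <- apply(pairs, 2, function(p)
    cohen_kappa(ratings[, p[1]], ratings[, p[2]]))
  mean(kappas)
}

# Toy data: 4 subjects, 3 raters. Raters 1 and 2 agree perfectly (kappa = 1);
# raters 1-3 and 2-3 agree at chance level (kappa = 0).
ratings <- matrix(c("a", "a", "a",
                    "a", "a", "b",
                    "b", "b", "b",
                    "b", "b", "a"), ncol = 3, byrow = TRUE)
kappam_light_sketch(ratings)  # (1 + 0 + 0) / 3 = 1/3
```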

A list with class '"irrlist"' containing the following components:

`$method`: a character string describing the method applied for the computation of interrater reliability.

`$subjects`: the number of subjects examined.

`$raters`: the number of raters.

`$irr.name`: a character string specifying the name of the coefficient.

`$value`: the value of Kappa.

`$stat.name`: a character string specifying the name of the corresponding test statistic.

`$statistic`: the value of the test statistic.

`$p.value`: the p-value for the test.

Matthias Gamer

Conger, A.J. (1980). Integration and generalization of Kappas for multiple raters. Psychological Bulletin, 88, 322-328.

Light, R.J. (1971). Measures of response agreement for qualitative data: Some generalizations and alternatives. Psychological Bulletin, 76, 365-377.

```
data(diagnoses)
kappam.light(diagnoses) # Light's Kappa
```
