Description
Computes Light's Kappa as an index of interrater agreement between m raters on categorical data.
Usage

kappam.light(ratings)
Arguments

ratings    n*m matrix or dataframe; n subjects, m raters.
Details

Missing data are omitted in a listwise way.

Light's Kappa equals the average of Cohen's Kappa computed over all possible pairs of raters.
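The pairwise averaging can be made explicit. The following is a minimal sketch, assuming the diagnoses data set that ships with the irr package, using kappa2() from the same package to compute the bivariate Kappas:

# Sketch of the averaging described above: compute Cohen's Kappa
# (kappa2) for every pair of raters, then take the mean.
library(irr)
data(diagnoses)

pairs  <- combn(ncol(diagnoses), 2)                 # all rater pairs
kappas <- apply(pairs, 2, function(p) kappa2(diagnoses[, p])$value)
mean(kappas)   # should reproduce the Kappa reported by kappam.light(diagnoses)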
Value

A list with class "irrlist" containing the following components:
$method       a character string describing the method applied for the computation of interrater reliability.
$subjects     the number of subjects examined.
$raters       the number of raters.
$irr.name     a character string specifying the name of the coefficient.
$value        value of Kappa.
$stat.name    a character string specifying the name of the corresponding test statistic.
$statistic    the value of the test statistic.
$p.value      the p-value for the test.
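Because the result is an ordinary list, these components can be read directly; a short sketch, again assuming the diagnoses data set from the package:

res <- kappam.light(diagnoses)
res$value       # Light's Kappa
res$statistic   # value of the z statistic
res$p.value     # p-value for the test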
Author(s)

Matthias Gamer
References

Conger, A.J. (1980). Integration and generalization of Kappas for multiple raters. Psychological Bulletin, 88, 322-328.

Light, R.J. (1971). Measures of response agreement for qualitative data: Some generalizations and alternatives. Psychological Bulletin, 76, 365-377.
Examples

data(diagnoses)
kappam.light(diagnoses)  # Light's Kappa
Example output:

 Light's Kappa for m Raters

 Subjects = 30
   Raters = 6
    Kappa = 0.459

        z = 2.31
  p-value = 0.0211