
A slightly adjusted version of the `kappa2` function to calculate Cohen's Kappa (Cohen, 1960).


**Arguments**

`ratings`: A data frame or matrix of dimension N x 2, where N is the number of observations. The two columns contain the ratings of the two raters.

`weight`: The weight matrix used to calculate the reliability. Default is

`sort.levels`: Whether to sort the levels if they are numeric.

**Value**

`method`: The method used to calculate the reliability and the weights used.

`subjects`: The number of subjects in the data frame.

`nraters`: The number of raters.

`irr.name`: The type of reliability measure.

`value`: The value of Cohen's Kappa.

`StdErr`: The standard error of the estimated Kappa value.

`stat.name`: The name of the corresponding test statistic.

`statistic`: The value of the test statistic.

`p.value`: The p-value of the test.

`Po`: The overall proportion of agreement.

`Pe`: The proportion of agreement expected by chance.

`ratings`: The original data frame with the ratings.
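The relation between the returned `value`, `Po`, and `Pe` can be illustrated by hand for the unweighted case: kappa = (Po - Pe) / (1 - Pe). A minimal sketch with made-up two-rater data (hypothetical ratings, not the package's example data, and independent of `CohenK` itself):

```r
# Hand computation of unweighted Cohen's Kappa from two rating vectors.
# r1 and r2 are hypothetical ratings for illustration only.
r1 <- c("yes", "yes", "no", "no", "yes", "no", "yes", "yes")
r2 <- c("yes", "no",  "no", "no", "yes", "yes", "yes", "yes")

tab <- table(r1, r2)   # 2 x 2 contingency table of the two raters
n   <- sum(tab)

Po <- sum(diag(tab)) / n                      # overall proportion of agreement
Pe <- sum(rowSums(tab) * colSums(tab)) / n^2  # agreement expected by chance
kappa <- (Po - Pe) / (1 - Pe)                 # corresponds to CohenK()$value
```

The same `Po` and `Pe` quantities are what `CohenK` returns alongside the kappa estimate.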

**References**

Cohen, J. (1960). A Coefficient of Agreement for Nominal Scales. *Educational and Psychological Measurement*, 20(1), 37-46.

Cohen, J. (1968). Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. *Psychological Bulletin*, 70(4), 213-220.

**Examples**

```r
# Load data
data(PsychMorbid)
Df = PsychMorbid[, 1:2]

# Unweighted kappa
CohenK(Df)

# Weighted kappa
data(Agreement_deVet)
CohenK(Agreement_deVet[, 2:3], weight = "squared")
```
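For the weighted case (Cohen, 1968), disagreements are penalized by a weight matrix; with squared weights w[i, j] = (i - j)^2, weighted kappa is 1 - sum(w * observed) / sum(w * expected). A sketch of that computation with hypothetical three-level ratings (illustrative data, not the package's example sets):

```r
# Hand computation of squared-weight kappa for two raters on an ordinal scale.
# r1 and r2 are hypothetical ratings for illustration only.
r1 <- c(1, 2, 3, 2, 1, 3, 3, 2)
r2 <- c(1, 3, 3, 2, 2, 3, 2, 2)

lev <- sort(unique(c(r1, r2)))
obs <- table(factor(r1, lev), factor(r2, lev)) / length(r1)  # observed proportions
exp <- outer(rowSums(obs), colSums(obs))                     # chance proportions

# Squared disagreement weights: zero on the diagonal, (i - j)^2 off it
w <- outer(seq_along(lev), seq_along(lev), function(i, j) (i - j)^2)

kappa_w <- 1 - sum(w * obs) / sum(w * exp)
```

With `weight = "unweighted"` the off-diagonal weights are all equal, and the formula reduces to the unweighted kappa above.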
