View source: R/gwet_agree.coeff3.dist.r
Description

Computes Krippendorff's alpha coefficient and its standard error for multiple raters when the data are an n x q matrix representing the distribution of raters by subject and by category.
Usage

krippen.alpha.dist(ratings, weights = "unweighted", conflev = 0.95,
                   N = Inf, print = TRUE)
Arguments

ratings: an n x q matrix of ratings, where each row gives the distribution of raters by category for one subject.

weights: optional weighting scheme to be used in the analysis (default: "unweighted").

conflev: confidence level associated with the confidence interval (default: 0.95).

N: population size, used as the denominator in the finite-population correction (default: Inf, i.e. no correction).

print: logical; if TRUE, the results are printed.
Details

A typical entry, associated with a subject and a category, represents the number of raters who classified that subject into that category. All subjects that were not rated by any rater are excluded.
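As a hedged illustration of the distribution format described above (the data are invented, and the function is assumed to have been loaded, e.g. by sourcing R/gwet_agree.coeff3.dist.r), a call might look like:

```r
# Hypothetical example: 5 subjects, 3 categories, 4 raters per subject.
# Each row gives, for one subject, how many raters chose each category,
# so every row sums to the number of raters who rated that subject.
ratings <- matrix(c(4, 0, 0,
                    2, 2, 0,
                    0, 3, 1,
                    1, 1, 2,
                    0, 0, 4),
                  nrow = 5, ncol = 3, byrow = TRUE)

# source("R/gwet_agree.coeff3.dist.r")       # load the function first
krippen.alpha.dist(ratings)                  # unweighted alpha, 95% CI
krippen.alpha.dist(ratings, conflev = 0.90)  # same, with a 90% CI
```

Note that raters need not be the same across subjects in this format; only the per-subject distribution over categories is recorded.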
The algorithm used to compute Krippendorff's alpha differs substantially from anything previously published on this topic; instead, it follows the equations presented by K. Gwet (2010).
Author(s)

Kilem L. Gwet
References

Gwet, K. L. (2012). Handbook of Inter-Rater Reliability: The Definitive Guide to Measuring the Extent of Agreement among Multiple Raters, 3rd Edition. Advanced Analytics, LLC.

Krippendorff, K. (1970). Bivariate agreement coefficients for reliability of data. Sociological Methodology, 2, 139-150.

Krippendorff, K. (1980). Content Analysis: An Introduction to Its Methodology (2nd ed.). Newbury Park, CA: Sage.