kappam_gold: Agreement of a group of nominal-scale raters with a gold standard

View source: R/kappa.R

kappam_gold    R Documentation

Agreement of a group of nominal-scale raters with a gold standard

Description

First, Cohen's kappa is calculated between each rater and the gold standard, which by default is taken from the 1st column. The average of these kappa values is returned as 'kappam_gold0'. The variant setting (robust=) is forwarded to Cohen's kappa. A bias-corrected version 'kappam_gold' and a corresponding confidence interval, obtained via the jackknife method, are provided as well.
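
For illustration, a minimal sketch of the uncorrected estimate 'kappam_gold0': average the pairwise Cohen's kappa of every rater against the gold-standard column. The helpers cohen_kappa() and kappam_gold0_by_hand() are illustrative only and not part of the package API.

# Hand-rolled Cohen's kappa between two nominal rating vectors
cohen_kappa <- function(x, y) {
  lev <- union(x, y)
  tab <- table(factor(x, levels = lev), factor(y, levels = lev))
  p <- tab / sum(tab)
  po <- sum(diag(p))                 # observed agreement
  pe <- sum(rowSums(p) * colSums(p)) # chance agreement (Cohen)
  (po - pe) / (1 - pe)
}

# Average kappa of each non-reference rater against the gold standard
kappam_gold0_by_hand <- function(ratings, refIdx = 1) {
  gold <- ratings[, refIdx]
  testIdx <- setdiff(seq_len(ncol(ratings)), refIdx)
  mean(sapply(testIdx, function(j) cohen_kappa(gold, ratings[, j])))
}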

Usage

kappam_gold(
  ratings,
  refIdx = 1,
  robust = FALSE,
  ratingScale = NULL,
  conf.level = 0.95
)

Arguments

ratings

matrix. Ratings with subjects in rows and raters in columns.

refIdx

numeric. Index of the reference gold-standard rater. Currently, only a single gold-standard rater is supported; by default, it is the 1st rater.

robust

flag. Use the robust Brennan-Prediger estimate for the chance agreement?

ratingScale

Possible levels of the rating scale, or NULL.

conf.level

numeric. Confidence level for the confidence interval.

Value

list. Agreement measures: the raw and bias-corrected kappa estimates together with the confidence interval. The entry raters gives the number of tested raters, not counting the reference rater.

Examples

# matrix with subjects in rows and raters in columns.
# 1st column is taken as the gold standard
m <- matrix(c("O", "G", "O",
              "G", "G", "R",
              "R", "R", "R",
              "G", "G", "O"), ncol = 3, byrow = TRUE)
kappam_gold(m)
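
# Sketch of further argument use on the same data: Brennan-Prediger style
# chance correction, an explicit rating scale, and a 90% confidence interval.
# The returned object is assumed to have the list shape described under 'Value'.
kappam_gold(m, robust = TRUE, ratingScale = c("O", "G", "R"), conf.level = 0.9)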
