```r
knitr::opts_chunk$set(collapse = TRUE, comment = "#>")
```
The package also includes tools to assess agreement between raters and to plan reliability studies.
```r
df_agr <- read.csv(system.file("extdata", "agreement_example.csv", package = "meddecide"))
head(df_agr)
```
Use the `agreement()` function to compute Cohen's or Fleiss' kappa statistics, depending on the number of raters.
```r
agr_res <- agreement(data = df_agr)
agr_res$kappa
```
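With more than two raters, the same call can in principle yield a Fleiss-type estimate. The sketch below is illustrative only: the data frame, column names, and rating categories are invented, and it assumes `agreement()` expects a data frame with one column per rater, as in `df_agr` above.

```r
# Hypothetical three-rater example (columns and ratings invented for illustration);
# assumes agreement() takes a data frame with one column per rater.
df_multi <- data.frame(
  rater1 = c("pos", "neg", "pos", "neg", "pos", "neg"),
  rater2 = c("pos", "neg", "neg", "neg", "pos", "neg"),
  rater3 = c("pos", "pos", "pos", "neg", "pos", "neg")
)
agr_multi <- agreement(data = df_multi)
agr_multi$kappa
```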
The functions `kappaSizeCI()`, `kappaSizeFixedN()`, and `kappaSizePower()` help determine the sample size needed for agreement research.
```r
# precision-based approach
kappaSizeCI(kappa0 = 0.7, conf.level = 0.95, w = 0.1)

# fixed sample size
kappaSizeFixedN(kappa0 = 0.7, n = 60)

# power-based approach
kappaSizePower(kappa0 = 0.7, kappa1 = 0.8, power = 0.8)
```
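When planning a study it can also help to see how the required sample size changes with the desired confidence-interval half-width. The loop below is a sketch that reuses only the `kappaSizeCI()` arguments shown above (`kappa0`, `conf.level`, `w`); the structure of the returned object is not documented here, so each result is simply printed as-is.

```r
# Sketch: compare precision-based sample sizes across CI half-widths.
# Reuses the kappaSizeCI() arguments shown above; the returned object is
# printed unchanged because its exact structure depends on the package.
for (w in c(0.05, 0.10, 0.15)) {
  cat("Half-width =", w, "\n")
  print(kappaSizeCI(kappa0 = 0.7, conf.level = 0.95, w = w))
}
```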
These calculations support planning of reliability experiments in clinical research.