Description Usage Arguments Author(s) References See Also Examples
Computes inter-rater or intra-rater agreement. Kappa, a common agreement statistic, assumes that agreement occurs at random and its index expresses the agreement beyond that expected by chance. Gwet's AC1 statistic, unlike Kappa, assumes that agreement between observers is not totally random: some cases are easy to agree on when the condition is absent, some are easy to agree on when the condition is present, and some are difficult to agree on. Gwet also discusses that AC1 is much less prone to the paradox that affects Kappa under extreme prevalence of the condition, where low Kappa values are observed even with high crude agreement. In simulation studies, Gwet estimated the bias of this statistic; therefore, for an appropriate interpretation of AC1, it is necessary to subtract from it the critical value given in Table 6.16 of Gwet's book and then benchmark the result against the Fleiss table. See http://agreestat.com/.
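To make the chance-agreement idea concrete, the sketch below computes the AC1 point estimate for a two-rater k x k table. The function name ac1_sketch and the example counts are assumptions for illustration only; the routine documented here additionally provides a standard error and confidence interval, which are omitted from this sketch.

# Minimal sketch of Gwet's AC1 point estimate for a two-rater k x k table
# (illustration only; standard error and confidence interval are omitted)
ac1_sketch <- function(tab) {
  p    <- tab / sum(tab)                    # cell proportions
  pa   <- sum(diag(p))                      # observed (crude) agreement
  pi_k <- (rowSums(p) + colSums(p)) / 2     # mean marginal proportion per category
  k    <- nrow(tab)
  pe   <- sum(pi_k * (1 - pi_k)) / (k - 1)  # chance agreement under AC1
  (pa - pe) / (1 - pe)
}

# Hypothetical 2 x 2 table: two raters classifying 125 subjects
tab <- matrix(c(80, 10, 5, 30), nrow = 2, byrow = TRUE)
ac1_sketch(tab)   # about 0.79, versus a crude agreement of 0.88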
tab: A k x k table representing the cross-classification of ratings by the two raters (or the two readings).
conflev: Confidence level associated with the confidence interval (0.95 is the default value).
N: Population size used in the standard error (finite population) correction; N = Inf means no correction.
print: Logical. Should results be printed on the console?
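A hypothetical call illustrating how the arguments above fit together; the function name AC1 and the table values are assumptions for illustration.

# Hypothetical call; the function name AC1 and the table values are assumed
tab <- matrix(c(80, 10, 5, 30), nrow = 2, byrow = TRUE)
AC1(tab, conflev = 0.95, N = Inf, print = TRUE)    # no finite-population correction
AC1(tab, conflev = 0.95, N = 500, print = FALSE)   # correction for a population of 500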
Marcel Quintana and Pedro Brasil. Special thanks to Mr. Gwet for reviewing the code.
Gwet, K. L. (2008). Computing inter-rater reliability and its variance in the presence of high agreement. British Journal of Mathematical and Statistical Psychology, 61, 29-48.
The irr package for many agreement statistics; epiR::epi.kappa for Kappa with confidence intervals; and epiDisplay::kap for multi-rater, multi-reader Kappa.