StartKappa: A graphical user interface for calculating Cohen's and Fleiss' kappa


View source: R/StartKappa.R

Description

Launches the R-Shiny application. The user can retrieve inter-rater agreement scores from a file (.CSV or .TXT) loaded directly through the graphical interface.

Usage

StartKappa()

Details

Data are imported directly through the graphical user interface. Only CSV and TXT files are accepted.

If there are p variables observed by k raters on n individuals, the input file should be a data frame with n rows and (k x p) columns. The first k columns represent the scores attributed by the k raters for the first variable; the next k columns represent the scores attributed by the k raters for the second variable; etc. Cohen's or Fleiss' kappas are returned for each variable.

The data file must contain a header, and the columns must be labeled as follows: ‘VariableName_X’, where X is a unique character (letter or number) associated with each rater. An example of a correct data file with two raters is given here: http://www.pacea.u-bordeaux.fr/IMG/csv/data_Kappa_Cohen.csv.
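As a sketch of this layout, the following R snippet builds a toy data frame with p = 2 variables scored by k = 2 raters (A and B) on n = 5 individuals, and writes it to a CSV file that could be loaded through the interface. The variable names, levels, and file name are illustrative, not taken from the package:

```r
# Hypothetical example: two variables ("Stature" and "Sex") scored by
# two raters (A and B) on five individuals. Column names follow the
# required 'VariableName_X' convention, grouped by variable.
ratings <- data.frame(
  Stature_A = c("Tall", "Short", "Tall", "Tall", "Short"),
  Stature_B = c("Tall", "Short", "Short", "Tall", "Short"),
  Sex_A     = c("M", "F", "M", "M", "F"),
  Sex_B     = c("M", "F", "M", "F", "F")
)

# Write the data frame with its header, ready to be imported via StartKappa():
write.csv(ratings, "data_Kappa_example.csv", row.names = FALSE)
```

With this file, the application would return one kappa value per variable (here, one for Stature and one for Sex).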

Kappa values are calculated using the functions kappa2 and kappam.fleiss from the package ‘irr’. Please check their help pages for more technical details, in particular about the weighting options for Cohen's kappa. For ordered factors, linear or quadratic weighting can be a good choice, as they give more importance to strong disagreements. If linear or quadratic weighting is chosen, the levels of the factors are assumed to be ordered alphabetically. As a consequence, a factor with the three levels "Low", "Medium" and "High" would be ordered incorrectly; in such a case, please recode the levels with names whose alphabetical order matches their natural order.
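The underlying computation can be reproduced outside the GUI by calling the ‘irr’ functions directly. The sketch below, with made-up ratings, shows the level-recoding trick for ordinal data (prefixing levels with "1_", "2_", "3_" so that alphabetical order matches natural order) and a weighted Cohen's kappa; in kappa2, the weight option "equal" corresponds to linear weighting and "squared" to quadratic weighting:

```r
library(irr)  # provides kappa2() and kappam.fleiss()

# Two raters scoring one ordinal variable on five individuals.
# Levels are recoded so that alphabetical order matches natural order:
# "1_Low" < "2_Medium" < "3_High".
scores <- data.frame(
  rater1 = c("1_Low", "2_Medium", "3_High", "2_Medium", "1_Low"),
  rater2 = c("1_Low", "3_High",   "3_High", "2_Medium", "2_Medium")
)

# Cohen's kappa with quadratic weighting:
kappa2(scores, weight = "squared")

# With three or more raters, Fleiss' kappa (unweighted) is used instead:
# kappam.fleiss(scores)
```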

Value

The function returns no value, but the table of results can be downloaded as a CSV file through the user interface.

Author(s)

Frédéric Santos, [email protected]

References

Cohen, J. (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.

Cohen, J. (1968) Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.

See Also

irr::kappa2, irr::kappam.fleiss


KappaGUI documentation built on March 22, 2018, 5:05 p.m.