cat_specific: Calculate Category-Specific Agreement

View source: R/categorical_specific_agreement.R

Calculate Category-Specific Agreement

Description

Specific agreement is an index of the reliability of categorical measurements. It describes the amount of agreement observed with regard to each possible category. With two raters, the interpretation of specific agreement for any category is the probability of one rater assigning an item to that category given that the other rater has also assigned that item to that category. With more than two raters, the interpretation becomes the probability of a randomly chosen rater assigning an item to that category given that another randomly chosen rater has also assigned that item to that category. When applied to binary (i.e., dichotomous) data, specific agreement on the positive category is often referred to as positive agreement (PA) and specific agreement on the negative category is often referred to as negative agreement (NA). In this case, PA is equal to the F1 score frequently used in computer science.
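
With two raters, specific agreement on category k can be computed from the raters' cross-classification table as 2 * n_kk / (n_k+ + n_+k), i.e., twice the k-th diagonal cell divided by the sum of the k-th row and column margins. The following base R sketch (illustrative only; it does not use this package) computes specific agreement by hand for a binary example and confirms that positive agreement matches the F1 score:

# Two raters' binary codes for ten objects (made-up data)
r1 <- c(1, 1, 1, 0, 0, 1, 0, 1, 0, 0)
r2 <- c(1, 1, 0, 0, 1, 1, 0, 1, 0, 0)
tab <- table(r1, r2)  # 2 x 2 cross-classification table

# Specific agreement per category: 2 * n_kk / (row margin + column margin)
sa <- 2 * diag(tab) / (rowSums(tab) + colSums(tab))
sa  # 0.8 for category 0 (negative agreement), 0.8 for category 1 (positive agreement)

# Positive agreement equals F1 = 2TP / (2TP + FP + FN), treating rater 1
# as the reference for illustration
tp <- sum(r1 == 1 & r2 == 1)
fn <- sum(r1 == 1 & r2 == 0)
fp <- sum(r1 == 0 & r2 == 1)
2 * tp / (2 * tp + fp + fn)  # 0.8, identical to specific agreement on category 1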

Usage

cat_specific(
  .data,
  object = Object,
  rater = Rater,
  score = Score,
  categories = NULL,
  bootstrap = 2000,
  warnings = TRUE
)
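
As an illustrative sketch, the ratings data frame below is invented, and res$observed assumes the component structure listed under Value; only the cat_specific() signature itself comes from this page. A reduced bootstrap value is used purely for speed:

library(agreement)

# Hypothetical tall-format data: raters A and B each assign four objects
ratings <- data.frame(
  Object = rep(1:4, times = 2),
  Rater  = rep(c("A", "B"), each = 4),
  Score  = c("yes", "no", "yes", "no",   # rater A's assignments
             "yes", "no", "no",  "no")   # rater B's assignments
)

# The column names match the defaults (Object, Rater, Score), so only the
# data frame itself needs to be supplied
res <- cat_specific(ratings, bootstrap = 100)
res$observed  # observed specific agreement for each category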

Arguments

.data

Required. A matrix or data frame in tall format containing categorical data, where each row corresponds to a single score (i.e., the assignment of an object to a category). Each score should be a number or character string indicating the discrete category that the corresponding rater assigned the corresponding object to, or NA if that assignment is missing (e.g., the object was not assigned to a category by that rater).

object

Optional. The name of the variable in .data identifying the object of measurement for each observation, in non-standard evaluation without quotation marks. (default = Object)

rater

Optional. The name of the variable in .data identifying the rater or source of measurement for each observation, in non-standard evaluation without quotation marks. (default = Rater)

score

Optional. The name of the variable in .data containing the categorical score or rating/code for each observation, in non-standard evaluation without quotation marks. (default = Score)

categories

Optional. A vector (numeric, character, or factor) containing all possible categories that objects could have been assigned to. When this argument is omitted or set to NULL, the possible categories are assumed to be those observed in .data. However, if not all possible categories are observed in .data, this assumption may be misleading, so the possible categories, and their ordering, can be explicitly specified, as in the sketch following these arguments. (default = NULL)

bootstrap

Optional. A single non-negative integer that specifies how many bootstrap resamplings should be computed (used primarily for estimating confidence intervals and visualizing uncertainty). To skip bootstrapping, set this argument to 0. (default = 2000)

warnings

Optional. A single logical value that specifies whether warnings should be displayed. (default = TRUE)
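
For example, continuing the invented ratings data from the sketch under Usage, declaring a possible but unobserved category and skipping the bootstrap might look like this:

# "maybe" was never used by either rater but is a possible category
res2 <- cat_specific(
  ratings,
  categories = c("no", "maybe", "yes"),  # all possible categories, in order
  bootstrap = 0                          # skip bootstrapping (no confidence intervals)
)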

Value

An object of class 'spa' containing the results and details, with the following components:

observed

A named numeric vector containing the observed specific agreement for each category.

boot_results

A list containing the results of the bootstrapping procedure.

details

A list containing the details of the analysis, such as the formatted codes, relevant counts, weighting scheme, and weight matrix.

call

The function call that created these results.

References

Dice, L. R. (1945). Measures of the amount of ecologic association between species. Ecology, 26(3), 297-302. https://doi.org/10/dsb8pd

Fleiss, J. L. (1975). Measuring agreement between two judges on the presence or absence of a trait. Biometrics, 31(3), 651-659. https://doi.org/10/fxdb8x

Uebersax, J. S. (1982). A design-independent method for measuring the reliability of psychiatric diagnosis. Journal of Psychiatric Research, 17(4), 335-342. https://doi.org/10/fbbdfn

Cicchetti, D. V., & Feinstein, A. R. (1990). High agreement but low kappa: II. Resolving the paradoxes. Journal of Clinical Epidemiology, 43(6), 551-558. https://doi.org/10/czkxkb

See Also

Other functions for categorical data: cat_adjusted()

