r "Column {data-width=600}"

r "Measurement {.no-title}"

What is the ICC?

The ICC is the intraclass correlation coefficient. It provides a way to measure the correlation of data within measurement units (classes). For interrater reliability checks, it estimates how similar raters' scores are within each measurement unit. See the interpretation section below for a longer description of the ICC.

What settings should I choose?

Repeated raters (two-way) vs. unique raters (one-way)

In the one-way model, the data are repeated or grouped in just one way (participants who have multiple ratings). @hallgren2012 elaborates:

If a different set of coders is randomly selected from a larger population of coders for each subject then the researcher must use a one-way model. This is called “one-way” because the new random sample of coders for each subject prevents the ICC from accounting for systematic deviations due to specific coders [...] or two-way coder × subject interactions [...].

In the two-way case, where some raters evaluate multiple participants, there is another decision to make: whether the estimated reliability should generalize to new raters (two-way random) or the raters should be treated as fixed (two-way mixed). This app does not support the latter option, but the psych::ICC() function does.

In general, if a new person---a new student, clinician, etc.---could be trained to perform the rating task, the two-way random model is a reasonable default. See how Koo and Li [-@koo2016] emphasize generalizability for two-way random models:

If we randomly select our raters from a larger population of raters with similar characteristics, 2-way random-effects model is the model of choice. In other words, we choose 2-way random-effects model if we plan to generalize our reliability results to any raters who possess the same characteristics as the selected raters in the reliability study.

Single rating vs. average rating

Yoder and Symons [-@2010observational] spell out the difference:

The "single measure" ICC is the interobserver reliability estimate for a single observer for the relevant variable. [...] [The] "average measure" ICC is that for the average of the observers' estimates for the variable. It is only appropriate to use the latter when the investigator has all sessions coded by more than one observer and uses the average score across observers as the variable score to answer the research question. [p. 177]

Note that changing from single-rating to average-rating reliability means that we cannot talk about single scores being reliable---we only know about the reliability of the averages. Shrout and Fleiss [-@icc1979], in their landmark survey of ICC types, illustrate this point:

Sometimes the choice of a unit of analysis causes a conflict between reliability considerations and substantive interpretations. A mean of *k* ratings might be needed for reliability, but the generalization of interest might be individuals. For example, Bayes (1972) desired to relate ratings of interpersonal warmth to nonverbal communication variables. She reported the reliability of the warmth ratings based on the judgments of 30 observers on 15 targets. Because the rating variable that she related to the other variables was the mean rating over all 30 observers, she correctly reported the reliability of the mean ratings. With this index, she found that her mean ratings were reliable to .90. When she interpreted her findings, however, she generalized to single observers, not to other groups of 30 observers. This generalization may be problematic, since the reliability of the individual ratings was less than .30---a value the investigator did not report. In such a situation in which the unit of analysis is not the same as the unit generalized to, it is a good idea to report the reliabilities of both units.

Agreement vs. consistency

Suppose you have two teachers, and one of them has a reputation of being a hard grader. They both grade the same five students. The hard grader gives scores of 60%, 68%, 78%, 80%, 85%, and the other teacher gives scores of 80%, 88%, 98%, 100%, 100%. They give the students the same rankings, but they differ in their average score. Because the teachers each rate more than one student, this is a two-way model, and because in most contexts each student only ever receives one grade on an assignment, we want to know single-score reliability. The consistency-based ICC is ICC(C,1) = .97, so the teachers are almost perfectly reliable at ranking the students. The absolute-agreement-based ICC is ICC(A,1) = .32, so it is very difficult to compare ratings between the teachers. To interpret a score, you would need to know who the teacher was, or you would want the teachers to grade on a curve (i.e., renormalize the scores to remove each teacher's average).
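
To make the numbers concrete, here is a minimal sketch of how you could compute both estimates with psych::ICC(); the matrix layout (one row per student, one column per teacher) is an assumption about how you would arrange these data:

```r
library(psych)

# One row per student, one column per teacher (hard grader first)
grades <- cbind(
  hard_grader   = c(60, 68, 78, 80, 85),
  other_teacher = c(80, 88, 98, 100, 100)
)

# psych::ICC() reports all of the Shrout and Fleiss estimates at once.
# ICC3 is the single-score consistency estimate, ICC(C,1) (about .97 here),
# and ICC2 is the single-score absolute-agreement estimate, ICC(A,1) (about .32).
ICC(grades)
```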

In the one-way model, where every rating is done by a unique rater, there is no way to assess whether one rater is a consistently harder scorer than another. The only differences we can measure are the absolute differences between ratings. Therefore, only absolute agreement is available for one-way models.

Examples from our work

Intelligibility ratings. We have naive listeners transcribe recordings of children. Each child is transcribed by two unique listeners; listeners only ever hear one child. We combine these transcriptions into a single score. This situation requires a one-way, agreement-based, average rating ICC.

Language sample coding. We have two students in the lab transcribe interactions between a parent and child. We want to know whether the word counts or utterance counts are similar between transcribers. As a reliability check, both students transcribe the same subset of data, but the eventual analysis on the larger data will use just one transcription per child. This situation requires a two-way, agreement-based, single rating ICC.

Generally speaking, for an interrater reliability "check" situation---where multiple raters score a subset of the overall data but most of the data was scored by just one rater---use single rating.
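
As a rough sketch, here is how those two example settings map onto the output of psych::ICC(), assuming a hypothetical participants-by-raters matrix called `scores` from your reliability subset:

```r
library(psych)

# `scores`: one row per participant, one column per rater (hypothetical data)
icc_results <- ICC(scores)$results

# Intelligibility example: one-way, agreement, average rating      -> ICC1k
# Language sample example: two-way random, agreement, single rating -> ICC2
icc_results[icc_results$type %in% c("ICC1k", "ICC2"), ]
```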

Interpreting ICC scores

We are estimating the similarity of repeated measurements

The ICC is the intraclass correlation coefficient. It provides a way to measure the correlation of data within measurement units (classes). For example, suppose we give the same assessment to the same 10 children on three occasions. In general, we would want the scores to be correlated within each child, so that a child attains a similar score on each occasion. The ICC estimates this correlation.

Now, let's change the example from children tested three times to children who visit a research lab once but have their data scored by three different raters (or judges or coders). Then the ICC would measure how similar the scores are within children. If scores are very similar within children, then the differences between judges are small and the judges have high agreement with each other. This is how ICC works as a measure of interrater reliability.

It compares variation in participants to the overall variation in ratings

The ICC shows up frequently in the literature on multilevel or repeated measurement data. Think of children nested in different classrooms or experimental trials nested in a participant. I mention this context because that's the frame of reference for the texts I quote from.

Snijders and Bosker [-@1999multilevel], thinking about individuals nested in groups, provide two interpretations:

[The ICC] can be interpreted in two ways: it is the correlation between two randomly drawn individuals in one randomly drawn group [i.e., two randomly drawn measures from the same, randomly selected measurement unit], and it is also the fraction of total variability that is due to the group level. [p. 48]

In general, the second interpretation is the more common one, and most definitions of ICC talk about the variation between groups versus the variation within groups. So, when Snijders and Bosker say "fraction of total variability", they have a fraction like the following in mind:

r "$$" \ \frac{\text{between-group variation}}{\text{total variation}} = \frac{\text{between-group variation}}{\text{between-group variation} + \text{within-group variation}} r "$$" \

In the context of interrater reliability, the groups are the participants who are being rated. Between-group variation is how the participants differ from each other, and within-group variation is how the ratings differ within the participants (i.e., between raters).

Technically, the actual fractions to compute ICC scores are more involved than the one above, as they account for between-rater variation. Still, Yoder and Symons [-@2010observational] support the ICC-as-a-proportion interpretation for interrater reliability:

[The] conceptual meaning of the ICC when measuring interobserver agreement is the proportion of the variability in the reliability sample that is due to between-participant variance in true score estimates of the behavior of interest (Shavelson & Webb, 1991). [p. 175]
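
As a rough illustration of this proportion-of-variance reading, here is a minimal sketch using a one-way multilevel model, assuming the lme4 package and a hypothetical long-format data frame `ratings` with one row per rating and columns `participant` and `score`:

```r
library(lme4)

# Participants as the grouping variable; each row is one rater's score
fit <- lmer(score ~ 1 + (1 | participant), data = ratings)
vc  <- as.data.frame(VarCorr(fit))

between <- vc$vcov[vc$grp == "participant"]  # between-participant variance
within  <- vc$vcov[vc$grp == "Residual"]     # within-participant (between-rater) variance

# ICC as the proportion of total variance due to participants
between / (between + within)
```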

Kreft and De Leeuw [-@kreft1998introducing], thinking about individuals nested in groups, do a good job explaining what it means for between-group variation to be low compared to within-group variation; that is, what a low ICC means:

If this intra-class correlation is high, groups are homogeneous and/or very different from each other. [...] In general, it is true that if this intra-class correlation is low, groups are only slightly different from each other. If the intra-class correlation is so low that it is equal to zero, no group differences exist for the variables of interest. People from the same group are as different from each other on these variables as people across groups are. A zero intra-class correlation means that clustering of the data has no consequences for the relationship between the variables of interest [...]. [p. 4]

If your reliability check is showing low ICC scores, the differences between the judges' ratings are so large that they might as well be comparing ratings for different participants.


References

r "Column {data-width=300}"


