
Intraclass correlation coefficients (ICCs) – generalized for randomly incomplete datasets

Description

This function computes intraclass correlation coefficients (ICCs) as indices of interrater reliability or agreement based on cardinally scaled data. It also works on (unbalanced) incomplete datasets, without any imputation of missing values (NAs) and without any row-wise or column-wise omission of data. p-values and confidence intervals are provided. In case of extreme input data (e.g. zero variances), output NaNs are avoided by approximation.

Usage

iccNA(ratings, rho0 = 0, conf = 0.95, detail = FALSE, oneG = TRUE, Cs = 10000)
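For example, a minimal call on a small, hypothetical ratings matrix with missing values (the data are made up purely for illustration):

r <- rbind(c(3, 4, NA),
           c(5, NA, 5),
           c(NA, 2, 2),
           c(4, 4, 3),
           c(2, NA, 1))  # 5 objects (rows), 3 raters (columns)
iccNA(r)                 # defaults: rho0 = 0, conf = 0.95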

Arguments

ratings

n*m matrix or data frame; n objects (rows), m raters (columns)

rho0

numeric value; correlation in population (ρ) according to the null hypothesis (0 is default)

conf

numeric value; confidence level (95% is default)

detail

logical; if TRUE (FALSE is default), returns additional information (sums of squares, degrees of freedom, the means per object, data corrected for the raters' biases)

oneG

logical; if TRUE (default), the ipsation (correction for the raters' effects) is done the simple way, using the difference of each rater's mean from the one grand mean (G) of all values to estimate the raters' biases. If FALSE, the weighted sub-means (G_j) of only those objects that an individual rater j rated are used instead (cp. Brueckl, 2011, Equation 4.30).

Cs

numeric value; denominator (10000 is default) of the effect-size criterion used to stop the iterative correction for the raters' biases; the numerator denotes a small effect (η-squared = 1%); see the illustrative call below.
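As an illustration of the non-default settings, a call might look as follows (toy data, made up for this illustration; whether such settings are sensible depends on the study design):

set.seed(1)
r <- matrix(sample(1:10, 24, replace = TRUE), nrow = 6)  # 6 objects, 4 raters
r[cbind(c(1, 3, 5), c(2, 4, 1))] <- NA                   # three missing cells
iccNA(r, rho0 = 0.2, conf = 0.99, detail = TRUE, oneG = FALSE, Cs = 1000)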

Details

This function is able to compute ICCs on randomly incomplete (i.e. unbalanced) datasets. Thus, neither an imputation of missing values (NAs) nor a row-wise or column-wise omission of data is necessary. On complete datasets, it yields the same results as the common functions, e.g. icc_corr.
The method of Ebel (1951) is used to calculate the oneway ICCs. The solution for the twoway ICCs is derived from the oneway solution (cp. Brueckl, 2011, p. 96 ff.): the raters' individual effects (biases) are estimated, which reduces the problem again to the oneway case (cp. Greer & Dunlap, 1997).
This estimation can be based on the difference of a certain rater's (j's) mean either from the grand mean (G) or from the sub-mean (G_j) over only those objects that this rater rated. The first method is fail-safe. The second method is thought to provide the more precise bias estimates the more the mean true values of the objects each rater rated differ from the grand mean, e.g. if some raters rate only objects with low true values (and, consequently, other raters rate only objects with high true values).
If the second method is chosen and the ratings are unbalanced, which is usually the case when not all raters rated all objects, the raters' biases cannot be determined exactly, but they can be approximated as closely as desired. This approximation needs an iteration and thus a stop criterion (Cs): the iteration stops when the difference in the raters' effect size (η-squared) between subsequent iterations is equal to or smaller than the Cs-th part of a small effect (i.e. η-squared = 1%).
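Roughly, the iteration for the sub-mean variant (oneG = FALSE) can be sketched as follows. This is a simplified, plausible reading of the procedure, not the package's actual code; the exact estimator is given by Brueckl (2011, Equation 4.30):

# x: n*m ratings matrix with NAs; Cs as in iccNA (sketch only)
ipsate <- function(x, Cs = 10000) {
  eta2_old <- Inf
  repeat {
    bias <- sapply(seq_len(ncol(x)), function(j) {
      idx <- !is.na(x[, j])                # objects rated by rater j
      Gj  <- mean(x[idx, ], na.rm = TRUE)  # sub-mean over those objects
      mean(x[idx, j]) - Gj                 # rater j's estimated bias
    })
    x <- sweep(x, 2, bias)                 # correct for the biases
    m <- mean(x, na.rm = TRUE)
    ss_tot <- sum((x - m)^2, na.rm = TRUE)
    ss_rat <- sum(colSums(!is.na(x)) *
                  (colMeans(x, na.rm = TRUE) - m)^2)
    eta2 <- ss_rat / ss_tot                # raters' effect size
    # stop once eta-squared changes by no more than the Cs-th
    # part of a small effect (eta-squared = 1%):
    if (abs(eta2_old - eta2) <= 0.01 / Cs) break
    eta2_old <- eta2
  }
  x                                        # bias-corrected ratings
}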

Just as in icc_corr and icc, the designation established by McGraw & Wong (1996) – A for absolute agreement and C for consistency – is used to distinguish between the (twoway) ICCs that rely on different cases and thus must be interpreted differently.

The generalization of the procedure entails a generalization of the three cases that differentiate the ICCs (cp. Shrout & Fleiss, 1979):
- Case 1 (oneway case, treated by ICC(1) and ICC(k)):
Each object – belonging to a sample that was randomly drawn from the population of objects (this also holds for cases 2 and 3) – is rated by (a possibly different number of) different raters, randomly drawn from the population of raters.
- Case 2 (twoway case, treated by ICC(A,1) and ICC(A,k)):
Each object is rated by a random subset of the group of raters that is drawn randomly from the population of raters.
- Case 3 (twoway case, treated by ICC(C,1) and ICC(C,k)):
Each object is rated by a random subset of the group of all relevant (i.e. fixed) raters.

Output NaNs, which usually occur (see e.g. icc or icc_corr) in case of extreme input data (e.g. zero variance(s), within or between objects), are avoided by approximation from slightly less extreme input data. Warning messages are given in these cases.

Value

ICCs

data frame containing the intraclass correlation coefficients, the corresponding p-values, and confidence intervals

n

number of rated objects

k

maximum number of raters per object

amk

mean number of ratings per object

k_0

approximate harmonic mean (cp. Ebel, 1951) of the number of ratings per object

n_iter

number of iterations for correcting for the raters' biases

corr_ratings

ratings, corrected for the individual raters' biases

amO

means of ratings for each object, based on (1) the original data and on (2) the data that are corrected for the raters' biases

oneway

statistics for the oneway ICCs

twoway

statistics for the twoway ICCs
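Assuming these components are returned as elements of the result object (a quick str() call reveals the exact structure), they can be accessed in the usual way:

library(irrNA)
data(Ebel51)                         # example data, see Examples below
res <- iccNA(Ebel51, detail = TRUE)
res$ICCs    # coefficients with p-values and confidence intervals
res$n_iter  # iterations needed for the bias correction
res$twoway  # statistics for the twoway ICCs
str(res)    # overview of all returned components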

Author(s)

Markus Brueckl

References

Brueckl, M. (2011). Statistische Verfahren zur Ermittlung der Urteileruebereinstimmung. In: Altersbedingte Veraenderungen der Stimme und Sprechweise von Frauen, Berlin: Logos, 88–103.

Ebel, R.L. (1951). Estimation of the reliability of ratings. Psychometrika, 16(4), 407–424.

Greer, T., & Dunlap, W.P. (1997). Analysis of variance with ipsative measures. Psychological Methods, 2, 200–207.

McGraw, K.O., & Wong, S.P. (1996). Forming inferences about some intraclass correlation coefficients. Psychological Methods, 1, 30–46.

Shrout, P.E., & Fleiss, J.L. (1979). Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin, 86(2), 420–428.

See Also

kendallNA, icc_corr, icc

Examples

# Example 1:
data(ConsistNA)
# ConsistNA exhibits missing values, a perfect consistency, and 
# a moderate agreement between raters:
ConsistNA
# Common ICC algorithms fail, since each row as well as each 
# column of ConsistNA exhibits unfilled cells, and these missing 
# data are omitted column-wise or row-wise (please install and 
# load the irr package):
#icc(ConsistNA, r0=0.3)
# Ebel's (1951) method for computing ICC(1) and ICC(k), which is 
# implemented in iccNA, can cope with such data without omitting 
# or imputing missing values, but it still cannot depict the 
# raters' interdependency...
iccNA(ConsistNA, rho0=0.3)
# ...but generalizations of Ebel's method for the twoway ICCs 
# are able to assess moderate agreement (ICC(A,1) and ICC(A,k)) 
# and perfect consistency (ICC(C,1) and ICC(C,k)), assuming that 
# the data were acquired under case 2 or case 3 (see Details 
# above).
#
# Example 2:
data(IndepNA)
# IndepNA exhibits missing values and zero variance between 
# the raters just as well as between the objects:
IndepNA
# Again, common ICC algorithms fail (cp. irr package):
#icc(IndepNA)
# But iccNA is able to include all available data in its 
# calculation and thereby to show the perfect independence of 
# the ratings:
iccNA(IndepNA)
#
# Example 3:
# The example provided by Ebel (1951, Tables 2 and 3):
data(Ebel51)
Ebel51
# iccNA manages to include all available ratings and to assess 
# twoway ICCs, assuming that the data were acquired under 
# case 2 or case 3:
iccNA(Ebel51, detail=TRUE)
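#
# Example 4 (an added illustration): on complete datasets, iccNA 
# yields the same results as the common functions (see Details), 
# e.g. icc from the irr package, here with irr's complete demo 
# dataset anxiety (please install and load the irr package):
#library(irr)
#data(anxiety)
#iccNA(anxiety)
#icc(anxiety, model="twoway", type="agreement")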
