boot.many2 | R Documentation |
This function performs Hotelling's T square test, using a variance-covariance matrix obtained by the bootstrap method, to compare dependent multi-observer kappa coefficients.
boot.many2(cluster_id, data, group, method = "fleiss", a.level = 0.05, ITN = 1000, summary_k = TRUE)
cluster_id |
a vector of length N with the identification numbers of the clusters |
data |
an N x R matrix representing the classification of the N items by the R observers. |
group |
a vector with G elements indicating how many observers are considered to compute each kappa coefficient. For example, c(3,5) means that a kappa coefficient between the first 3 observers and a kappa coefficient between the last 5 observers will be computed. |
method |
the type of kappa to be computed: 'light' for Light's kappa coefficient, 'conger' for Conger's kappa coefficient and 'fleiss' for Fleiss' kappa coefficient. In the case of Fleiss' kappa, the function works only if each item is rated by the same number of observers. |
a.level |
the significance level |
ITN |
the number of bootstrap iterations |
summary_k |
if TRUE, Hotelling's T square test is performed; if FALSE, only the bootstrapped coefficients are returned |
This function compares several kappa coefficients for several observers using Hotelling's T square test with the variance-covariance matrix obtained by the bootstrap method. If only one kappa is computed, it returns the estimate and its confidence interval.
$kappa a G x 2 matrix with the kappa coefficients in the first column and their corresponding standard errors in the second column
$T_test a vector of length 2 with the value of Hotelling's T test statistic as first element and the corresponding p-value as second element
$confidence confidence intervals for the pairwise comparisons of the measures
$cor the G x G correlation matrix for the kappa coefficients
$K when summary_k is false, the ITN x G matrix with the bootstrapped kappa coefficients is returned
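As a hedged sketch (not part of the official documentation), the ITN x G matrix returned when summary_k = FALSE could be summarized by hand with percentile bootstrap intervals. This assumes the depression data and column names used in the Examples section, and that the function returns the matrix in the $K component as described above:

```r
# Sketch: obtain raw bootstrapped kappas and build percentile intervals.
# Assumes the 'depression' dataset and variables from the Examples section.
data(depression)
attach(depression)
res <- boot.many2(data = cbind(diag, BDI, GHQ, BDI, GHQ), cluster_id = ID,
                  method = "light", group = c(3, 2), summary_k = FALSE)
K <- res$K                                       # ITN x G bootstrapped kappas

# Percentile 95% confidence interval for each coefficient
apply(K, 2, quantile, probs = c(0.025, 0.975))

# Percentile interval for the difference between the two coefficients;
# an interval excluding 0 suggests the kappas differ
quantile(K[, 1] - K[, 2], probs = c(0.025, 0.975))
```

With summary_k = TRUE the function performs this comparison itself via Hotelling's T square test, so the manual computation above is only needed for custom summaries of the bootstrap distribution.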
Sophie Vanbelle sophie.vanbelle@maastrichtuniversity.nl
Conger A.J. (1980). Integration and generalization of kappas for multiple raters. Psychological Bulletin 88, 322-328.
Fleiss J.L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin 76, 378-382.
Light R.J. (1971). Measures of response agreement for qualitative data: some generalizations and alternatives. Psychological Bulletin 76, 365-377.
Vanbelle S. and Albert A. (2008). A bootstrap method for comparing correlated kappa coefficients. Journal of Statistical Computation and Simulation, 1009-1015.
Vanbelle S. Comparing dependent agreement coefficients obtained on multilevel data. Submitted.
# dataset (not multilevel) (Vanbelle and Albert, 2008)
data(depression)
attach(depression)
a <- boot.many2(data = cbind(diag, BDI, GHQ, BDI, GHQ), cluster_id = ID,
                method = "light", group = c(3, 2), summary_k = TRUE)