interrater_agreement_table    R Documentation
Build an inter-rater agreement report
interrater_agreement_table(
fit,
diagnostics = NULL,
rater_facet = NULL,
context_facets = NULL,
exact_warn = 0.5,
corr_warn = 0.3,
include_precision = TRUE,
top_n = NULL
)
fit: Output from fit_mfrm().
diagnostics: Optional output from diagnose_mfrm().
rater_facet: Name of the rater facet; defaults to NULL.
context_facets: Optional context facets used to match observations for agreement; defaults to NULL.
exact_warn: Warning threshold for exact agreement (default 0.5); pairs below it are flagged.
corr_warn: Warning threshold for pairwise correlation (default 0.3); pairs below it are flagged.
include_precision: If TRUE (the default), precision information is included in the output.
top_n: Optional maximum number of pair rows to keep.
This helper computes pairwise rater agreement on matched contexts and returns both a pair-level table and a one-row summary. The output is package-native and does not require knowledge of legacy report numbering.
A named list with:
summary: one-row inter-rater summary (overall agreement level, number and share of flagged pairs).
pairs: pair-level agreement table (pairwise exact agreement, correlation, and direction/size gaps).
settings: applied facet-matching options and warning thresholds.
Pairs flagged by both low exact agreement and low correlation generally deserve the highest calibration priority.
Recommended workflow:
1. Run with an explicit rater_facet (and context_facets if needed).
2. Review summary(ir) and the top flagged rows in ir$pairs.
3. Visualize with plot_interrater_agreement().
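The workflow above can be sketched as a call with explicit matching and thresholds. This is an illustrative sketch only: the argument names come from the Usage block, while the facet names ("Rater", "Person", "Criterion") and the threshold values are assumptions borrowed from the example data; substitute your own.

```r
## Hedged sketch: explicit context matching and stricter-than-default
## warning thresholds. Facet names are illustrative.
ir <- interrater_agreement_table(
  fit,
  rater_facet    = "Rater",
  context_facets = c("Person", "Criterion"),  # match scores on shared contexts
  exact_warn     = 0.6,  # flag pairs with exact agreement below 60%
  corr_warn      = 0.4,  # flag pairs with correlation below 0.4
  top_n          = 10    # keep at most ten pair rows
)
head(ir$pairs)
```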
The pairs data.frame contains, per rater pair:
- Rater pair identifiers.
- Number of matched-context observations for the pair.
- Exact: proportion of exact score agreements.
- Expected exact agreement under chance.
- Proportion of adjacent (+/- 1 category) agreements.
- Signed mean score difference (Rater1 - Rater2).
- Mean absolute score difference.
- Corr: Pearson correlation between the paired scores.
- Logical flag; TRUE when Exact < exact_warn or Corr < corr_warn.
- Raw counts behind the agreement proportions.
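For one rater pair, the statistics listed above can be computed in base R as follows. This is an illustrative reimplementation under assumed data, not the package's internal code:

```r
## Toy scores for one rater pair on six matched contexts (assumed data).
s1 <- c(3, 4, 2, 5, 3, 4)  # Rater1
s2 <- c(3, 3, 2, 5, 4, 4)  # Rater2

exact    <- mean(s1 == s2)           # proportion of exact agreements
adjacent <- mean(abs(s1 - s2) <= 1)  # within +/- 1 category
diff_m   <- mean(s1 - s2)            # signed mean difference (Rater1 - Rater2)
diff_abs <- mean(abs(s1 - s2))       # mean absolute difference
corr     <- cor(s1, s2)              # Pearson correlation
flagged  <- exact < 0.5 || corr < 0.3  # default warning thresholds
```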
The summary data.frame contains:
- Name of the rater facet analyzed.
- Number of rater pairs evaluated.
- Mean exact agreement across all pairs.
- Observed exact agreement minus expected exact agreement.
- Mean pairwise correlation.
- Count and proportion of flagged pairs.
- Severity-spread indices for the rater facet, reported separately from agreement.
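The aggregation from the pair table to the one-row summary can be sketched in base R. The column names and values below are hypothetical placeholders, not the package's actual output layout:

```r
## Assumed pair-level results (hypothetical values and column names).
pairs <- data.frame(
  Exact   = c(0.70, 0.45, 0.62),
  Corr    = c(0.55, 0.25, 0.40),
  Flagged = c(FALSE, TRUE, FALSE)
)

## One-row roll-up mirroring the summary fields described above.
summary_row <- data.frame(
  n_pairs      = nrow(pairs),
  mean_exact   = mean(pairs$Exact),   # mean exact agreement across pairs
  mean_corr    = mean(pairs$Corr),    # mean pairwise correlation
  n_flagged    = sum(pairs$Flagged),  # count of flagged pairs
  prop_flagged = mean(pairs$Flagged)  # share of flagged pairs
)
```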
See also: diagnose_mfrm(), facets_chisq_table(), plot_interrater_agreement(), mfrmr_visual_diagnostics.
# Fit a small example model, then build the agreement report.
toy <- load_mfrmr_data("example_core")
fit <- fit_mfrm(toy, "Person", c("Rater", "Criterion"), "Score", method = "JML", maxit = 25)
ir <- interrater_agreement_table(fit, rater_facet = "Rater")
summary(ir)
# Build the plot object without drawing it.
p_ir <- plot(ir, draw = FALSE)
class(p_ir)