pairwise_comparison_one_group (R Documentation)

View source: R/pairwise-comparisons.R
Description

This function performs the pairwise comparison for one set of forecasts involving multiple models. It is called from get_pairwise_comparisons(), which splits the data into arbitrary subgroups specified by the user (e.g. if pairwise comparisons should be done separately for different forecast targets). The pairwise comparison for each subgroup is then managed by pairwise_comparison_one_group(). To carry out the actual comparison between two models over a subset of common forecasts, it calls compare_two_models().
Usage

pairwise_comparison_one_group(scores, metric, baseline, by, ...)
Arguments

scores
    An object of class scores (a data.table of scores, as produced by score()).

metric
    A string with the name of the metric for which a relative skill shall be computed. By default this is either "crps", "wis" or "brier_score", whichever of these is available.

baseline
    A string with the name of a model. If a baseline is given, then a scaled relative skill with respect to the baseline will be returned. By default (NULL), relative skill will not be scaled with respect to a baseline model.

by
    Character vector with column names that define the grouping level for the pairwise comparisons. By default ("model"), there will be one relative skill score per model.

...
    Additional arguments for the comparison between two models, passed on to compare_two_models().
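Conceptually, the comparison within one subgroup iterates over all pairs of models and computes the ratio of their mean scores on the forecasts both models have in common. The following base-R sketch is illustrative only: the function name pairwise_sketch and the target column are assumptions for this example, not the package's actual implementation.

```r
# Illustrative sketch (not the package's real code): compare every pair of
# models on their overlapping forecasts within one subgroup.
pairwise_sketch <- function(scores, metric = "crps") {
  models <- unique(scores$model)
  pairs <- utils::combn(models, 2, simplify = FALSE)
  do.call(rbind, lapply(pairs, function(p) {
    a <- scores[scores$model == p[1], ]
    b <- scores[scores$model == p[2], ]
    # restrict both models to the forecast targets they have in common
    common <- intersect(a$target, b$target)
    a <- a[a$target %in% common, ]
    b <- b[b$target %in% common, ]
    data.frame(
      model             = p[1],
      compare_against   = p[2],
      mean_scores_ratio = mean(a[[metric]]) / mean(b[[metric]])
    )
  }))
}
```

In the package itself, the per-pair step (including p-values from a permutation test) is handled by compare_two_models(); the sketch above only shows the mean score ratio.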
Value

A data.table with the results of pairwise comparisons containing the mean score ratios (mean_scores_ratio), unadjusted (pval) and adjusted (adj_pval) p-values, and relative skill values of each model (..._relative_skill). If a baseline model is given, the scaled relative skill is reported as well (..._scaled_relative_skill).