This update represents a major rewrite of the package and introduces breaking changes. If you want to keep using the older version, you can download it using `remotes::install_github("epiforecasts/scoringutils@v1.2")`.
The update aims to make the package more modular and customisable, and overall cleaner and easier to work with. In particular, we aimed to make the suggested workflows for evaluating forecasts more explicit and easier to follow (see visualisation below). To do that, we clarified input formats and made them consistent across all functions. We refactored many functions into S3 methods and introduced `forecast` objects with separate classes for different types of forecasts. A new function, `as_forecast()`, was introduced to validate the data and convert inputs into a `forecast` object. Another major update is the possibility for users to pass their own scoring functions to `score()`. We updated and improved all function documentation and added new vignettes to guide users through the package. Internally, we refactored the code, improved input checks, updated notifications (which now use the `cli` package) and increased test coverage.
The most comprehensive documentation for the new package after the rewrite is the revised version of our original `scoringutils` paper.
### Changes to `score()`
- `score()` remains the main function of the package. However, we reworked the function and updated and clarified its input requirements:
  - `score()` now requires columns called "observed", "predicted" and "model". The column `quantile` was renamed to `quantile_level` and `sample` was renamed to `sample_id`.
  - `score()` is now a generic. It has S3 methods for the classes `forecast_point`, `forecast_binary`, `forecast_quantile` and `forecast_sample`, which correspond to the different forecast types that can be scored with `scoringutils`.
  - `score()` now calls `na.omit()` on the data, instead of only removing rows with missing values in the columns `observed` and `predicted`. This is because `NA` values in other columns can also mess up e.g. the grouping of forecasts according to the unit of a single forecast.
  - `score()` and many other functions now require a validated `forecast` object. `forecast` objects can be created using the function `as_forecast()` (which replaces the previous `check_forecast()`).
  - `score()` now returns objects of class `scores` with a stored attribute `metrics` that holds the names of the scoring rules that were used. Users can call `get_metrics()` to access the names of those scoring rules.
  - `score()` now returns one score per forecast, instead of one score per sample or quantile.
- Users can now pass their own scoring rules to `score()` (via the `metrics` argument, which takes in a named list of functions). Default scoring rules can be accessed using the functions `metrics_point()`, `metrics_sample()`, `metrics_quantile()` and `metrics_binary()`, which return a named list of scoring rules suitable for the respective forecast type. Column names of scores in the output of `score()` correspond to the names of the scoring rules (i.e. the names of the functions in the list of metrics).
- Instead of passing arguments to `score()` to manipulate individual scoring rules, users should now manipulate the metric list being supplied using `customise_metric()` and `select_metric()`.
- `as_forecast()` creates a forecast object and validates it. `as_forecast()` also allows users to rename/specify required columns and specify the forecast unit in a single step, taking over the functionality of `set_forecast_unit()` in most cases.
- Overall, we updated the suggested workflows for how users should work with the package. The following gives an overview (see the [updated paper](https://drive.google.com/file/d/1URaMsXmHJ1twpLpMl1sl2HW4lPuUycoj/view?usp=drive_link) for more details).
![package workflows](./man/figures/workflow.png)
### Input formats
- We standardised input formats both for `score()` as well as for the scoring rules exported by `scoringutils`. The following plot gives an overview of the expected input formats for the different forecast types in `score()`.
![input formats](./man/figures/required-inputs.png)
- Support for the interval format was mostly dropped (see PR #525 by @nikosbosse and reviewed by @seabbs). The co-existence of the quantile and interval format led to a confusing user experience with many duplicated functions providing the same functionality. We decided to simplify the interface by focusing exclusively on the quantile format.
- The function `bias_range()` was removed (users should now use `bias_quantile()` instead).
- The function `interval_score()` was made an internal function rather than being exported to users. We recommend using `wis()` instead.
### (Re-)Validating forecast objects
- To create and validate a new `forecast` object, users can use `as_forecast()`. To revalidate an existing `forecast` object, users can call `assert_forecast()` (which validates the input and returns `invisible(NULL)`). `assert_forecast()` is a generic with methods for the different forecast types. Alternatively, `validate_forecast()` can be used (which calls `assert_forecast()`); it returns the input and is useful in a pipe. Lastly, users can simply print the object to obtain additional information.
- Users can test whether an object is of class `forecast_*()` using the function `is_forecast()`. Users can also test for a specific `forecast_*` class using the appropriate `is_forecast.forecast_*` method. For example, to check whether an object is of class `forecast_quantile`, you would use `scoringutils:::is_forecast.forecast_quantile()`.
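As a brief sketch of how these checks fit together (assuming the bundled `example_quantile` data and the `assert_forecast()`/`validate_forecast()` behaviour described above):

```r
library(scoringutils)

forecast <- as_forecast(example_quantile)

# assert_forecast() errors if the object is invalid,
# otherwise it returns invisible(NULL)
assert_forecast(forecast)

# validate_forecast() returns its input, so it works inside a pipe
forecast |>
  validate_forecast() |>
  score()

# Check whether the object is a forecast object at all
is_forecast(forecast)
```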
### Pairwise comparisons and relative skill
- The functionality for computing pairwise comparisons has been split out of `summarise_scores()`. Instead of doing pairwise comparisons as part of summarising scores, a new function, `add_relative_skill()`, was introduced that takes summarised scores as an input and adds columns with relative skill scores and scaled relative skill scores.
- The function `pairwise_comparison()` was renamed to `get_pairwise_comparisons()`, in line with other `get_`-functions. Analogously, `plot_pairwise_comparison()` was renamed to `plot_pairwise_comparisons()`.
- Output columns for pairwise comparisons have been renamed to contain the name of the metric used for comparing.
- Replaced warnings with errors in `get_pairwise_comparisons()` to avoid returning `NULL`.
### Computing coverage values
- `add_coverage()` was replaced by a new function, `get_coverage()`. This function comes with an updated workflow where coverage values are computed directly based on the original data and can then be visualised using `plot_interval_coverage()` or `plot_quantile_coverage()`. An example workflow would be `example_quantile |> as_forecast() |> get_coverage(by = "model") |> plot_interval_coverage()`.
### Obtaining and plotting forecast counts
- The function `avail_forecasts()` was renamed to `get_forecast_counts()`. This represents a change in the naming convention where we aim to name functions that provide the user with additional useful information about the data with a prefix "get_". See Issues #403 and #521 and PR #511 by @nikosbosse (reviewed by @seabbs) for details.
- For clarity, the output column in `get_forecast_counts()` was renamed from "Number forecasts" to "count".
- `get_forecast_counts()` now also displays combinations where there are 0 forecasts, instead of silently dropping corresponding rows.
- `plot_avail_forecasts()` was renamed `plot_forecast_counts()` in line with the change in the function name. The `x` argument no longer has a default value, as the value will depend on the data provided by the user.
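For illustration, a forecast-count workflow might now look like this (a sketch assuming the bundled `example_quantile` data; note that `x` must be set explicitly):

```r
library(scoringutils)

example_quantile |>
  as_forecast() |>
  # count forecasts per model and target type,
  # keeping combinations with 0 forecasts
  get_forecast_counts(by = c("model", "target_type")) |>
  plot_forecast_counts(x = "model")
```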
### Renamed functions
- The function `find_duplicates()` was renamed to `get_duplicate_forecasts()`.
- Renamed `interval_coverage_quantile()` and `interval_coverage_dev_quantile()` to `interval_coverage()` and `interval_coverage_deviation()`, respectively.
- "range" was consistently renamed to "interval_range" in the code. The "range" format (which was mostly used internally) was renamed to the "interval" format.
- Renamed `correlation()` to `get_correlations()` and `plot_correlation()` to `plot_correlations()`.
- `pit()` was renamed to `get_pit()`.
### Deleted functions
- Removed `abs_error()` and `squared_error()` from the package in favour of `Metrics::ae` and `Metrics::se`.
- Deleted the function `plot_ranges()`. If you want to continue using the functionality, you can find the function code [here](https://github.com/epiforecasts/scoringutils/issues/462) or in the Deprecated-visualisations Vignette.
- Removed the function `plot_predictions()`, as well as its helper function `make_NA()`, in favour of a dedicated Vignette that shows different ways of visualising predictions. For future reference, the function code can be found [here](https://github.com/epiforecasts/scoringutils/issues/659) (Issue #659) or in the Deprecated-visualisations Vignette.
- Removed the function `plot_score_table()`. You can find the code in the Deprecated-visualisations Vignette.
- Removed the function `merge_pred_and_obs()` that was used to merge two separate data frames with forecasts and observations. We moved its contents to a new "Deprecated functions"-vignette.
- Removed `interval_coverage_sample()` as users are now expected to convert to a quantile format first before scoring.
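As a sketch, a sample-based forecast could be converted to the quantile format before scoring (assuming the continuous example data shipped with the package and the `sample_to_quantile()` helper; exact data set and argument names may vary between versions):

```r
library(scoringutils)

example_continuous |>
  # summarise predictive samples into a set of quantiles
  sample_to_quantile(quantiles = c(0.05, 0.25, 0.5, 0.75, 0.95)) |>
  as_forecast() |>
  score()
```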
### Function changes
- `bias_quantile()` changed the way it handles forecasts where the median is missing: The median is now imputed by linear interpolation between the innermost quantiles. Previously, we imputed the median by simply taking the mean of the innermost quantiles.
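A hypothetical illustration of this change (the quantile levels and values below are made up; argument names follow the new input format described above):

```r
library(scoringutils)

# A quantile forecast without a median:
# the innermost quantile levels are 0.4 and 0.7
predicted <- c(1, 2, 4, 6)
quantile_level <- c(0.2, 0.4, 0.7, 0.9)

# The median is now imputed by linear interpolation between the
# innermost quantiles (here: between 2 at level 0.4 and 4 at level 0.7),
# rather than by taking their mean
bias_quantile(
  observed = 3,
  predicted = predicted,
  quantile_level = quantile_level
)
```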
### Internal package updates
- The deprecated `..density..` was replaced with `after_stat(density)` in ggplot calls.
- Files ending in ".Rda" were renamed to ".rds" where appropriate when used together with `saveRDS()` or `readRDS()`.
- Added a subsetting `[` operator for scores, so that the score name attribute gets preserved when subsetting.
- Switched to using `cli` for condition handling and signalling, and added tests for all the `check_*()` and `test_*()` functions. See #583 by @jamesmbaazam and reviewed by @nikosbosse and @seabbs.
### Documentation and testing
- Updated documentation for most functions and made sure all functions have documented return values.
- Documentation pkgdown pages are now created both for the stable and dev versions.
- Added unit tests for many functions
# scoringutils 1.2.2
## Package updates
- `scoringutils` now depends on R 3.6. The change was made because the packages `testthat` and `lifecycle`, which are used in `scoringutils`, now require R 3.6. We also updated the GitHub Actions CI check to work with R 3.6.
- Added a new PR template with a checklist of things to be included in PRs to facilitate the development and review process
## Bug fixes
- Fixed a bug in `set_forecast_unit()` where the function only worked with a data.table, but not a data.frame, as an input.
- The metrics table in the vignette [Details on the metrics implemented in `scoringutils`](https://epiforecasts.io/scoringutils/articles/metric-details.html) had duplicated entries. This was fixed by removing the duplicated rows.
# scoringutils 1.2.1
## Package updates
- This minor update fixes a few issues related to GitHub Actions and the vignettes displayed at epiforecasts.io/scoringutils. It:
- Gets rid of the `preferably` package in `_pkgdown.yml`. The theme had a toggle between light and dark mode that didn't work properly
- Updates the gh pages deploy action to v4 and also cleans up files when triggered
- Introduces a gh action to automatically render the Readme from Readme.Rmd
- Removes links to vignettes that have been renamed
# scoringutils 1.2.0
This major release contains a range of new features and bug fixes that have been introduced in minor releases since `1.1.0`. The most important changes are:
- Documentation updated to reflect changes since version 1.1.0, including new transform and workflow functions.
- New `set_forecast_unit()` function allows manual setting of forecast unit.
- `summarise_scores()` gains new `across` argument for summarizing across variables.
- New `transform_forecasts()` and `log_shift()` functions allow forecast transformations. See the documentation for `transform_forecasts()` for more details and an example use case.
- Input checks and test coverage improved for bias functions.
- Bug fix in `get_prediction_type()` for integer matrix input.
- Links to scoringutils paper and citation updates.
- Warning added in `interval_score()` for small interval ranges.
- Linting updates and improvements.
Thanks to @nikosbosse, @seabbs, and @sbfnk for code and review contributions. Thanks to @bisaloo for the suggestion to use a linting GitHub Action that only triggers on changes, and @adrian-lison for the suggestion to add a warning to `interval_score()` if the interval range is between 0 and 1.
## Package updates
- The documentation was updated to reflect the recent changes since `scoringutils 1.1.0`. In particular, usage of the functions `set_forecast_unit()`, `check_forecasts()` and `transform_forecasts()` is now documented in the vignettes. The introduction of these functions enhances the overall workflow and helps to make the code more readable. All functions are designed to be used together with the pipe operator. For example, one can now use something like the following:

```r
example_quantile |>
  set_forecast_unit(c("model", "location", "forecast_date", "horizon", "target_type")) |>
  check_forecasts() |>
  score()
```

Documentation for `transform_forecasts()` has also been extended. This function allows the user to easily add transformations of forecasts, as suggested in the paper "Scoring epidemiological forecasts on transformed scales". In an epidemiological context, for example, it may make sense to apply the natural logarithm before scoring forecasts, in order to obtain scores that reflect how well models are able to predict exponential growth rates, rather than absolute values. Users can now do something like the following to score a transformed version of the data in addition to the original one:

```r
data <- example_quantile[true_value > 0, ]
data |>
  transform_forecasts(fun = log_shift, offset = 1) |>
  score() |>
  summarise_scores(by = c("model", "scale"))
```

Here we use the `log_shift()` function to apply a logarithmic transformation to the forecasts. This function was introduced in `scoringutils 1.1.2` as a helper function that acts just like `log()`, but has an additional argument `offset` that adds a number to every prediction and observed value before applying the log transformation.
- Made `check_forecasts()` and `score()` pipeable (see issue #290). This means that users can now directly use the output of `check_forecasts()` as input for `score()`. As `score()` otherwise runs `check_forecasts()` internally anyway, this simply makes the step explicit and helps writing clearer code.
- Release by @seabbs in #305. Reviewed by @nikosbosse and @sbfnk.
- The `prediction_type` argument of `get_forecast_unit()` has been dropped. Instead, a new internal function `prediction_is_quantile()` is used to detect if a quantile variable is present. Whilst this is an internal function, it may impact some users as it is accessible via `find_duplicates()`.
- Made the warnings issued by `bias_range()` and `bias_quantile()` more obvious to the user, as this may cause unexpected behaviour.
- Simplified `bias_range()` so that it uses `bias_quantile()` internally.
- Added input checks to `bias_range()`, `bias_quantile()`, and `check_predictions()` to make sure that the input is valid.
- Improved testing of `bias_range()`, `bias_quantile()`, and `bias_sample()`.
- Fixed a bug in `get_prediction_type()` which led to it being unable to correctly detect integer forecasts (instead categorising them as continuous) when the input was a matrix. This issue impacted `bias_sample()` and also `score()` when used with integer forecasts, resulting in lower bias scores than expected.
- Added a new argument, `across`, to `summarise_scores()`. This argument allows the user to summarise scores across different forecast units as an alternative to specifying `by`. See the documentation for `summarise_scores()` for more details and an example use case.
- Added a new function, `set_forecast_unit()`, that allows the user to set the forecast unit manually. The function removes all columns that are not relevant for uniquely identifying a single forecast. If not done manually, `scoringutils` attempts to determine the unit of a single forecast automatically by simply assuming that all column names are relevant to determine the forecast unit. This can lead to unexpected behaviour, so setting the forecast unit explicitly can help make the code easier to debug and easier to read (see issue #268). When used as part of a workflow, `set_forecast_unit()` can be directly piped into `check_forecasts()` to check everything is in order.
- Added a warning to `interval_score()` if the interval range is between 0 and 1. Thanks to @adrian-lison (see #277) for the suggestion.
- Added references to the `epinowcast` package.
- Added a new function, `transform_forecasts()`, to make it easy to transform forecasts before scoring them, as suggested in Bosse et al. (2023), https://www.medrxiv.org/content/10.1101/2023.01.23.23284722v1.
- Added a function, `log_shift()`, that implements the default transformation function. The function allows adding an offset before applying the logarithm.
- Added a small change to `interval_score()` which explicitly converts the logical vector to a numeric one. This should happen implicitly anyway, but is now done explicitly in order to avoid issues that may come up if the input vector has a type that doesn't allow the implicit conversion.
- A minor update to the package with some bug fixes and minor changes since version `1.0.0`.
- Renamed the `metric` argument of `summarise_scores()` to `relative_skill_metric`. The old argument is now deprecated and will be removed in a future version of the package. Please use the new argument instead.
- Updated the documentation for `score()` and related functions to make the soft requirement for a `model` column in the input data more explicit.
- Updated the documentation for `score()`, `pairwise_comparison()` and `summarise_scores()` to make it clearer what the unit of a single forecast is that is required for computations.
- Simplified `plot_pairwise_comparison()`, which now only supports plotting mean score ratios or p-values, and removed the hybrid option to print both at the same time.
- Invalid inputs to `pairwise_comparison()` now trigger an explicit and informative error message.
- A `sample` column is now ignored when using a quantile forecast format. Previously this resulted in an error.

Major update to the package and most package functions with lots of breaking changes.
- The function `eval_forecasts()` was replaced by a function `score()` with a much reduced set of function arguments.
- Functionality to summarise scores was moved to a separate function, `summarise_scores()`.
- New function `check_forecasts()` to analyse input data before scoring.
- New function `correlation()` to compute correlations between different metrics.
- New function `add_coverage()` to add coverage for specific central prediction intervals.
- New function `avail_forecasts()` allows to visualise the number of available forecasts.
- New function `find_duplicates()` to find duplicate forecasts which cause an error.
- All plotting functions were renamed to begin with `plot_`. Arguments were simplified.
- The function `pit()` now works based on data.frames. The old `pit` function was renamed to `pit_sample()`. PIT p-values were removed entirely.
- The function `plot_pit()` now works directly with input as produced by `pit()`.
- Input types for `score()` were restricted to sample-based, quantile-based or binary forecasts.
- The function `brier_score()` now returns all Brier scores, rather than taking the mean before returning an output.
- `crps()`, `dss()` and `logs()` were renamed to `crps_sample()`, `dss_sample()`, and `logs_sample()`.
- Fixed a bug in the `sample_to_quantile()` function (https://github.com/epiforecasts/scoringutils/pull/223).
- Example data sets were renamed to start with `example_`.
- A new data set, `summary_metrics`, was included that contains a summary of the metrics implemented in `scoringutils`.
- New function `check_forecasts()` that runs some basic checks on the input data and provides feedback.
- Data.tables are now returned as `table[]` rather than as `table`, such that they don't have to be called twice to display their contents.
- New function `pairwise_comparison()` that runs pairwise comparisons between models on the output of `eval_forecasts()`.
- `eval_forecasts()` can now handle a separate forecast and truth data set as input.
- `eval_forecasts()` now supports scoring point forecasts alongside quantiles in a quantile-based format. Currently the only metric used is the absolute error.
- `eval_forecasts()` got a major rewrite. While functionality should be unchanged, the code should now be easier to maintain.
- `count_median_twice = FALSE` is now the default.
- `correlation_plot()` shows correlation between metrics.
- `plot_ranges()` shows the contribution of different prediction intervals to some chosen metric.
- `plot_heatmap()` visualises scores as a heatmap.
- `plot_score_table()` shows a coloured summary table of scores.
- The `by` argument in `score` now has a slightly changed meaning. It now denotes the lowest possible grouping unit, i.e. the unit of one observation, and needs to be specified explicitly. The default is now `NULL`. The reason for this change is that most metrics need scoring on the observation level and this is the most consistent implementation of this principle. The `pit` function now receives its grouping from `summarise_by`. In a similar spirit, `summarise_by` has to be specified explicitly and e.g. doesn't assume anymore that you want "range" to be included.
- `weigh = TRUE` is now the default option.
- Bias as well as calibration now take all quantiles into account.
- Added a `summarise_by` argument in `score()`. The summary can return the mean, the standard deviation as well as an arbitrary set of quantiles.
- `score()` can now return PIT histograms.
- Switched to `ggplot2` for plotting.
- `Interval_score` is now `interval_score`, `CRPS` is now `crps`, etc.
- New metrics in `score()`: bias, sharpness and calibration.
- Updated the `README`.