Provides a collection of metrics and proper scoring rules (Tilmann Gneiting & Adrian E. Raftery (2007) <doi:10.1198/016214506000001437>; Jordan, A., Krüger, F., & Lerch, S. (2019) <doi:10.18637/jss.v090.i12>) within a consistent framework for the evaluation, comparison and visualisation of forecasts. In addition to proper scoring rules, functions are provided to assess the bias, sharpness and calibration (Sebastian Funk, Anton Camacho, Adam J. Kucharski, Rachel Lowe, Rosalind M. Eggo, W. John Edmunds (2019) <doi:10.1371/journal.pcbi.1006785>) of forecasts. Several types of predictions (e.g. binary, discrete, continuous), which may come in different formats (e.g. forecasts represented by predictive samples or by quantiles of the predictive distribution), can be evaluated. Scoring metrics can be used either through a convenient data.frame-based workflow or applied as individual functions to vectors and matrices. All functionality has been implemented with a focus on performance and is robustly tested.
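The two workflows mentioned above can be sketched as follows. This is a minimal example assuming scoringutils 1.x; the function names (`score()`, `summarise_scores()`, `crps_sample()`) and the bundled `example_quantile` dataset are taken from that version's interface and may differ in other releases.

```r
library(scoringutils)

# data.frame workflow: score() evaluates a data.frame of forecasts,
# here the example forecasts shipped with the package
scores <- score(example_quantile)
summarise_scores(scores, by = "model")

# vector/matrix workflow: apply an individual metric directly,
# e.g. the CRPS for sample-based forecasts
true_values <- rpois(10, lambda = 5)           # 10 observed counts
predictions <- matrix(rpois(10 * 100, lambda = 5),
                      nrow = 10)               # 100 predictive samples per observation
crps_sample(true_values, predictions)          # one CRPS value per observation
```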
| Package details | |
|---|---|
| Author | Nikos Bosse [aut, cre] (<https://orcid.org/0000-0002-7750-5280>), Sam Abbott [aut] (<https://orcid.org/0000-0001-8057-8037>), Hugo Gruson [aut] (<https://orcid.org/0000-0002-4094-1476>), Johannes Bracher [ctb] (<https://orcid.org/0000-0002-3777-1410>), Sebastian Funk [ctb] |
| Maintainer | Nikos Bosse <nikosbosse@gmail.com> |
| License | MIT + file LICENSE |
| Version | 1.1.0 |
| URL | https://epiforecasts.io/scoringutils/ https://github.com/epiforecasts/scoringutils |
| Package repository | CRAN |
Installation

Install the latest version of this package by entering the following in R:
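The install command itself is missing from the page; for a package published on CRAN, the standard one-liner is:

```r
install.packages("scoringutils")
```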