epiforecasts/scoringutils: Utilities for Scoring and Assessing Predictions

Provides a collection of metrics and proper scoring rules (Gneiting, T., & Raftery, A. E. (2007) <doi:10.1198/016214506000001437>; Jordan, A., Krüger, F., & Lerch, S. (2019) <doi:10.18637/jss.v090.i12>) within a consistent framework for the evaluation, comparison and visualisation of forecasts. In addition to proper scoring rules, functions are provided to assess the bias, sharpness and calibration of forecasts (Funk, S., Camacho, A., Kucharski, A. J., Lowe, R., Eggo, R. M., & Edmunds, W. J. (2019) <doi:10.1371/journal.pcbi.1006785>). Several types of predictions (e.g. binary, discrete, continuous), which may come in different formats (e.g. forecasts represented by predictive samples or by quantiles of the predictive distribution), can be evaluated. Scoring metrics can either be applied through a convenient data.frame-based workflow or used as individual functions on vectors and matrices. All functionality has been implemented with a focus on performance and is robustly tested.

Getting started
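
A minimal usage sketch of the two modes described above is shown below. It assumes version 1.1.3, in which the package ships a bundled example_quantile dataset and exports score(), summarise_scores() and interval_score(); names and signatures may differ in other releases, so check the documentation of your installed version.

library(scoringutils)

# Data.frame workflow: score() computes all metrics applicable to the
# forecast type; summarise_scores() aggregates them, here per model.
scores <- score(example_quantile)
summarise_scores(scores, by = "model")

# Vector workflow: a single metric applied directly to vectors, here the
# interval score for 50% central prediction intervals (illustrative values).
interval_score(
  true_values = c(1, 5, 10),
  lower = c(0, 3, 7),
  upper = c(2, 6, 12),
  interval_range = 50
)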

Package details

Maintainer:
License: MIT + file LICENSE
Version: 1.1.3
URL: https://epiforecasts.io/scoringutils/, https://github.com/epiforecasts/scoringutils
Package repository: View on GitHub
Installation

Install the latest version of this package by entering the following in R:
install.packages("remotes")
remotes::install_github("epiforecasts/scoringutils")