evaluate.accuracy: Evaluate accuracy

View source: R/dfmip.R


Evaluate accuracy

Description

Evaluate the accuracy of model forecasts against observed values.

Usage

evaluate.accuracy(
  models.to.run,
  forecast.distributions,
  forecast.targets,
  observations.df,
  n.draws,
  threshold = "default",
  percentage = "default"
)

Arguments

models.to.run

See dfmip.forecast

forecast.distributions

See return object 2 in dfmip.forecast

forecast.targets

See dfmip.forecast

observations.df

A data frame with five fields: location, year, location_year, forecast.target, and value. The value field contains the observed value for the corresponding location, year, and forecast.target.

n.draws

See dfmip.forecast

threshold

For continuous and discrete forecasts, an error threshold used to classify a forecast as "accurate". The default is +/- 1 human case or +/- 1 week for those targets; otherwise the default is 0.

percentage

For continuous and discrete forecasts, a forecast is considered accurate if the prediction falls within the specified percentage of the observed value. The default is +/- 25 percent of the observed value.
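To make the inputs and the threshold/percentage logic concrete, here is a minimal sketch. The observations.df follows the five documented fields (location names, years, and target labels are hypothetical), and is.accurate is a hypothetical helper, not part of dfmip, illustrating the documented classification rule: a forecast counts as accurate if it is within +/- threshold of the observation or within the specified percentage of it.

```r
# Hypothetical observations.df in the documented five-field format
observations.df <- data.frame(
  location = c("A", "B"),
  year = c(2020, 2020),
  location_year = c("A_2020", "B_2020"),
  forecast.target = c("human.cases", "human.cases"),  # hypothetical label
  value = c(10, 4)
)

# Hypothetical helper sketching the documented accuracy rule for
# continuous/discrete forecasts: accurate if within +/- threshold
# of the observed value, or within percentage of it.
is.accurate <- function(predicted, observed, threshold = 1, percentage = 0.25) {
  abs.err <- abs(predicted - observed)
  abs.err <= threshold | abs.err <= percentage * abs(observed)
}

is.accurate(9, 10)   # within +/- 1 case, so accurate
is.accurate(20, 10)  # outside both the threshold and the 25% band
```

The actual metric computation inside evaluate.accuracy may differ; this only illustrates how the threshold and percentage arguments interact.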

Value

accuracy.summary: a data frame organized by model, location (including an aggregation across all locations, currently denoted -STATEWIDE, although the aggregation need not be a state), and forecast.target, with entries for the following evaluation metrics: CRPS, RMSE, Scaled_RMSE, percentage, threshold, percentage-or-threshold, and Area Under the Curve (AUC).


akeyel/dfmip documentation built on Sept. 3, 2022, 1:26 a.m.