forecast_comparison: Compare forecast accuracy


View source: R/forecast_metrics.R

Description

A function to compare forecast accuracy. Options include: simple forecast error ratios, the Diebold-Mariano test, and the Clark and West test for nested models.

Usage

forecast_comparison(
  Data,
  baseline.forecast,
  test = "ER",
  loss = "MSE",
  horizon = NULL
)

Arguments

Data

data.frame: data frame of forecasts, model names, and dates

baseline.forecast

string: column name of baseline (null hypothesis) forecasts

test

string: which test to use; ER = error ratio, DM = Diebold-Mariano, CW = Clark and West

loss

string: error loss function ('MSE' or 'MAE') to use when creating the forecast error ratio

horizon

int: horizon of the forecasts being compared; used in the DM and CW tests

Value

numeric: the error ratio (test = 'ER') or the test statistic (test = 'DM' or 'CW')

Examples

 # simple time series
 A = c(1:100) + rnorm(100)
 date = seq.Date(from = as.Date('2000-01-01'), by = 'month', length.out = 100)
 Data = data.frame(date = date, A)

 # run forecast_univariate
 forecast.uni =
   forecast_univariate(
     Data = Data,
     forecast.dates = tail(Data$date,10),
     method = c('naive','auto.arima', 'ets'),
     horizon = 1,
     recursive = FALSE,
     freq = 'month')

 forecasts =
   dplyr::left_join(
     forecast.uni,
     data.frame(date, observed = A),
     by = 'date'
   )

 # run ER (MSE)
 er.ratio.mse =
   forecast_comparison(
     forecasts,
     baseline.forecast = 'naive',
     test = 'ER',
     loss = 'MSE')
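
 # The same forecasts can also be compared with the hypothesis tests.
 # This is a sketch following the argument descriptions above; the DM and
 # CW options take a horizon argument, and 'naive' again serves as the
 # baseline (null hypothesis) forecast.

 # run Diebold-Mariano test
 dm.result =
   forecast_comparison(
     forecasts,
     baseline.forecast = 'naive',
     test = 'DM',
     horizon = 1)

 # run Clark and West test for nested models
 cw.result =
   forecast_comparison(
     forecasts,
     baseline.forecast = 'naive',
     test = 'CW',
     horizon = 1)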

OOS documentation built on March 17, 2021, 5:08 p.m.