dm.test {forecast}                                        R Documentation

Diebold-Mariano test for predictive accuracy

Description:

The Diebold-Mariano test compares the forecast accuracy of two forecast methods.
Usage:

dm.test(
  e1,
  e2,
  alternative = c("two.sided", "less", "greater"),
  h = 1,
  power = 2,
  varestimator = c("acf", "bartlett")
)
Arguments:

e1            Forecast errors from method 1.

e2            Forecast errors from method 2.

alternative   A character string specifying the alternative hypothesis;
              must be one of "two.sided" (default), "less" or "greater".

h             The forecast horizon used in calculating e1 and e2.

power         The power used in the loss function. Usually 1 or 2.

varestimator  A character string specifying the long-run variance
              estimator. Options are "acf" (default) or "bartlett".
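For a quick illustration of how the arguments fit together, here is a minimal sketch; it is not part of the original examples, and the error series are simulated purely for demonstration:

library(forecast)

set.seed(123)
# Simulated one-step forecast errors; e2 is noisier by construction
e1 <- rnorm(100, sd = 1.0)
e2 <- rnorm(100, sd = 1.5)

# Two-sided test with squared-error loss (power = 2) and the
# default "acf" long-run variance estimator
dm.test(e1, e2, alternative = "two.sided", h = 1, power = 2,
        varestimator = "acf")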
Details:

This function implements the modified test proposed by Harvey, Leybourne
and Newbold (1997). The null hypothesis is that the two methods have the
same forecast accuracy. For alternative = "less", the alternative
hypothesis is that method 2 is less accurate than method 1. For
alternative = "greater", the alternative hypothesis is that method 2 is
more accurate than method 1. For alternative = "two.sided", the
alternative hypothesis is that method 1 and method 2 have different
levels of accuracy. The long-run variance estimator can be either the
auto-correlation estimator (varestimator = "acf") or the estimator based
on Bartlett weights (varestimator = "bartlett"), which ensures a
positive estimate. Both long-run variance estimators are proposed in
Diebold and Mariano (1995).
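To make the directionality concrete, the sketch below (simulated errors, not from the help page) runs the one-sided test in the direction suggested by the data, and then repeats it with the Bartlett estimator:

library(forecast)

set.seed(123)
e1 <- rnorm(100, sd = 1.0)   # method 1 errors
e2 <- rnorm(100, sd = 1.5)   # method 2 errors: larger, so less accurate

# Alternative hypothesis: method 2 is less accurate than method 1
dm.test(e1, e2, alternative = "less", h = 1, power = 2)

# Same test with Bartlett weights, which guarantee a positive
# long-run variance estimate
dm.test(e1, e2, alternative = "less", h = 1, power = 2,
        varestimator = "bartlett")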
Value:

A list with class "htest" containing the following components:

statistic     The value of the DM-statistic.

parameter     The forecast horizon and loss function power used in the
              test.

alternative   A character string describing the alternative hypothesis.

varestimator  A character string describing the long-run variance
              estimator.

p.value       The p-value for the test.

method        A character string with the value "Diebold-Mariano Test".

data.name     A character vector giving the names of the two error
              series.
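Because the result is a standard "htest" object, individual components can be pulled out directly. A small sketch with simulated inputs:

library(forecast)

set.seed(42)
e1 <- rnorm(100, sd = 1.0)
e2 <- rnorm(100, sd = 1.3)

dm <- dm.test(e1, e2, h = 1, power = 2)
dm$statistic  # the DM test statistic
dm$p.value    # the p-value of the test
dm$parameter  # forecast horizon and loss function power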
Author(s):

George Athanasopoulos and Kirill Kuroptev
References:

Diebold, F.X. and Mariano, R.S. (1995) Comparing predictive accuracy.
Journal of Business and Economic Statistics, 13, 253-263.

Harvey, D., Leybourne, S. and Newbold, P. (1997) Testing the equality of
prediction mean squared errors. International Journal of Forecasting,
13(2), 281-291.
Examples:

library(forecast)

# Test on in-sample one-step forecasts
f1 <- ets(WWWusage)
f2 <- auto.arima(WWWusage)
accuracy(f1)
accuracy(f2)
dm.test(residuals(f1), residuals(f2), h = 1)

# Test on out-of-sample one-step forecasts
f1 <- ets(WWWusage[1:80])
f2 <- auto.arima(WWWusage[1:80])
f1.out <- ets(WWWusage[81:100], model = f1)
f2.out <- Arima(WWWusage[81:100], model = f2)
accuracy(f1.out)
accuracy(f2.out)
dm.test(residuals(f1.out), residuals(f2.out), h = 1)
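# A further sketch, not in the original examples, showing one way to use
# h > 1: the forecast package's fitted() methods accept an h argument
# returning h-step in-sample fitted values, whose first h - 1 entries
# are NA and must be dropped before testing.
f1 <- ets(WWWusage)
f2 <- auto.arima(WWWusage)
e1 <- na.omit(WWWusage - fitted(f1, h = 2))
e2 <- na.omit(WWWusage - fitted(f2, h = 2))

# Bartlett weights are a safe choice for h > 1, since they keep the
# long-run variance estimate positive
dm.test(e1, e2, h = 2, varestimator = "bartlett")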