DM.test: Computes Diebold-Mariano Test for Equal Predictive Accuracy

View source: R/DM.test.R

DM.test    R Documentation

Computes the Diebold-Mariano Test for Equal Predictive Accuracy.

Description

This function computes the Diebold-Mariano test for equal predictive accuracy. The null hypothesis of this test is that the two forecasts have the same accuracy. The alternative hypothesis can be specified as "Both forecasts have different accuracy", "The first forecast is less accurate than the second forecast", or "The first forecast is more accurate than the second forecast".
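For reference, the test statistic can be sketched as follows. This is a minimal illustration of the textbook Diebold-Mariano formula under squared-error loss and the two-sided alternative; the helper dm_sketch is hypothetical and is not the code used inside the package.

dm_sketch <- function(f1, f2, y, h = 1) {
  d <- (y - f1)^2 - (y - f2)^2                 # loss differential d_t under squared-error loss
  n <- length(d)
  dbar <- mean(d)
  # autocovariances of d_t up to lag h-1, used for the long-run variance of dbar
  gam <- sapply(0:(h - 1), function(k) sum((d[(1 + k):n] - dbar) * (d[1:(n - k)] - dbar)) / n)
  lrv <- (gam[1] + 2 * sum(gam[-1])) / n       # long-run variance of the mean loss differential
  stat <- dbar / sqrt(lrv)                     # asymptotically N(0,1) under the null
  p.value <- 2 * pnorm(-abs(stat))             # two-sided p-value (H1="same")
  c(statistic = stat, p.value = p.value)
}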

Usage

DM.test(f1, f2, y, loss.type = "SE", h, c = FALSE, H1 = "same")

Arguments

f1

vector of the first forecast

f2

vector of the second forecast

y

vector of the real values of the modelled time-series

loss.type

method to compute the loss function: loss.type="SE" uses squared errors, loss.type="AE" uses absolute errors, loss.type="SPE" uses squared proportional errors (useful if errors are heteroskedastic), and loss.type="ASE" uses absolute scaled errors; if loss.type is given as a numeric value, a loss function of the form exp(loss.type*errors)-1-loss.type*errors is used (useful when it is more costly to under-predict y than to over-predict); if not specified, loss.type="SE" is used (see the sketch after this argument list)

h

numeric denoting that forecasts h steps ahead are evaluated; if not specified, h=1 is used

c

logical indicating whether the Harvey-Leybourne-Newbold correction for small samples should be used; if not specified, c=FALSE is used

H1

alternative hypothesis: H1="same" for "both forecasts have different accuracy", H1="more" for "the first forecast is more accurate than the second forecast", H1="less" for "the first forecast is less accurate than the second forecast"; if not specified, H1="same" is used
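As referenced in the loss.type entry above, the following sketch illustrates the loss functions that argument selects. The helper loss_fn is hypothetical and not part of multDM, errors are taken as e = y - f, and the exact scaling of the "SPE" and "ASE" losses is an assumption.

loss_fn <- function(e, y, loss.type = "SE") {
  if (is.numeric(loss.type)) {
    a <- loss.type
    return(exp(a * e) - 1 - a * e)        # penalises under-prediction more heavily when a > 0
  }
  switch(loss.type,
    SE  = e^2,                            # squared errors
    AE  = abs(e),                         # absolute errors
    SPE = (e / y)^2,                      # squared proportional errors (scaling by y assumed)
    ASE = abs(e) / mean(abs(diff(y))))    # absolute scaled errors (naive-forecast MAE scaling assumed)
}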

Value

an object of class htest, which is a list containing:

statistic

test statistic

parameter

h, forecast horizon used

alternative

alternative hypothesis of the test

p.value

p-value

method

name of the test

data.name

names of the tested time-series
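Because the return value is a standard htest object, its components can be accessed by name. A brief illustration using the MDMforecasts data shipped with the package (the same data as in the Examples below):

data(MDMforecasts)
dm <- DM.test(f1 = MDMforecasts$forecasts[, 1], f2 = MDMforecasts$forecasts[, 2],
              y = MDMforecasts$ts, loss.type = "SE", h = 1, c = FALSE, H1 = "same")
dm$statistic    # test statistic
dm$p.value      # p-value
dm$alternative  # alternative hypothesis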

References

Diebold, F.X., Mariano, R., 1995. Comparing predictive accuracy. Journal of Business and Economic Statistics 13, 253–265.

Harvey, D., Leybourne, S., Newbold, P., 1997. Testing the equality of prediction mean squared errors. International Journal of Forecasting 13, 281–291.

Hyndman, R.J., Koehler, A.B., 2006. Another look at measures of forecast accuracy. International Journal of Forecasting 22, 679–688.

Taylor, S. J., 2005. Asset Price Dynamics, Volatility, and Prediction, Princeton University Press.

Triacca, U., 2018. Comparing Predictive Accuracy of Two Forecasts, https://www.lem.sssup.it/phd/documents/Lesson19.pdf.

Examples

data(MDMforecasts)
ts <- MDMforecasts$ts
forecasts <- MDMforecasts$forecasts
DM.test(f1 = forecasts[, 1], f2 = forecasts[, 2], y = ts, loss.type = "SE", h = 1, c = FALSE, H1 = "same")
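A further illustration (not part of the package's shipped example): a one-sided test under absolute-error loss with the Harvey-Leybourne-Newbold small-sample correction.

# Is the first forecast less accurate than the second one?
DM.test(f1 = forecasts[, 1], f2 = forecasts[, 2], y = ts,
        loss.type = "AE", h = 1, c = TRUE, H1 = "less")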
