dmw: Diebold-Mariano-West out-of-sample t-test


Description

The Diebold-Mariano-West oos t-test can be used to compare the population-level accuracy of forecasting models under some fairly restrictive circumstances (see West, 2006). The forecasts are assumed to be constructed using a fixed, recursive, or rolling estimation window and to depend on the estimated coefficients β̂_t. The function dmw_calculation takes as arguments the matrices and vectors that West (1996) and West and McCracken (1998) use to represent the asymptotic distribution of this statistic and assembles the mean and variance components of the statistic from them. dmw_mse is a basic convenience wrapper for the most common use case: squared-error loss with least-squares forecasts. The mixedwindow functions implement the asymptotically normal oos test statistics proposed by Calhoun (2011).
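For concreteness, here is a minimal sketch of the use case that dmw_mse wraps (least-squares forecasts compared under squared-error loss). The data, model definitions, and choice of R are illustrative only, and the last line mirrors the t-statistic construction used in the Examples section under the assumption that the oos period has nrow(d) - R observations.

d <- data.frame(y = rnorm(100), x = rnorm(100))
null_model <- function(d) lm(y ~ 1, data = d)   # benchmark forecast
alt_model  <- function(d) lm(y ~ x, data = d)   # alternative forecast
est <- dmw_mse(null_model, alt_model, d, R = 70, vcv = var,
               window = "recursive")
est$mu * sqrt((100 - 70) / est$avar)   # t-statistic for equal population MSE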

Usage

dmw_mse(null, alt, dataset, R, vcv = var,
        window = c("recursive", "rolling", "fixed"))

dmw_calculation(f, h, R, vcv, tBtF = NULL, pi = noos / R,
                window = c("recursive", "rolling", "fixed"))

mixedwindow(null, alt, dataset, R, vcv = var,
            window = c("rolling", "fixed"), pimethod = "estimate")

mixedbootstrap(null, alt.list, dataset, R, nboot, blocklength,
               vcv = var, window = c("rolling", "fixed"),
               bootstrap = c("moving", "circular", "stationary"),
               pimethod = "estimate")

Arguments

null

A function that takes a subset of the data dataset as its argument and returns an object with a predict method. This function generates the benchmark forecast.

alt

A second function that takes a subset of the data dataset as its argument and returns an object with a predict method. This function generates the alternative forecast.

alt.list

A list of functions, each of which would be valid as alt.

dataset

A data frame.

R

An integer, the size of the training sample. The asymptotic theory assumes that R is small.

f

A vector containing the oos observations whose average is being tested (in the Examples section, the difference between the two models' squared forecast errors).

h

A matrix containing, for each t in the oos period, terms such as x_t ε_t (for OLS, in the obvious notation).

tBtF

A vector that represents B'F' in West's (1996) notation. This term captures the uncertainty introduced by estimating the unknown model coefficients; if the coefficients are known or imposed rather than estimated, set this argument to NULL.

pi

A numeric scalar, the ratio of the number of out-of-sample observations to the number of training sample observations. noos is defined in the body of the function as length(f).

window

A character string indicating which window strategy was used to generate the oos observations. For the mixedwindow functions, this is the window strategy used to estimate the alternative model(s), since the benchmark model is always estimated with the recursive scheme.

nboot

An integer, the number of bootstrap replications.

blocklength

An integer, the length of the blocks for the moving or circular block bootstraps.

vcv

A function to calculate the asymptotic variance of the oos average.

pimethod

Indicates whether Pi (= lim P/R) should be estimated as P/R (pimethod = "estimate") or set to the theoretical limit of infinity (pimethod = "theory").

bootstrap

Indicates whether to use the moving blocks bootstrap (MBB; Kunsch, 1989; Liu and Singh, 1992), the circular blocks bootstrap (CBB; Politis and Romano, 1992), or the stationary bootstrap (Politis and Romano, 1994).

Details

Calhoun's (2011) mixed-window oos test is a modification of Clark and West's (2006, 2007) test that uses a recursive window for the benchmark model to ensure that the oos average is mean zero and asymptotically normal. mixedwindow compares a pair of models, and mixedbootstrap implements the bootstrap used for multiple comparisons.
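As an illustration only (this is not the package's internal code), the sketch below constructs the raw oos loss differential under the mixed-window scheme for a single alternative model estimated with a rolling window (the fixed scheme is analogous): the benchmark is re-estimated recursively on observations 1 through t, while the alternative uses only the most recent R observations. mixedwindow additionally applies the centering adjustment described under Value.

R <- 20; n <- 100
d <- data.frame(y = rnorm(n), x = rnorm(n))
loss_diff <- sapply(R:(n - 1), function(t) {
  e0 <- d$y[t + 1] - predict(lm(y ~ 1, data = d[1:t, ]), d[t + 1, ])           # recursive benchmark
  e1 <- d$y[t + 1] - predict(lm(y ~ x, data = d[(t - R + 1):t, ]), d[t + 1, ])  # rolling alternative
  e0^2 - e1^2
})
mean(loss_diff)   # raw oos average, before the centering adjustment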

Value

dmw_mse and dmw_calculation each return a list containing the following elements:

mu

The oos average.

avar

The asymptotic variance of the oos average.

mixedwindow returns a list with the following elements:

mu

The estimated oos average, which includes the adjustment for correct asymptotic centering.

avar

An estimate of the asymptotic variance of the oos average.

pvalue

The p-value of the test that the two models have equal population mse against the one-sided alternative that the alternative model is more accurate.

mixedbootstrap returns a length(alt.list) by nboot matrix that contains the resampled values of the oos t-test based on mixedwindow. These are the values of the t-statistic, not the test's p-values; the final example in the Examples section sketches one way to convert them into bootstrap p-values.

Author(s)

Gray Calhoun gcalhoun@iastate.edu

References

Calhoun, G. 2011, An asymptotically normal out-of-sample test of equal predictive accuracy for nested models. Unpublished manuscript.

Calhoun, G. 2011, Supplemental appendix: An asymptotically normal out-of-sample test of equal predictive accuracy for nested models. Unpublished manuscript.

Clark, T. E., West, K. D. 2006, Using out-of-sample mean squared prediction errors to test the martingale difference hypothesis. Journal of Econometrics, 135(1): 155–186.

Clark, T. E., West, K. D. 2007, Approximately normal tests for equal predictive accuracy in nested models. Journal of Econometrics, 138(1): 291–311.

Diebold, F. X. and Mariano, R. S. 1995, Comparing predictive accuracy. Journal of Business and Economic Statistics, 13(3): 253–263.

Kunsch, H. R. 1989, The Jackknife and the Bootstrap for general stationary observations. Annals of Statistics, 17(3): 1217–1241.

Liu, R. Y. and Singh, K. 1992, Moving blocks Jackknife and Bootstrap capture weak dependence, in R. LePage and L. Billard, editors, Exploring the limits of Bootstrap, John Wiley, pages 225–248.

Politis, D. N. and Romano, J. P. 1992, A circular block-resampling procedure for stationary data, in R. LePage and L. Billard, editors, Exploring the limits of Bootstrap, John Wiley, pages 263–270.

Politis, D. N. and Romano, J. P. 1994, The Stationary Bootstrap. Journal of the American Statistical Association, 89(428): 1303–1313.

West, K. D. 1996, Asymptotic inference about predictive ability. Econometrica, 64(5): 1067–1084.

West, K. D. 2006, Forecast evaluation, in G. Elliott, C. Granger, and A. Timmermann, editors, Handbook of Economic Forecasting, volume 1, pages 99–134. Elsevier.

West, K. D. and McCracken, M. W. 1998, Regression-based tests of predictive ability. International Economic Review, 39(4): 817–840.

See Also

clarkwest, mccracken_criticalvalue, recursive_forecasts, predict, boot

Examples

x <- rnorm(100)
d <- data.frame(y = x + rnorm(100), x = x)
R <- 70
oos <- 71:100

error.model1 <- d$y[oos] - predict(lm(y ~ 1, data = d[-oos,]),
                                   newdata = d[oos,])
error.model2 <- d$y[oos] - predict(lm(y ~ x, data = d[-oos,]),
                                   newdata = d[oos,])
# test that the two models have equal population MSE.  Note that F = 0
# in this setting.
estimates <-
  dmw_calculation(error.model1^2 - error.model2^2,
                  cbind(error.model1, error.model2, error.model2 * d$x[oos]),
                  R = R, vcv = var)
# calculate the p-value for the one-sided alternative that model 2 is more accurate
1 - pnorm(estimates$mu * sqrt(length(oos) / estimates$avar))


n <- 30
R <- 5
d <- data.frame(y = rnorm(n), x1 = rnorm(n), x2 = rnorm(n))
model0 <- function(d) lm(y ~ 1, data = d)
model1 <- function(d) lm(y ~ x1, data = d)
model2 <- function(d) lm(y ~ x2, data = d)
model3 <- function(d) lm(y ~ x1 + x2, data = d)

mixedwindow(model0, model1, d, R, var, window = "rolling")

mixedbootstrap(model0, list(m1 = model1, m2 = model2, m3 = model3),
               d, R, 199, 7, var, "fixed", "circular")
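
# One way (a sketch, not part of the package interface) to use the matrix
# returned by mixedbootstrap for multiple comparisons: compare the observed
# t-statistics, rebuilt from mixedwindow's output, with the bootstrapped
# maxima across models.  The reconstruction below assumes nrow(d) - R oos
# observations.
tboot <- mixedbootstrap(model0, list(m1 = model1, m2 = model2, m3 = model3),
                        d, R, 199, 7, var, "fixed", "circular")
tobs <- sapply(list(model1, model2, model3), function(m) {
  est <- mixedwindow(model0, m, d, R, var, window = "fixed")
  est$mu * sqrt((nrow(d) - R) / est$avar)
})
# bootstrap p-value for the best-performing alternative
mean(apply(tboot, 2, max) >= max(tobs))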
