ForCompare: Generate Model Comparison Table


Description

This function generates a comparison table reporting the MSE, Success Ratio, MSE Ratio, and associated test statistics for a group of models being compared.

Usage

ForCompare(..., benchmark.index=NULL, test=c("unweighted", "binary"),
          h=1)

Arguments

...

one or more outputs from the functions maeforecast, Metrics, or Bagging for which to generate the comparison table.

benchmark.index

an integer indicating which of the models listed in ... should be treated as the benchmark. If omitted, the MSERatio and DMW columns will not be computed.

test

the statistical tests whose p values are to be reported. Options include "weighted" (the weighted directional forecast test) and "binary" (the unweighted directional forecast test). See Directional_NW for details.

h

a numeric value indicating the forecast horizon used in the models.

Value

This function returns a data frame with some or all of the following columns, depending on the arguments supplied:

MSE

the mean squared error of point forecasts for each model being compared.

SRatio

the success ratio of the directional forecasts for each model being compared.

MSERatio

the ratio of each model's MSE against that of a benchmark.

DMW

the p values returned from DMW tests against the benchmark indicated by benchmark.index. The null hypothesis is that the model being compared has the same forecast accuracy as the benchmark; the alternative hypothesis is that the model being compared is better than the benchmark.

Weighted

the p value from the weighted directional forecast test.

Unweighted

the p value from the unweighted directional forecast test.

Examples

AR.For <- maeforecast(mydata, w_size = 72, window = "recursive",
                      model = "ar")
Lasso.For <- maeforecast(mydata, w_size = 72, window = "recursive",
                         model = "lasso")
ForCompare(AR.For, Lasso.For)
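Per the Arguments section, the MSERatio and DMW columns require a benchmark. A sketch of that usage, assuming the two fitted objects from the example above and treating the AR model (the first argument) as the benchmark:

```r
## Designate the first model (AR.For) as the benchmark so that the
## MSERatio and DMW columns are computed; h = 1 matches the one-step
## forecasts above. (Sketch only: `mydata` and the fitted objects
## come from the user's session.)
comp <- ForCompare(AR.For, Lasso.For, benchmark.index = 1, h = 1)
print(comp)
```

With benchmark.index = 1, the Lasso model's row reports its MSE relative to the AR benchmark and the DMW p value for the one-sided test that it forecasts better than the benchmark.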

google-trends-v1/gtm documentation built on June 5, 2019, 5:13 p.m.