modelCast  R Documentation 
Description

Point forecasts and the respective forecasting intervals for trend-stationary time series are calculated.
Usage

modelCast(
obj,
p = NULL,
q = NULL,
h = 1,
method = c("norm", "boot"),
alpha = 0.95,
it = 10000,
n.start = 1000,
pb = TRUE,
cores = future::availableCores(),
np.fcast = c("lin", "const"),
export.error = FALSE,
plot = FALSE,
...
)
Arguments

obj
an object of class smoots; must be the output of a trend estimation process of this package.

p
an integer value that defines the AR order p of the ARMA(p, q) model assumed for the errors; if set to NULL (the default), the order is selected via the BIC (see the details).

q
an integer value that defines the MA order q of the ARMA(p, q) model assumed for the errors; if set to NULL (the default), the order is selected via the BIC (see the details).

h
an integer that represents the forecasting horizon; forecasts are obtained for the future time points n + 1, n + 2, ..., n + h; the default is h = 1.

method
a character object; defines the method used for the calculation of the forecasting intervals; with method = "norm", the intervals are obtained under the assumption of normally distributed innovations, whereas method = "boot" triggers a bootstrap; the default is "norm".

alpha
a numeric vector of length 1 with a value between 0 and 1; defines the (100 * alpha)-percent confidence level for the forecasting intervals; the default is alpha = 0.95.

it
an integer that represents the total number of iterations, i.e., the number of simulated series, within the bootstrap; the default is it = 10000; only relevant for method = "boot".

n.start
an integer that defines the 'burn-in' number of observations for the simulated ARMA series via bootstrap; the default is n.start = 1000; only relevant for method = "boot".

pb
a logical value; for pb = TRUE (the default), a progress bar and the estimated remaining time are shown in the R console during the bootstrap; only relevant for method = "boot".

cores
an integer value > 0 that states the number of (logical) cores to use in the bootstrap (or NULL); the default is future::availableCores(); for cores = NULL, the parallel computation is disabled.

np.fcast
a character object; defines the forecasting method used for the nonparametric trend; for np.fcast = "lin" (the default), the trend is extrapolated linearly, whereas for np.fcast = "const", the last trend estimate is used as a constant estimate for the future.

export.error
a single logical value; if the argument is set to TRUE, a list that also contains the simulated forecasting errors of the bootstrap is returned; the default is FALSE; only relevant for method = "boot".

plot
a logical value that controls the graphical output; for plot = TRUE, a plot of the forecasting results is created; the default is FALSE.

...
additional arguments for the standard plot function, e.g., graphical parameters such as xlim, type or main; only relevant for plot = TRUE.
Details

This function is part of the smoots package and was implemented under version 1.1.0. The point forecasts and forecasting intervals are obtained based on the additive nonparametric regression model
y_t = m(x_t) + \epsilon_t,

where y_t is the observed time series with equidistant design, x_t is the rescaled time on the interval [0, 1], m(x_t) is a smooth trend function and \epsilon_t are stationary errors with E(\epsilon_t) = 0 and short-range dependence (see also Beran and Feng, 2002). Thus, we assume y_t to be a trend-stationary time series. Furthermore, we assume that the rest term \epsilon_t follows an ARMA(p, q) model

\epsilon_t = \zeta_t + \beta_1 \epsilon_{t-1} + ... + \beta_p \epsilon_{t-p} + \alpha_1 \zeta_{t-1} + ... + \alpha_q \zeta_{t-q},

where \alpha_j, j = 1, 2, ..., q, and \beta_i, i = 1, 2, ..., p, are real numbers and the random variables \zeta_t are i.i.d. (identically and independently distributed) with zero mean and constant variance.
The point forecasts and forecasting intervals for the future periods n + 1, n + 2, ..., n + h will be obtained. With respect to the point forecasts of \epsilon_t, i.e., \hat{\epsilon}_{n+k}, where k = 1, 2, ..., h,

\hat{\epsilon}_{n+k} = \sum_{i=1}^{p} \hat{\beta}_i \epsilon_{n+k-i} + \sum_{j=1}^{q} \hat{\alpha}_j \hat{\zeta}_{n+k-j}

with \epsilon_{n+k-i} = \hat{\epsilon}_{n+k-i} for n+k-i > n and \hat{\zeta}_{n+k-j} = E(\zeta_t) = 0 for n+k-j > n will be applied. In practice, this procedure will not be applied directly to \epsilon_t but to y_t - \hat{m}(x_t).
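This recursion can be illustrated in a few lines of base R. The sketch below is a simplified example with a simulated ARMA(1, 1) series standing in for the detrended data; in practice, predict() on the fitted arima object performs this step.

```r
set.seed(1)
# Simulated ARMA(1, 1) errors standing in for y_t - m_hat(x_t)
eps <- as.numeric(arima.sim(list(ar = 0.5, ma = 0.3), n = 200))
fit <- arima(eps, order = c(1, 0, 1), include.mean = FALSE)  # CSS-ML fit
beta1  <- coef(fit)["ar1"]    # estimated AR coefficient
alpha1 <- coef(fit)["ma1"]    # estimated MA coefficient
zeta   <- residuals(fit)      # estimated innovations zeta_t
h <- 5
eps_fc <- numeric(h)
eps_prev  <- eps[length(eps)]      # epsilon_n
zeta_prev <- zeta[length(zeta)]    # zeta_n
for (k in 1:h) {
  # Unobserved future innovations zeta_{n+k-1}, k > 1, are set to their mean, 0
  eps_fc[k] <- beta1 * eps_prev + (if (k == 1) alpha1 * zeta_prev else 0)
  eps_prev  <- eps_fc[k]           # earlier forecasts feed the recursion
}
```

For k > 1, the forecasts decay geometrically with the AR coefficient, which matches the fact that the stationary rest term reverts to its zero mean.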
The point forecasts of the nonparametric trend are simply obtained following the proposal by Fritz et al. (forthcoming) by

\hat{m}(x_{n+k}) = \hat{m}(x_n) + D k (\hat{m}(x_n) - \hat{m}(x_{n-1})),

where D is a dummy variable that is either equal to the constant value 1 or 0. Consequently, if D = 0, \hat{m}(x_n), i.e., the last trend estimate, is used as a constant estimate for the future. However, if D = 1, the trend is extrapolated linearly. The point forecast for the whole component model is then given by

\hat{y}_{n+k} = \hat{m}(x_{n+k}) + \hat{\epsilon}_{n+k},

i.e., it is equal to the sum of the point forecasts of the individual components.
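The trend extrapolation itself reduces to two lines of R. In this minimal sketch, the vector of trend estimates is a hypothetical placeholder:

```r
m_hat <- c(1.00, 1.02, 1.05, 1.07)   # hypothetical last trend estimates m_hat(x_t)
n <- length(m_hat)
k <- 1:3                             # forecasting horizon h = 3
D <- 1                               # 1: linear extrapolation, 0: constant continuation
slope <- m_hat[n] - m_hat[n - 1]     # m_hat(x_n) - m_hat(x_{n-1})
m_fc  <- m_hat[n] + D * k * slope    # m_hat(x_{n+k}), here 1.09, 1.11, 1.13
```

With D = 0 the slope term vanishes and every m_fc entry equals m_hat[n], the constant continuation described above.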
Equivalently to the point forecasts, the forecasting intervals are the sum of the forecasting intervals of the individual components. To simplify the process, the forecasting error in \hat{m}(x_{n+k}), which is of order O(n^{-2/5}), is not considered (see Fritz et al. (forthcoming)), i.e., only the forecasting intervals with respect to the rest term \epsilon_t will be calculated.
If the distribution of the innovations is non-normal or generally not further specified, bootstrapping the forecasting intervals is recommended. If they are, however, normally distributed, or if it is at least assumed that they are, the forecasting errors are also approximately normally distributed with a quickly obtainable variance. For further details on the bootstrapping method, we refer the readers to bootCast, whereas more information on the calculation under normality can be found at normCast.
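Under normality, the k-step forecasting standard errors of a fitted ARMA model are available in closed form, and the intervals follow directly. A sketch with simulated data and the default alpha = 0.95 (this illustrates the general normal-theory construction, not the exact internals of normCast):

```r
set.seed(42)
eps <- as.numeric(arima.sim(list(ar = 0.4, ma = 0.3), n = 300))
fit <- arima(eps, order = c(1, 0, 1), include.mean = FALSE)
alpha <- 0.95                      # (100 * alpha)-percent confidence level
h <- 5
fc <- predict(fit, n.ahead = h)    # point forecasts ($pred) and standard errors ($se)
z  <- qnorm((1 + alpha) / 2)       # two-sided standard normal quantile
lower <- fc$pred - z * fc$se       # lower interval bounds
upper <- fc$pred + z * fc$se       # upper interval bounds
```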
In order to apply the function, a smoots object that was generated as the result of a trend estimation process needs to be passed to the argument obj. The arguments p and q represent the orders of the ARMA(p, q) model that the error term \epsilon_t is assumed to follow. If both arguments are set to NULL, which is the default setting, orders will be selected according to the Bayesian Information Criterion (BIC) for all possible combinations of p, q = 0, 1, ..., 5. Furthermore, the forecasting horizon can be adjusted by means of the argument h, so that point forecasts and forecasting intervals will be obtained for all time points n + 1, n + 2, ..., n + h.
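Such a BIC search can be mimicked with base R alone. A simplified sketch over p, q = 0, ..., 2 on a simulated series (the package searches up to order 5; the smaller grid keeps the example fast, and non-converging fits are simply skipped):

```r
set.seed(123)
x <- as.numeric(arima.sim(list(ar = 0.6), n = 400))  # simulated stand-in series
best <- list(bic = Inf, p = NA, q = NA)
for (p in 0:2) {
  for (q in 0:2) {
    fit <- tryCatch(arima(x, order = c(p, 0, q), include.mean = FALSE),
                    error = function(e) NULL)  # skip fits that fail
    if (is.null(fit)) next
    bic <- AIC(fit, k = log(length(x)))        # BIC = AIC with penalty log(n)
    if (bic < best$bic) best <- list(bic = bic, p = p, q = q)
  }
}
```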
The function also allows for two calculation approaches for the forecasting intervals. Via the argument method, intervals can be obtained under the assumption that the ARMA innovations are normally distributed (method = "norm"). Alternatively, bootstrapped intervals can be obtained for unknown innovation distributions that are clearly non-Gaussian (method = "boot").
Another argument is alpha. By passing a value to this argument, the (100 * alpha)-percent confidence level for the forecasting intervals can be defined. If method = "boot" is selected, the additional arguments it and n.start can be adjusted. More specifically, it regulates the number of iterations of the bootstrap, whereas n.start sets the number of 'burn-in' observations in the simulated ARMA processes within the bootstrap that are omitted.
Since this bootstrap approach for method = "boot" generally needs a lot of computation time, especially for series with high numbers of observations and when fitting models with many parameters, parallel computation of the bootstrap iterations is enabled. With cores, the number of cores can be defined with an integer. Nonetheless, for cores = NULL, no cluster is created and therefore the parallel computation is disabled. Note that the bootstrapped results are fully reproducible for all cluster sizes. The progress of the bootstrap can be observed in the R console, where a progress bar and the estimated remaining time are displayed for pb = TRUE.
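A heavily simplified, single-core illustration of the bootstrap idea, namely simulating forecasting errors and turning their quantiles into interval bounds. It differs in detail from the package's actual bootstrap (see bootCast), uses a fitted AR(1) for brevity, and only it = 200 iterations instead of the default 10000 to keep it fast:

```r
set.seed(7)
eps <- as.numeric(arima.sim(list(ar = 0.5), n = 300))
fit <- arima(eps, order = c(1, 0, 0), include.mean = FALSE)
h <- 3; it <- 200; alpha <- 0.95
ar1 <- coef(fit)["ar1"]
res <- as.numeric(residuals(fit))                 # residuals to resample from
pt  <- as.numeric(predict(fit, n.ahead = h)$pred) # point forecasts
# Each iteration propagates resampled residuals through the fitted AR(1)
# and records the simulated forecasting errors for the horizons 1, ..., h
err <- replicate(it, {
  e <- numeric(h)
  prev <- eps[length(eps)]
  z <- sample(res, h, replace = TRUE)
  for (k in 1:h) {
    prev <- ar1 * prev + z[k]
    e[k] <- prev
  }
  e - pt
})
qs <- apply(err, 1, quantile, probs = c((1 - alpha) / 2, (1 + alpha) / 2))
lower <- pt + qs[1, ]   # bootstrapped lower bounds
upper <- pt + qs[2, ]   # bootstrapped upper bounds
```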
Moreover, the argument np.fcast allows the forecasting method for the nonparametric trend function to be set. As previously discussed, the two options are a linear extrapolation of the trend (np.fcast = "lin") and a constant continuation of the last estimated value of the trend (np.fcast = "const").
The function also implements the option to automatically create a plot of
the forecasting results for plot = TRUE
. This includes the feature
to pass additional arguments of the standard plot function to
modelCast
(see also the section 'Examples').
NOTE:

Within this function, the arima function of the stats package with its method "CSS-ML" is used throughout for the estimation of ARMA models. Furthermore, to increase the performance, C++ code via the Rcpp and RcppArmadillo packages was implemented. Also, the future and future.apply packages are considered for parallel computation of bootstrap iterations. The progress of the bootstrap is shown via the progressr package.
Value

The function returns a 3 by h matrix, with its columns representing the future time points and with the point forecasts, the lower bounds of the forecasting intervals and the upper bounds of the forecasting intervals as the rows. If the argument plot is set to TRUE, a plot of the forecasting results is created. If export.error = TRUE is selected, a list with the following elements is returned instead:

the 3 by h forecasting matrix with point forecasts and bounds of the forecasting intervals;

an it by h matrix, where each column represents a future time point n + 1, n + 2, ..., n + h; in each column the respective it simulated forecasting errors are saved.
Author(s)

Yuanhua Feng (Department of Economics, Paderborn University),
Author of the Algorithms
Website: https://wiwi.uni-paderborn.de/en/dep4/feng/

Dominik Schulz (Research Assistant) (Department of Economics, Paderborn University),
Package Creator and Maintainer
References

Beran, J. and Feng, Y. (2002). Local polynomial fitting with long-memory, short-memory and antipersistent errors. Annals of the Institute of Statistical Mathematics, 54(2), 291-311.

Feng, Y., Gries, T. and Fritz, M. (2020). Data-driven local polynomial for the trend and its derivatives in economic time series. Journal of Nonparametric Statistics, 32:2, 510-533.

Feng, Y., Gries, T., Letmathe, S. and Schulz, D. (2019). The smoots package in R for semiparametric modeling of trend stationary time series. Discussion Paper. Paderborn University. Unpublished.

Feng, Y., Gries, T., Fritz, M., Letmathe, S. and Schulz, D. (2020). Diagnosing the trend and bootstrapping the forecasting intervals using a semiparametric ARMA. Discussion Paper. Paderborn University. Unpublished.

Fritz, M., Forstinger, S., Feng, Y. and Gries, T. (forthcoming). Forecasting economic growth processes for developing economies. Unpublished.
Examples

X <- log(smoots::gdpUS$GDP)
NPest <- smoots::msmooth(X)
modelCast(NPest, h = 5, plot = TRUE, xlim = c(261, 295), type = "b",
  col = "deepskyblue4", lty = 3, pch = 20, main = "Exemplary title")