tts.DeepLearning: Apply the h2o.deeplearning routine of 'h2o' to time series data

View source: R/tts_MLP.R

tts.DeepLearning    R Documentation

Apply the h2o.deeplearning routine of h2o to time series data

Description

It applies the deep learning routine, specifically the multilayer perceptron (MLP), of H2O.ai to time series data.

Usage

tts.DeepLearning(y, x=NULL, train.end, arOrder=2, xregOrder=0, type, initial=TRUE)

Arguments

y

The time series object of the target variable, for example, timeSeries, xts, or zoo. Numerically, y must be real numbers for regression or integers for classification. Date format must be "

x

The time series matrix of input variables; its timestamps are the same as those of y. May be NULL.

train.end

The end date of the training data; it must be specified. The default dates of train.start and test.end are the start and the end of the input data, and test.start is one period after train.end.

arOrder

The autoregressive order of the target variable, which may be specified sequentially, like arOrder=1:5, or as discontinuous lags, like arOrder=c(1,3,5); zero is not allowed.

xregOrder

The distributed lag structure of the input variables, which may be specified sequentially, like xregOrder=1:5, or as discontinuous lags, like xregOrder=c(0,3,5); zero is allowed, since contemporaneous correlation is allowed.
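
For intuition, the sketch below shows how such lag orders translate into regressors (illustrative only; the series y and x1 are hypothetical, and the package's internal construction may differ):

# Illustrative only: arOrder=c(1,3,5) and xregOrder=c(0,3) map to these regressors
library(zoo)
y  <- zoo(rnorm(20), order.by = as.Date("2000-01-01") + 0:19)
x1 <- zoo(rnorm(20), order.by = as.Date("2000-01-01") + 0:19)
# Own lags 1, 3, 5 of the target and lags 0 and 3 of the input variable
regressors <- merge(y.L1 = lag(y, -1), y.L3 = lag(y, -3), y.L5 = lag(y, -5),
                    x1.L0 = x1, x1.L3 = lag(x1, -3))
head(na.omit(merge(y, regressors)))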

type

The time dummy variables. There are four selections:
'none'=no time dummies,
'trend'=inclusion of a time trend,
'season'=inclusion of seasonal dummies,
'both'=inclusion of both trend and seasonal dummies. No default.
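
For monthly data, the trend and seasonal dummies roughly correspond to the constructions below (a hypothetical sketch; the function builds them internally, and the exact encoding is not documented here):

# Hypothetical illustration of 'trend' and 'season' dummies for monthly data
n      <- 24
dates  <- seq(as.Date("2000-01-01"), by = "month", length.out = n)
trend  <- seq_len(n)                                        # linear time trend
season <- model.matrix(~ factor(format(dates, "%m")))[, -1] # monthly dummies, first month as baseline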

initial

Whether to initialize h2o via h2o.init(). Defaults to TRUE; to avoid multiple initializations, users should set it to FALSE when training via rolling windows. See the sketch below and the example at the end of this page.
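
A hedged sketch of the intended rolling-window pattern (the window end dates are hypothetical, and the heavy computation is commented out, following the Examples section):

# Initialize h2o once, then pass initial=FALSE inside the rolling loop.
# library(iForecast)
# data("macrodata")
# dep <- macrodata[, "unrate", drop=FALSE]
# ind <- macrodata[, -1, drop=FALSE]
# h2o::h2o.init()
# ends <- c("2007-12-01", "2008-12-01")   # hypothetical training end dates
# fits <- lapply(ends, function(d) {
#   tts.DeepLearning(y=dep, x=ind, train.end=d, arOrder=1:2,
#                    xregOrder=0, type="both", initial=FALSE)
# })
# h2o::h2o.shutdown(prompt=FALSE)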

Details

This function calls the h2o.deeplearning function from the package h2o to fit a multilayer perceptron.
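
Conceptually, the underlying call resembles the sketch below (simplified; 'lagged.df' stands for the frame of the target and its lagged regressors that the wrapper builds, and the hidden-layer and epoch settings shown are assumptions, not the package's documented defaults):

# Simplified sketch of an MLP fit via h2o.deeplearning on lagged data
# library(h2o)
# h2o.init()
# train.hex <- as.h2o(lagged.df)           # 'lagged.df' is hypothetical
# fit <- h2o.deeplearning(y = "y", x = setdiff(colnames(train.hex), "y"),
#                         training_frame = train.hex,
#                         hidden = c(32, 32), epochs = 50)   # assumed settings
# preds <- h2o.predict(fit, train.hex)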

Value

output

Output object generated by the h2o.deeplearning function of h2o.

arOrder

The autoregressive order of the target variable used.

dataused

The data used, as determined by arOrder and xregOrder.

data

The complete data structure.

TD

The time dummies used, inherited from the 'type' argument of tts.DeepLearning.

train.end

The same as the input argument train.end.

Author(s)

Ho Tsung-wu <tsungwu@ntnu.edu.tw>, College of Management, National Taiwan Normal University.

Examples

# Computation takes time, so the example below is commented out.
data("macrodata")
dep<-macrodata[,"unrate",drop=FALSE]
ind<-macrodata[,-1,drop=FALSE]

# Choosing the dates of training and testing data
train.end<-"2008-12-01"

#Must execute the commands below
#h2o::h2o.init()        # Initialize h2o
#invisible(h2o::h2o.no_progress()) # Turn off progress bars

# out <- tts.DeepLearning(y=dep, x=ind, train.end,arOrder=c(2,4),
# xregOrder=c(0,1,3),type="both",initial=FALSE)

#testData2 <- window(out$dataused,start="2009-01-01",end=end(out$dataused))
#P1<-iForecast(Model=out,Type="static",newdata=testData2)
#P2<-iForecast(Model=out,Type="dynamic",n.ahead=nrow(testData2))

#tail(cbind(testData2[,1],P1))
#tail(cbind(testData2[,1],P2))

#h2o::h2o.shutdown(prompt=FALSE) # Remember to shut down h2o when all work is finished.
