lagged_mlp | R Documentation

General interface to MLP networks with lagged variables
Usage

lagged_mlp(
  mode = "regression",
  timesteps = NULL,
  horizon = 1,
  learn_rate = 0.01,
  epochs = 50,
  hidden_units = NULL,
  dropout = NULL,
  batch_size = 32,
  scale = TRUE,
  shuffle = FALSE,
  jump = 1,
  sample_frac = 1
)
Arguments

mode          A single character string for the type of model. The only possible value for this model is "regression".
timesteps     Number of lagged timesteps used as input (the input window length).
horizon       Forecast horizon, i.e. the number of steps ahead to predict.
learn_rate    Learning rate used by the optimizer.
epochs        Number of training epochs.
hidden_units  Number of units in the hidden layer(s).
dropout       Dropout rate applied during training.
batch_size    Number of samples per training batch.
scale         Whether input variables should be scaled before training.
shuffle       Whether training samples should be shuffled in each epoch.
jump          Step between the starts of consecutive input windows.
sample_frac   Fraction of the training samples to use.
Details

This is a parsnip interface to lagged feed-forward neural networks (also known as MLPs). For now, the only available engine is torchts_mlp.
Categorical features are detected automatically: a column of the input data (as defined in the formula) is treated as categorical if it is logical, character, factor or integer.
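The automatic detection described above can be illustrated as follows. This is a hedged sketch: `weather_pl` and `lagged_mlp()` come from torchts, while the `month` factor column is a hypothetical addition made here purely to show a categorical predictor.

```r
library(torchts)
library(parsnip)
library(dplyr, warn.conflicts = FALSE)

tarnow_temp <-
  weather_pl %>%
  filter(station == "TARNÓW") %>%
  # Hypothetical categorical column: a factor is detected automatically
  mutate(month = factor(months(date))) %>%
  select(date, month, temp = tmax_daily)

# `month` appears in the formula, so it is treated as a categorical feature
mlp_model <-
  lagged_mlp(timesteps = 20, horizon = 1, epochs = 5) %>%
  fit(temp ~ date + month, data = tarnow_temp)
```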
Neural networks, unlike many other models (e.g. linear models), can return predictions before any training epoch has completed. This is because every neural network starts with randomly initialized parameters, which are then gradually tuned over subsequent iterations by gradient descent. If you'd like to get an untrained model, simply set epochs = 0. You still have to "fit" the model to follow the standard parsnip API procedure.
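The untrained-model workflow described above can be sketched like this; it assumes the `weather_pl` dataset shipped with torchts.

```r
library(torchts)
library(parsnip)
library(dplyr, warn.conflicts = FALSE)

tarnow_temp <-
  weather_pl %>%
  filter(station == "TARNÓW") %>%
  select(date, temp = tmax_daily)

# epochs = 0: parameters keep their random initial values,
# but fit() must still be called to conform to the parsnip API
untrained_model <-
  lagged_mlp(timesteps = 20, horizon = 1, epochs = 0) %>%
  fit(temp ~ date, data = tarnow_temp)

# Predictions come from the randomly initialized network
predict(untrained_model, new_data = tarnow_temp)
```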
Examples

library(torchts)
library(parsnip)
library(dplyr, warn.conflicts = FALSE)
library(rsample)

# Univariate time series
tarnow_temp <-
  weather_pl %>%
  filter(station == "TARNÓW") %>%
  select(date, temp = tmax_daily)

data_split <- initial_time_split(tarnow_temp)

mlp_model <-
  lagged_mlp(
    timesteps = 20,
    horizon = 1,
    epochs = 10,
    hidden_units = 32
  )

mlp_model <-
  mlp_model %>%
  fit(temp ~ date, data = training(data_split))
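Once the model is fitted, forecasts for the held-out split follow the standard parsnip `predict()` convention. A hedged sketch continuing the example above (it assumes the `mlp_model` and `data_split` objects created there):

```r
# Forecast on the held-out test split; `new_data` must contain
# the columns used in the fitting formula
predictions <-
  mlp_model %>%
  predict(new_data = testing(data_split))

head(predictions)
```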