deepar_fit_impl: GluonTS DeepAR Modeling Function (Bridge)


View source: R/parsnip-deepar.R

Description

GluonTS DeepAR Modeling Function (Bridge)

Usage

deepar_fit_impl(
  x,
  y,
  freq,
  prediction_length,
  id,
  epochs = 5,
  batch_size = 32,
  num_batches_per_epoch = 50,
  learning_rate = 0.001,
  learning_rate_decay_factor = 0.5,
  patience = 10,
  minimum_learning_rate = 5e-05,
  clip_gradient = 10,
  weight_decay = 1e-08,
  init = "xavier",
  ctx = NULL,
  hybridize = TRUE,
  context_length = NULL,
  num_layers = 2,
  num_cells = 40,
  cell_type = "lstm",
  dropout_rate = 0.1,
  use_feat_dynamic_real = FALSE,
  use_feat_static_cat = FALSE,
  use_feat_static_real = FALSE,
  cardinality = NULL,
  embedding_dimension = NULL,
  distr_output = "default",
  scaling = TRUE,
  lags_seq = NULL,
  time_features = NULL,
  num_parallel_samples = 100
)
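
As a rough sketch of a direct call (not from the package documentation): the data below is fabricated, the names xreg_tbl, y_vec, and model_fit are hypothetical, and fitting requires a working GluonTS Python environment. In typical use this bridge is invoked through the package's parsnip interface rather than called directly.

library(modeltime.gluonts)

# Hypothetical monthly series: `x` carries the date index and the `id`
# column; `y` is the aligned numeric target.
xreg_tbl <- data.frame(
  date = seq(as.Date("2015-01-01"), by = "month", length.out = 60),
  id   = "series_1"
)
y_vec <- rnorm(60)  # placeholder target values

model_fit <- deepar_fit_impl(
  x                 = xreg_tbl,
  y                 = y_vec,
  freq              = "M",   # Pandas offset alias for monthly data
  prediction_length = 12,    # forecast 12 steps ahead
  id                = "id",  # quoted name of the identifier column in `x`
  epochs            = 5
)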

Arguments

x

A data frame of xregs (exogenous regressors)

y

A numeric vector of values to fit

freq

A Pandas time-series frequency, such as "5min" for 5 minutes or "D" for daily. Refer to the Pandas Offset Aliases documentation.

prediction_length

Numeric value indicating the length of the prediction horizon

id

A quoted column name that tracks the GluonTS FieldName "item_id"

epochs

Number of epochs to train the network (default: 5).

batch_size

Number of examples in each batch (default: 32).

num_batches_per_epoch

Number of batches per epoch (default: 50).

learning_rate

Initial learning rate (default: 10^-3).

learning_rate_decay_factor

Factor (between 0 and 1) by which to decrease the learning rate (default: 0.5). See the schedule sketch after this argument list.

patience

The patience (a nonnegative integer) to observe before reducing the learning rate (default: 10).

minimum_learning_rate

Lower bound for the learning rate (default: 5×10^-5).

clip_gradient

Maximum value of the gradient; gradients larger than this are clipped (default: 10).

weight_decay

The weight decay (L2 regularization) coefficient. Modifies the objective by adding a penalty for large weights (default: 10^-8).

init

Initializer of the weights of the network (default: "xavier").

ctx

The MXNet CPU/GPU context. Refer to the MXNet documentation on using CPUs and GPUs (default: NULL, which uses the CPU).

hybridize

Increases efficiency by using symbolic programming (default: TRUE).

context_length

Number of steps to unroll the RNN for before computing predictions (default: NULL, in which case context_length = prediction_length)

num_layers

Number of RNN layers (default: 2)

num_cells

Number of RNN cells for each layer (default: 40)

cell_type

Type of recurrent cells to use (available: 'lstm' or 'gru'; default: 'lstm')

dropout_rate

Dropout regularization parameter (default: 0.1)

use_feat_dynamic_real

Whether to use the feat_dynamic_real field from the data (default: FALSE)

use_feat_static_cat

Whether to use the feat_static_cat field from the data (default: FALSE)

use_feat_static_real

Whether to use the feat_static_real field from the data (default: FALSE)

cardinality

Number of values of each categorical feature. This must be set if use_feat_static_cat == TRUE (default: NULL)

embedding_dimension

Dimension of the embeddings for categorical features (default: min(50, (cat + 1) // 2) for each cat in cardinality, using Python integer division; see the R translation after this argument list)

distr_output

Distribution to use to evaluate observations and sample predictions (default: "default", i.e., the GluonTS StudentTOutput())

scaling

Whether to automatically scale the target values (default: TRUE)

lags_seq

Indices of the lagged target values to use as inputs of the RNN (default: NULL, in which case these are automatically determined based on freq)

time_features

Time features to use as inputs of the RNN (default: NULL, in which case these are automatically determined based on freq)

num_parallel_samples

Number of evaluation samples per time series to increase parallelism during inference. This is a model optimization that does not affect the accuracy (default: 100)
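
As referenced under learning_rate_decay_factor, the learning-rate arguments describe a reduce-on-plateau schedule: after the configured patience the rate is multiplied by learning_rate_decay_factor, and it never falls below minimum_learning_rate. A minimal sketch of one reduction step (next_learning_rate is a hypothetical helper, not a package function):

# One reduction step of the schedule described by the arguments above.
next_learning_rate <- function(current_lr,
                               decay_factor = 0.5,
                               minimum_lr   = 5e-05) {
  max(current_lr * decay_factor, minimum_lr)
}

next_learning_rate(0.001)   # 5e-04
next_learning_rate(5e-04)   # 2.5e-04
next_learning_rate(6e-05)   # 5e-05, clipped at the lower bound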
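
The embedding_dimension default is stated in Python list-comprehension form. Translated to R, it computes, for each entry of cardinality, half of (cardinality + 1) by integer division, capped at 50. A sketch (default_embedding_dimension is a hypothetical helper, not exported by the package):

# R translation of the Python default:
# [min(50, (cat + 1) // 2) for cat in cardinality]
default_embedding_dimension <- function(cardinality) {
  vapply(cardinality, function(cat) min(50, (cat + 1) %/% 2), numeric(1))
}

default_embedding_dimension(c(3, 7, 500))  # 2 4 50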

