Recurrent_GPD_net    R Documentation
Description

A recurrent neural network as a torch::nn_module, designed for generalized Pareto distribution parameter prediction, with sequential dependence.
Usage

Recurrent_GPD_net(
  type = c("lstm", "gru"),
  nb_input_features,
  hidden_size,
  num_layers = 1,
  dropout = 0,
  shape_fixed = FALSE,
  device = EQRN::default_device()
)
Arguments

type
    the type of recurrent architecture, can be one of "lstm" (default) or "gru",

nb_input_features
    the input size (i.e. the number of features),

hidden_size
    the dimension of the hidden latent state variables in the recurrent network,

num_layers
    the number of recurrent layers,

dropout
    probability parameter for dropout before each hidden layer for regularization during training,

shape_fixed
    whether the shape estimate depends on the covariates or not (bool),

device
    a torch::torch_device() for an internal constant vector. Defaults to default_device().
Details

The constructor allows specifying:

  - the type of recurrent architecture, can be one of "lstm" (default) or "gru",
  - the input size (i.e. the number of features),
  - the dimension of the hidden latent state variables in the recurrent network,
  - the number of recurrent layers,
  - the probability parameter for dropout before each hidden layer for regularization during training,
  - whether the shape estimate depends on the covariates or not (bool),
  - a torch::torch_device() for an internal constant vector, defaulting to default_device().
Value

The specified recurrent GPD network as a torch::nn_module.
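The following is a minimal usage sketch, not taken from the package documentation. It constructs the module with illustrative hyperparameters and assumes the forward pass accepts a batch of sequences shaped (batch, sequence length, nb_input_features) and returns the predicted GPD parameters; the input tensor here is simulated purely for illustration.

library(torch)
library(EQRN)

# Construct the recurrent GPD network (LSTM variant, one recurrent layer).
net <- Recurrent_GPD_net(
  type = "lstm",
  nb_input_features = 3,
  hidden_size = 16,
  num_layers = 1,
  dropout = 0,
  shape_fixed = FALSE,
  device = EQRN::default_device()
)

# Hypothetical forward pass: a batch of 8 sequences of length 10 with
# 3 features each (assumed input shape). The output is assumed to hold
# the predicted GPD parameters described above.
x <- torch_randn(8, 10, 3)
params <- net(x)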