Description
Word embedding + deep (bidirectional) long short-term memory (LSTM) network.
Usage

keras_deep_lstm(input_dim, embed_dim = 128, seq_len = 50,
  hidden_dims = c(128, 64, 32), bidirectional = FALSE,
  output_fun = "softmax", output_dim = 2)
Arguments

input_dim      Number of unique vocabulary tokens.
embed_dim      Dimension of the word embedding vectors.
seq_len        Length of the input sequences.
hidden_dims    Number of neurons per layer, as a vector of integers, e.g. c(256, 128, 64).
bidirectional  Whether to wrap each LSTM layer in a bidirectional wrapper; default is FALSE.
output_fun     Output activation function.
output_dim     Number of neurons in the output layer.
Details

Stacks LSTM layers of different sizes on top of the embedding layer; all but the last LSTM layer return full sequences so each layer can feed the next. A minimal sketch of that stacking pattern follows below.
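The pattern is the standard Keras one; the sketch below uses illustrative values (vocabulary size, embedding dimension, and sequence length are assumptions, not taken from the package source):

library(keras)

hidden_dims <- c(128, 64, 32)

model <- keras_model_sequential() %>%
  layer_embedding(input_dim = 20000,   # assumed vocabulary size
                  output_dim = 128,    # embedding dimension
                  input_length = 50)   # sequence length

# All but the last LSTM layer must return full sequences so the next
# layer receives a 3D (batch, timesteps, features) tensor.
for (i in seq_along(hidden_dims)) {
  model %>% layer_lstm(units = hidden_dims[i],
                       return_sequences = i < length(hidden_dims))
}

model %>% layer_dense(units = 2, activation = "softmax")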
Value

A keras model.
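A hypothetical end-to-end call, assuming tokenised inputs padded to seq_len; the compile settings are illustrative and not part of the documented interface:

library(keras)

model <- keras_deep_lstm(
  input_dim     = 20000,            # assumed vocabulary size
  embed_dim     = 128,
  seq_len       = 50,
  hidden_dims   = c(128, 64, 32),
  bidirectional = FALSE,
  output_fun    = "softmax",
  output_dim    = 2
)

# The function returns an (assumed uncompiled) keras model,
# so a compile step like this would typically follow.
model %>% compile(
  loss      = "categorical_crossentropy",
  optimizer = "adam",
  metrics   = "accuracy"
)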