Description

Word Embedding + Deep Multilayer Perceptron
Usage

keras_deep_mlp(input_dim, embed_dim = 64, seq_len, hidden_dims = c(256,
  128, 64), hidden_fun = "relu", output_fun = "softmax", output_dim)
Arguments

input_dim    Number of unique vocabulary tokens
embed_dim    Dimension of the word-embedding vectors (64 by default)
seq_len      Length of the input sequences
hidden_dims  Number of neurons per hidden layer, as a vector of integers, e.g. c(256, 128, 64)
hidden_fun   Hidden-layer activation function ("relu" by default)
output_fun   Output activation function ("softmax" by default)
output_dim   Number of neurons in the output layer
Value

A keras model
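As a rough illustration, the architecture described above (an embedding layer feeding a stack of dense layers) could be built directly with the keras R package. This is a hedged sketch, not the package's actual implementation: the example argument values are invented, and whether the wrapper flattens or pools the embedding output before the dense stack is an assumption.

```r
library(keras)

# Hypothetical equivalent of:
# keras_deep_mlp(input_dim = 10000, embed_dim = 64, seq_len = 100,
#                hidden_dims = c(256, 128, 64), output_dim = 10)
model <- keras_model_sequential() %>%
  # Map each of 10000 tokens to a 64-dimensional vector; inputs are
  # sequences of length 100
  layer_embedding(input_dim = 10000, output_dim = 64, input_length = 100) %>%
  # Flattening the embedded sequence is an assumption; the package may
  # pool instead
  layer_flatten() %>%
  # Dense stack sized by hidden_dims, using hidden_fun = "relu"
  layer_dense(units = 256, activation = "relu") %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_dense(units = 64, activation = "relu") %>%
  # Output layer sized by output_dim, using output_fun = "softmax"
  layer_dense(units = 10, activation = "softmax")
```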