Description
A predefined function that is used as a model in "ttgsea". It is a simple bidirectional GRU model, but you can define your own model instead. The loss function is "mean_squared_error" and the optimizer is "adam". Pearson correlation is used as a metric.
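As a rough illustration, the architecture this description implies can be sketched as follows. This is a minimal sketch only: the bidirectional merge mode, masking, and activations are assumptions, not taken from the actual "bi_gru" source, and "metric_pearson_correlation" is the metric function exported by "ttgsea".

library(keras)

# Sketch of the described architecture: embedding -> bidirectional GRU ->
# single dense output, compiled with MSE loss, adam, and Pearson correlation.
# Layer settings here are assumptions; see the package source for the real model.
bi_gru_sketch <- function(num_tokens, embedding_dim, length_seq, num_units) {
  model <- keras::keras_model_sequential() %>%
    keras::layer_embedding(input_dim = num_tokens,
                           output_dim = embedding_dim,
                           input_length = length_seq) %>%
    keras::bidirectional(keras::layer_gru(units = num_units)) %>%
    keras::layer_dense(units = 1)
  model %>%
    keras::compile(loss = "mean_squared_error",
                   optimizer = "adam",
                   metrics = keras::custom_metric("pearson_correlation",
                                                  ttgsea::metric_pearson_correlation))
  model
}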
Usage

bi_gru(num_tokens, embedding_dim, length_seq, num_units)
Arguments

num_tokens
maximum number of tokens

embedding_dim
a non-negative integer for the dimension of the dense embedding

length_seq
length of input sequences, the input length of "layer_embedding"

num_units
dimensionality of the output space in the GRU layer
Value

model (a compiled keras model)
Author(s)

Dongmin Jung
See Also

keras::keras_model_sequential, keras::layer_embedding, keras::layer_gru, keras::bidirectional, keras::layer_dense, keras::compile
Examples

library(reticulate)
if (keras::is_keras_available() && reticulate::py_available()) {
  num_tokens <- 1000
  length_seq <- 30
  embedding_dim <- 50
  num_units <- 32
  # predefined bidirectional GRU model
  model <- bi_gru(num_tokens, embedding_dim, length_seq, num_units)

  # user-defined model: stacked GRU
  num_units_1 <- 32
  num_units_2 <- 16
  stacked_gru <- function(num_tokens, embedding_dim, length_seq,
                          num_units_1, num_units_2)
  {
    model <- keras::keras_model_sequential() %>%
      keras::layer_embedding(input_dim = num_tokens,
                             output_dim = embedding_dim,
                             input_length = length_seq,
                             mask_zero = TRUE) %>%
      keras::layer_gru(units = num_units_1,
                       activation = "relu",
                       return_sequences = TRUE) %>%
      keras::layer_gru(units = num_units_2,
                       activation = "relu") %>%
      keras::layer_dense(units = 1)
    model %>%
      keras::compile(loss = "mean_squared_error",
                     optimizer = "adam",
                     metrics = keras::custom_metric("pearson_correlation",
                                                    metric_pearson_correlation))
  }
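  # illustrative call (not in the original example): build the user-defined
  # stacked model the same way bi_gru is called above
  stacked_model <- stacked_gru(num_tokens, embedding_dim, length_seq,
                               num_units_1, num_units_2)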
}