Description
Lazy Adam: a variant of the Adam optimizer that handles sparse updates more efficiently. The original Adam algorithm maintains two moving-average accumulators for each trainable variable and updates both at every step; LazyAdam updates the accumulators only for the variable slices that appear in the current batch's gradient. This can improve throughput for models with large embedding layers, at the cost of slightly different semantics than the original Adam algorithm, which may lead to different empirical results.
Usage

optimizer_lazy_adam(
  learning_rate = 0.001,
  beta_1 = 0.9,
  beta_2 = 0.999,
  epsilon = 1e-07,
  amsgrad = FALSE,
  name = "LazyAdam",
  clipnorm = NULL,
  clipvalue = NULL,
  decay = NULL,
  lr = NULL
)
Arguments

learning_rate
A Tensor, a floating point value, or a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule. The learning rate. (Passing a schedule is sketched after this list.)

beta_1
A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates.

beta_2
A float value or a constant float tensor. The exponential decay rate for the 2nd moment estimates.

epsilon
A small constant for numerical stability. This epsilon is "epsilon hat" in Adam: A Method for Stochastic Optimization (Kingma et al., 2014), in the formula just before Section 2.1, not the epsilon in Algorithm 1 of the paper.

amsgrad
Boolean. Whether to apply the AMSGrad variant of this algorithm from the paper "On the Convergence of Adam and Beyond". Note that this argument is currently not supported and can only be FALSE.

name
Optional name for the operations created when applying gradients. Defaults to "LazyAdam".

clipnorm
Clip gradients by norm.

clipvalue
Clip gradients by value.

decay
Included for backward compatibility to allow time inverse decay of the learning rate.

lr
Included for backward compatibility; use learning_rate instead.
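Since learning_rate accepts a tf.keras.optimizers.schedules.LearningRateSchedule, a decaying schedule can be passed in place of a fixed value. A minimal sketch, assuming the tensorflow R package is attached alongside tfaddons (the schedule constants are illustrative, not recommended settings):

library(tensorflow)
library(tfaddons)

# A Keras learning-rate schedule: start at 1e-3 and decay by a
# factor of 0.96 every 10000 steps (illustrative values).
schedule <- tf$keras$optimizers$schedules$ExponentialDecay(
  initial_learning_rate = 0.001,
  decay_steps = 10000L,
  decay_rate = 0.96
)

# The schedule object goes where a fixed learning rate would.
opt <- optimizer_lazy_adam(learning_rate = schedule)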
Value

Optimizer for use with keras::compile()
Examples

## Not run:
keras_model_sequential() %>%
layer_dense(32, input_shape = c(784)) %>%
compile(
optimizer = optimizer_lazy_adam(),
    loss = 'binary_crossentropy',
    metrics = 'accuracy'
)
## End(Not run)
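LazyAdam is mainly useful for models whose gradients are sparse, such as ones with large embedding layers where each batch touches only a few embedding rows. A hedged sketch of that setting, assuming the keras package is attached (layer sizes and hyperparameters are illustrative):

## Not run:
library(keras)
library(tfaddons)

# Each batch only produces gradients for the embedding rows it looks
# up, so LazyAdam's lazy accumulator updates avoid touching the rest.
model <- keras_model_sequential() %>%
  layer_embedding(input_dim = 10000, output_dim = 64) %>%
  layer_global_average_pooling_1d() %>%
  layer_dense(1, activation = "sigmoid")

model %>% compile(
  optimizer = optimizer_lazy_adam(),
  loss = "binary_crossentropy",
  metrics = "accuracy"
)
## End(Not run)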