optimizer_adadelta

Optimizer that implements the Adadelta algorithm.
Adadelta optimization is a stochastic gradient descent method that adapts the learning rate per dimension to address two drawbacks:

- The continual decay of learning rates throughout training.
- The need for a manually selected global learning rate.
Adadelta is a more robust extension of Adagrad that adapts learning rates based on a moving window of gradient updates, instead of accumulating all past gradients. This way, Adadelta continues learning even when many updates have been done. Compared to Adagrad, in the original version of Adadelta you don't have to set an initial learning rate. In this version, the initial learning rate can be set, as in most other Keras optimizers.
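To make the moving-window idea concrete, here is a plain-R sketch of one Adadelta step following the standard recurrences; the function name, the state list, and the final scaling by the learning rate are illustrative and not part of the keras3 API:

# Illustrative sketch only; adadelta_step and state are hypothetical names.
adadelta_step <- function(x, grad, state,
                          learning_rate = 0.001, rho = 0.95, epsilon = 1e-7) {
  # Decaying average of squared gradients (the moving window).
  state$accum_grad <- rho * state$accum_grad + (1 - rho) * grad^2
  # Update scaled by the RMS of past updates over the RMS of gradients.
  delta <- -sqrt(state$accum_delta + epsilon) /
    sqrt(state$accum_grad + epsilon) * grad
  # Decaying average of squared updates.
  state$accum_delta <- rho * state$accum_delta + (1 - rho) * delta^2
  list(x = x + learning_rate * delta, state = state)
}

# Usage: one step on a three-parameter vector.
state <- list(accum_grad = rep(0, 3), accum_delta = rep(0, 3))
res <- adadelta_step(x = c(1, 2, 3), grad = c(0.1, -0.2, 0.3), state = state)

Because the accumulators decay at rate rho rather than growing without bound (as in Adagrad), the effective step size does not shrink to zero over long training runs.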
Usage

optimizer_adadelta(
  learning_rate = 0.001,
  rho = 0.95,
  epsilon = 1e-07,
  weight_decay = NULL,
  clipnorm = NULL,
  clipvalue = NULL,
  global_clipnorm = NULL,
  use_ema = FALSE,
  ema_momentum = 0.99,
  ema_overwrite_frequency = NULL,
  name = "adadelta",
  ...,
  loss_scale_factor = NULL,
  gradient_accumulation_steps = NULL
)
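As a usage sketch (assuming the keras3 package and a configured backend are installed; the model architecture here is illustrative), the optimizer can be passed to compile():

library(keras3)

model <- keras_model_sequential(input_shape = 8) |>
  layer_dense(units = 16, activation = "relu") |>
  layer_dense(units = 1)

model |> compile(
  optimizer = optimizer_adadelta(learning_rate = 0.001, rho = 0.95),
  loss = "mse"
)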
Arguments

learning_rate
A float, a learning rate schedule instance, or a callable that takes no arguments and returns the actual value to use. The learning rate. Defaults to 0.001.

rho
A floating point value. The decay rate. Defaults to 0.95.

epsilon
Small floating point value for maintaining numerical stability.

weight_decay
Float. If set, weight decay is applied.

clipnorm
Float. If set, the gradient of each weight is individually clipped so that its norm is no higher than this value.

clipvalue
Float. If set, the gradient of each weight is clipped to be no higher than this value.

global_clipnorm
Float. If set, the gradient of all weights is clipped so that their global norm is no higher than this value.

use_ema
Boolean, defaults to FALSE. If TRUE, an exponential moving average (EMA) of the model's weights is computed as the weight values change after each training batch, and the weights are periodically overwritten with their moving average.

ema_momentum
Float, defaults to 0.99. Only used if use_ema = TRUE. The momentum to use when computing the EMA of the model's weights: new_average = ema_momentum * old_average + (1 - ema_momentum) * current_variable_value.

ema_overwrite_frequency
Int or NULL, defaults to NULL. Only used if use_ema = TRUE. Every ema_overwrite_frequency steps, the model variables are overwritten by their moving average. If NULL, the optimizer does not overwrite model variables during training, and you need to overwrite them explicitly at the end of training by calling the optimizer's finalize_variable_values() method, which updates the model variables in place.

name
String. The name to use for momentum accumulator weights created by the optimizer.

...
For forward/backward compatibility.

loss_scale_factor
Float or NULL. If a float, the scale factor is multiplied by the loss before gradients are computed, and the inverse of the scale factor is multiplied by the gradients before variables are updated. Useful for preventing underflow during mixed-precision training. Alternatively, optimizer_loss_scale() will set a loss scale factor automatically.

gradient_accumulation_steps
Int or NULL. If an int, model and optimizer variables are not updated at every step; instead they are updated every gradient_accumulation_steps steps, using the average of the gradients accumulated since the last update. This is known as gradient accumulation and can be useful when the batch size is very small, to reduce gradient noise at each update (see the sketch after this list).
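As an illustration of the optional clipping, EMA, and accumulation arguments (a hedged sketch; the values chosen here are arbitrary, not recommendations):

opt <- optimizer_adadelta(
  learning_rate = 0.001,
  clipnorm = 1.0,                  # clip each gradient's norm to at most 1
  use_ema = TRUE,                  # track an EMA of the model weights
  ema_momentum = 0.99,
  gradient_accumulation_steps = 4  # apply averaged updates every 4 steps
)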
Value

an Optimizer instance
See Also

Other optimizers:
optimizer_adafactor()
optimizer_adagrad()
optimizer_adam()
optimizer_adam_w()
optimizer_adamax()
optimizer_ftrl()
optimizer_lamb()
optimizer_lion()
optimizer_loss_scale()
optimizer_nadam()
optimizer_rmsprop()
optimizer_sgd()