optimizer_adam: Optimizer that implements the Adam algorithm

View source: R/optimizers.R

Description

Optimizer that implements the Adam algorithm

Usage

optimizer_adam(
  learning_rate = 0.001,
  beta_1 = 0.9,
  beta_2 = 0.999,
  epsilon = 1e-07,
  amsgrad = FALSE,
  weight_decay = NULL,
  clipnorm = NULL,
  clipvalue = NULL,
  global_clipnorm = NULL,
  use_ema = FALSE,
  ema_momentum = 0.99,
  ema_overwrite_frequency = NULL,
  jit_compile = TRUE,
  name = "Adam",
  ...
)
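
As a minimal illustration of the call above, the sketch below compiles a small sequential model with optimizer_adam() at its defaults; the model architecture, loss, and shapes are placeholder assumptions, not part of this function's contract.

library(keras)

# Placeholder model: two dense layers on 10-dimensional input (assumed shapes)
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(10)) %>%
  layer_dense(units = 1)

# Attach the Adam optimizer at its default settings
model %>% compile(
  optimizer = optimizer_adam(learning_rate = 0.001),
  loss = "mse"
)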

Arguments

learning_rate

A tf.Tensor, floating point value, a schedule that is a tf.keras.optimizers.schedules.LearningRateSchedule, or a callable that takes no arguments and returns the actual value to use. The learning rate. Defaults to 0.001.
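
For example, a schedule object can be passed in place of a fixed value. The sketch below assumes the learning_rate_schedule_* helpers available in recent versions of the keras package:

# Exponentially decay the learning rate every 1000 steps (illustrative values)
schedule <- learning_rate_schedule_exponential_decay(
  initial_learning_rate = 0.001,
  decay_steps = 1000,
  decay_rate = 0.96
)
opt <- optimizer_adam(learning_rate = schedule)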

beta_1

A float value or a constant float tensor, or a callable that takes no arguments and returns the actual value to use. The exponential decay rate for the 1st moment estimates. Defaults to 0.9.

beta_2

A float value or a constant float tensor, or a callable that takes no arguments and returns the actual value to use. The exponential decay rate for the 2nd moment estimates. Defaults to 0.999.

epsilon

A small constant for numerical stability. This epsilon is "epsilon hat" in the Kingma and Ba paper (in the formula just before Section 2.1), not the epsilon in Algorithm 1 of the paper. Defaults to 1e-7.

amsgrad

Boolean. Whether to apply the AMSGrad variant of this algorithm from the paper "On the Convergence of Adam and Beyond". Defaults to FALSE.

weight_decay

Float, defaults to NULL. If set, weight decay is applied.

clipnorm

Float. If set, the gradient of each weight is individually clipped so that its norm is no higher than this value.

clipvalue

Float. If set, the gradient of each weight is clipped element-wise so that no value is higher than this value.

global_clipnorm

Float. If set, the gradient of all weights is clipped so that their global norm is no higher than this value.
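
To make the three clipping modes above concrete, the sketch below shows one of each; note that recent Keras versions accept at most one clipping argument per optimizer (an assumption worth verifying against your installed version):

opt <- optimizer_adam(clipnorm = 1.0)         # clip each weight's gradient by its own norm
opt <- optimizer_adam(clipvalue = 0.5)        # clip each gradient element to [-0.5, 0.5]
opt <- optimizer_adam(global_clipnorm = 1.0)  # clip by the global norm across all gradients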

use_ema

Boolean, defaults to FALSE. If TRUE, exponential moving average (EMA) is applied. EMA consists of computing an exponential moving average of the weights of the model (as the weight values change after each training batch), and periodically overwriting the weights with their moving average.

ema_momentum

Float, defaults to 0.99. Only used if use_ema = TRUE. This is the momentum to use when computing the EMA of the model's weights: new_average = ema_momentum * old_average + (1 - ema_momentum) * current_variable_value.
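
The update rule quoted above, written out in plain R with illustrative values (the optimizer applies this internally to each model weight):

ema_momentum <- 0.99
old_average <- 0.50
current_variable_value <- 0.40
new_average <- ema_momentum * old_average +
  (1 - ema_momentum) * current_variable_value
new_average  # 0.499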

ema_overwrite_frequency

Int or NULL, defaults to NULL. Only used if use_ema = TRUE. Every ema_overwrite_frequency steps of iterations, the model variables are overwritten by their moving average. If NULL, the optimizer does not overwrite model variables in the middle of training, and you need to explicitly overwrite the variables at the end of training by calling optimizer$finalize_variable_values() (which updates the model variables in place). When using the built-in fit() training loop, this happens automatically after the last epoch, and you don't need to do anything.

jit_compile

Boolean, defaults to TRUE. If TRUE, the optimizer will use XLA compilation. If no GPU device is found, this flag will be ignored.

name

String. The name to use for momentum accumulator weights created by the optimizer.

...

Used for backward and forward compatibility.

Details

Adam optimization is a stochastic gradient descent method that is based on adaptive estimation of first-order and second-order moments.

According to Kingma et al., 2014, the method is "computationally efficient, has little memory requirement, invariant to diagonal rescaling of gradients, and is well suited for problems that are large in terms of data/parameters".
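
For illustration, here is a plain-R sketch of a single Adam update on one parameter, following Algorithm 1 of Kingma and Ba (2014). This is not the library's internal implementation; as noted under epsilon above, the actual update folds the bias correction into the step size and uses "epsilon hat".

# One Adam step: theta is the parameter, grad its gradient,
# m and v the running moment estimates, t the (1-based) step count
adam_step <- function(theta, grad, m, v, t,
                      lr = 0.001, beta_1 = 0.9, beta_2 = 0.999,
                      epsilon = 1e-7) {
  m <- beta_1 * m + (1 - beta_1) * grad    # 1st-moment (mean) estimate
  v <- beta_2 * v + (1 - beta_2) * grad^2  # 2nd-moment (uncentered variance) estimate
  m_hat <- m / (1 - beta_1^t)              # bias-corrected 1st moment
  v_hat <- v / (1 - beta_2^t)              # bias-corrected 2nd moment
  theta <- theta - lr * m_hat / (sqrt(v_hat) + epsilon)
  list(theta = theta, m = m, v = v)
}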

Value

Optimizer for use with compile.keras.engine.training.Model.

See Also

Other optimizers: optimizer_adadelta(), optimizer_adagrad(), optimizer_adamax(), optimizer_ftrl(), optimizer_nadam(), optimizer_rmsprop(), optimizer_sgd()

