optim_adadelta: Adadelta optimizer

Adadelta optimizer

Description

Adadelta was proposed in ADADELTA: An Adaptive Learning Rate Method (Zeiler, 2012).

Usage

optim_adadelta(params, lr = 1, rho = 0.9, eps = 1e-06, weight_decay = 0)

Arguments

params

(iterable): list of parameters to optimize or list defining parameter groups

lr

(float, optional): learning rate, a coefficient that scales the update before it is applied to the parameters (default: 1)

rho

(float, optional): coefficient used for computing a running average of squared gradients (default: 0.9)

eps

(float, optional): term added to the denominator to improve numerical stability (default: 1e-6)

weight_decay

(float, optional): weight decay (L2 penalty) (default: 0)
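
For illustration, a short sketch of constructing the optimizer over a plain list of tensors with non-default hyperparameters; the tensor and the values chosen here are arbitrary:

if (torch_is_installed()) {
  w <- torch_zeros(2, 2, requires_grad = TRUE)
  opt <- optim_adadelta(list(w), rho = 0.95, eps = 1e-7, weight_decay = 1e-4)
}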

Warning

If you need to move a model to GPU via $cuda(), please do so before constructing optimizers for it. Parameters of a model after $cuda() will be different objects from those before the call. In general, you should make sure that the objects pointed to by model parameters subject to optimization remain the same over the whole lifecycle of optimizer creation and usage.
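
For example, a minimal sketch of this ordering, assuming a CUDA-capable device is available; the nn_linear model and learning rate below are only illustrative:

if (torch_is_installed() && cuda_is_available()) {
  model <- nn_linear(10, 1)
  model$cuda()                                           # move the parameters to the GPU first ...
  optimizer <- optim_adadelta(model$parameters, lr = 1)  # ... then construct the optimizer
}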

Note

According to the original paper, the decaying average of the squared gradients is computed as follows:

E[g^2]_{t} = \rho E[g^2]_{t-1} + (1 - \rho) g_{t}^2

The RMS of the gradients up to time t is then:

RMS[g]_{t} = \sqrt{E[g^2]_{t} + \epsilon}

Adadelta update rule:

\begin{array}{ll}
\Delta \theta_{t} = - \frac{RMS[\Delta \theta]_{t-1}}{RMS[g]_{t}} g_{t} \\
\theta_{t+1} = \theta_{t} + \Delta \theta_{t}
\end{array}
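
As a worked illustration of these formulas, here is a base-R sketch of a single Adadelta step on a plain numeric parameter; the gradient, the hyperparameter values, and the lr scaling of the final update (which corresponds to the optimizer's lr argument) are illustrative assumptions, not part of this help page:

rho <- 0.9; eps <- 1e-6; lr <- 1
theta <- c(1, 2)                  # parameters theta_t
g <- c(0.1, -0.3)                 # gradient g_t (made up for illustration)
Eg2 <- c(0, 0); Edx2 <- c(0, 0)   # running averages E[g^2] and E[delta^2]

Eg2   <- rho * Eg2 + (1 - rho) * g^2               # E[g^2]_t
delta <- -sqrt(Edx2 + eps) / sqrt(Eg2 + eps) * g   # -RMS[delta]_{t-1} / RMS[g]_t * g_t
Edx2  <- rho * Edx2 + (1 - rho) * delta^2          # accumulate E[delta^2]_t
theta <- theta + lr * delta                        # theta_{t+1}; lr scales the update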

Examples

if (torch_is_installed()) {
## Not run: 
optimizer <- optim_adadelta(model$parameters, lr = 0.1)
optimizer$zero_grad()
loss_fn(model(input), target)$backward()
optimizer$step()

## End(Not run)
}
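
A self-contained variant of the example above, with an illustrative linear model, random data, and MSE loss standing in for model, input, target, and loss_fn:

if (torch_is_installed()) {
  model <- nn_linear(4, 1)
  input <- torch_randn(8, 4)
  target <- torch_randn(8, 1)
  optimizer <- optim_adadelta(model$parameters, lr = 0.1)
  optimizer$zero_grad()
  loss <- nnf_mse_loss(model(input), target)
  loss$backward()
  optimizer$step()
}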
