optim_radam {torchopt}    R Documentation
Description:

R implementation of the RAdam optimizer proposed by Liu et al. (2019). We used the PyTorch implementation as the basis for ours.

From the abstract of the paper by Liu et al. (2019): The learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence and improving generalization for adaptive stochastic optimization algorithms like RMSprop and Adam. Here, we study its mechanism in detail. Pursuing the theory behind warmup, we identify a problem of the adaptive learning rate (i.e., it has problematically large variance in the early stage), suggest warmup works as a variance reduction technique, and provide both empirical and theoretical evidence to verify our hypothesis. We further propose RAdam, a new variant of Adam, by introducing a term to rectify the variance of the adaptive learning rate. Extensive experimental results on image classification, language modeling, and neural machine translation verify our intuition and demonstrate the effectiveness and robustness of our proposed method.
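As a sketch of the rectification described above, following the notation of Liu et al. (2019) (the package's internal code may differ in details such as the variance-tractability threshold and where eps enters the denominator):

    m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t, \qquad
    v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2, \qquad
    \hat{m}_t = \frac{m_t}{1 - \beta_1^t}

    \rho_\infty = \frac{2}{1 - \beta_2} - 1, \qquad
    \rho_t = \rho_\infty - \frac{2 t \beta_2^t}{1 - \beta_2^t}

    r_t = \sqrt{\frac{(\rho_t - 4)(\rho_t - 2)\,\rho_\infty}
                     {(\rho_\infty - 4)(\rho_\infty - 2)\,\rho_t}}

    \theta_t = \begin{cases}
      \theta_{t-1} - \mathrm{lr} \cdot r_t\, \hat{m}_t \Big/ \left(\sqrt{v_t / (1 - \beta_2^t)} + \epsilon\right), & \rho_t > 4 \text{ (variance tractable)} \\
      \theta_{t-1} - \mathrm{lr} \cdot \hat{m}_t, & \text{otherwise.}
    \end{cases}

In the early iterations, when \rho_t is small, the adaptive denominator is skipped and the update falls back to a plain momentum step, which plays the same role as a warmup schedule.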
Usage:

optim_radam(
  params,
  lr = 0.01,
  betas = c(0.9, 0.999),
  eps = 1e-08,
  weight_decay = 0
)
Arguments:

params
    List of parameters to optimize.

lr
    Learning rate (default: 0.01).

betas
    Coefficients used for computing running averages of the gradient and its square (default: c(0.9, 0.999)).

eps
    Term added to the denominator to improve numerical stability (default: 1e-8).

weight_decay
    Weight decay (L2 penalty) (default: 0).
Value:

A torch optimizer object implementing the step method.
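A minimal sketch of how the returned optimizer object is typically driven in a training loop (the nn_linear model, random data, and MSE loss below are hypothetical illustrations, not part of torchopt):

if (torch::torch_is_installed()) {
  library(torch)
  # hypothetical model and data, for illustration only
  model <- nn_linear(10, 1)
  x <- torch_randn(32, 10)
  y <- torch_randn(32, 1)
  # create an RAdam optimizer over the model's parameters
  opt <- torchopt::optim_radam(model$parameters, lr = 0.01)
  # one training step
  opt$zero_grad()                      # clear gradients from previous steps
  loss <- nnf_mse_loss(model(x), y)    # forward pass and loss
  loss$backward()                      # backpropagate
  opt$step()                           # RAdam parameter update
}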
Author(s):

Gilberto Camara, gilberto.camara@inpe.br
Daniel Falbel, daniel.falble@gmail.com
Rolf Simoes, rolf.simoes@inpe.br
Felipe Souza, lipecaso@gmail.com
Alber Sanchez, alber.ipia@inpe.br
References:

Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, Jiawei Han, "On the Variance of the Adaptive Learning Rate and Beyond", International Conference on Learning Representations (ICLR) 2020. https://arxiv.org/abs/1908.03265
Examples:

if (torch::torch_is_installed()) {
  # Beale test function (log-transformed to compress the value range)
  beale <- function(x, y) {
    log((1.5 - x + x * y)^2 + (2.25 - x + x * y^2)^2 + (2.625 - x + x * y^3)^2)
  }
  # define optimizer
  optim <- torchopt::optim_radam
  # define hyperparameters
  opt_hparams <- list(lr = 0.01)
  # starting point
  x0 <- 3
  y0 <- 3
  # create tensors with gradient tracking enabled
  x <- torch::torch_tensor(x0, requires_grad = TRUE)
  y <- torch::torch_tensor(y0, requires_grad = TRUE)
  # instantiate optimizer
  optim <- do.call(optim, c(list(params = list(x, y)), opt_hparams))
  # run optimizer
  steps <- 400
  x_steps <- numeric(steps)
  y_steps <- numeric(steps)
  for (i in seq_len(steps)) {
    x_steps[i] <- as.numeric(x)
    y_steps[i] <- as.numeric(y)
    optim$zero_grad()
    z <- beale(x, y)
    z$backward()
    optim$step()
  }
  print(paste0("starting value = ", beale(x0, y0)))
  print(paste0("final value = ", beale(x_steps[steps], y_steps[steps])))
}
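Since the example records the coordinates visited at each step, the optimization path can be inspected with base R graphics. This is an optional follow-up, not part of the package documentation; it assumes x_steps, y_steps, x0, and y0 from the example above are still in the workspace:

# plot the path taken by RAdam on the Beale function
plot(x_steps, y_steps, type = "l", xlab = "x", ylab = "y",
     main = "RAdam trajectory on the Beale function")
points(x0, y0, pch = 19, col = "red")    # starting point (3, 3)
points(3, 0.5, pch = 4, col = "blue")    # global minimum of the Beale function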