madgrad: 'MADGRAD' Method for Stochastic Optimization

MADGRAD is a Momentumized, Adaptive, Dual Averaged Gradient method for stochastic optimization: a 'best-of-both-worlds' optimizer with the generalization performance of stochastic gradient descent and convergence at least as fast as that of Adam, often faster. A drop-in optim_madgrad() implementation for the 'torch' package is provided, based on Defazio and Jelassi (2021) <arXiv:2101.11075>.
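The update rule combines dual averaging of gradients with an adaptive cube-root denominator and momentum via iterate averaging. Below is a plain-R sketch of a single step, for intuition only; the variable names and the mapping of 'momentum' to the paper's averaging coefficient are assumptions drawn from the paper, not the package's internals:

# One illustrative MADGRAD step for a numeric parameter vector.
# x: current iterate, x0: initial iterate, s/v: running dual averages,
# grad: gradient at x, k: step counter (0-based). All names illustrative.
madgrad_step <- function(x, x0, s, v, grad, k,
                         lr = 1e-2, momentum = 0.9, eps = 1e-6) {
  lambda <- lr * sqrt(k + 1)     # growing step-size sequence
  s <- s + lambda * grad         # dual average of gradients
  v <- v + lambda * grad^2       # dual average of squared gradients
  z <- x0 - s / (v^(1/3) + eps)  # adaptive step uses a cube root, not Adam's square root
  c_k <- 1 - momentum            # assumed mapping of 'momentum' to the paper's c_k
  x <- (1 - c_k) * x + c_k * z   # momentum as averaging toward the dual iterate
  list(x = x, s = s, v = v)      # s and v are carried to the next step
}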

Getting started
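optim_madgrad() is used like the optimizers in the 'torch' package. A minimal training-loop sketch on toy data (the model, data, and learning rate below are illustrative, not a recommendation):

library(torch)
library(madgrad)

# Toy linear-regression data (illustrative)
x <- torch_randn(100, 10)
y <- torch_randn(100, 1)

model <- nn_linear(10, 1)
opt <- optim_madgrad(model$parameters, lr = 0.1)

for (epoch in 1:20) {
  opt$zero_grad()                    # clear gradients from the previous step
  loss <- nnf_mse_loss(model(x), y)  # forward pass and loss
  loss$backward()                    # backpropagate
  opt$step()                         # MADGRAD parameter update
}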

Package details

Author: Daniel Falbel [aut, cre, cph], RStudio [cph], MADGRAD original implementation authors [cph]
Maintainer: Daniel Falbel <daniel@rstudio.com>
License: MIT + file LICENSE
Version: 0.1.0
Package repository: CRAN
Installation: Install the latest version of this package by entering the following in R:
install.packages("madgrad")

