
torchopt


The torchopt package provides R implementations of deep learning optimizers proposed in the literature. It is intended to support the use of the torch package in R.

Installation

To install the stable version of torchopt from CRAN, run:

install.packages("torchopt")

To install the development version of torchopt, run:

library(devtools)
install_github("e-sensing/torchopt")

Provided optimizers

The torchopt package provides R implementations of the following torch optimizers, each demonstrated in the sections below: optim_adabelief, optim_adabound, optim_adahessian, optim_adamw, optim_madgrad, optim_nadam, optim_qhadam, optim_radam, optim_swats, and optim_yogi. A minimal sketch of how to plug one of them into a torch training loop follows.

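The example below is a minimal sketch of how one of these optimizers can be dropped into a standard torch training loop, assuming torchopt optimizers follow the usual torch optimizer interface; the toy linear-regression data, the model, and the learning rate are illustrative choices, not part of torchopt itself.

library(torch)
library(torchopt)

# toy data: y = 2x + 1 plus noise (illustrative only)
x <- torch_randn(100, 1)
y <- 2 * x + 1 + 0.1 * torch_randn(100, 1)

# a one-layer linear model optimized with torchopt's AdamW
model <- nn_linear(1, 1)
opt   <- optim_adamw(model$parameters, lr = 0.01)

for (epoch in 1:200) {
    opt$zero_grad()                    # clear gradients from the previous step
    loss <- nnf_mse_loss(model(x), y)  # mean squared error on the toy data
    loss$backward()                    # backpropagate
    opt$step()                         # update the model parameters
}
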
Optimization test functions

You can also test optimizers with the optimization test functions provided by torchopt, including "ackley", "beale", "booth", "bukin_n6", "easom", "goldstein_price", "himmelblau", "levi_n13", "matyas", "rastrigin", "rosenbrock", and "sphere". These test functions are useful for evaluating characteristics of optimization algorithms, such as convergence rate, precision, robustness, and performance, and they give an idea of the different situations that optimization algorithms can face.
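As an illustration of what such a test function computes, here is the standard Beale function written directly in R; this is the textbook definition, not torchopt's internal code.

# Beale function:
# f(x, y) = (1.5 - x + x*y)^2 + (2.25 - x + x*y^2)^2 + (2.625 - x + x*y^3)^2
# Its global minimum is f(3, 0.5) = 0; optimizers are judged by how quickly
# and reliably they reach it from a given starting point.
beale <- function(x, y) {
    (1.5 - x + x * y)^2 + (2.25 - x + x * y^2)^2 + (2.625 - x + x * y^3)^2
}
beale(3, 0.5)
#> [1] 0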

In what follows, we test the optimizers on the "beale" test function. To produce an animated GIF, we set plot_each_step = TRUE and capture each step's frame with the gifski package (a wrapper sketch follows the first example).

optim_adamw():

# test optim adamw
set.seed(12345)
torchopt::test_optim(
    optim = torchopt::optim_adamw,
    test_fn = "beale",
    opt_hparams = list(lr = 0.1),
    steps = 500,
    plot_each_step = TRUE
)
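
To actually write the animation to disk, the call above can be wrapped in gifski::save_gif(), which records every frame drawn inside the expression; the output file name and frame delay below are illustrative choices.

# capture each frame drawn by plot_each_step = TRUE into an animated GIF
gifski::save_gif(
    torchopt::test_optim(
        optim = torchopt::optim_adamw,
        test_fn = "beale",
        opt_hparams = list(lr = 0.1),
        steps = 500,
        plot_each_step = TRUE
    ),
    gif_file = "adamw_beale.gif",
    delay = 0.1
)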

optim_adabelief():

# load torchopt so test_optim() and the optimizers below are on the search path
library(torchopt)

set.seed(42)
test_optim(
    optim = optim_adabelief,
    opt_hparams = list(lr = 0.5),
    steps = 400,
    test_fn = "beale",
    plot_each_step = TRUE
)

optim_adabound():

# set manual seed
set.seed(22)
test_optim(
    optim = optim_adabound,
    opt_hparams = list(lr = 0.5),
    steps = 400,
    test_fn = "beale",
    plot_each_step = TRUE
)

optim_adahessian():

# set manual seed
set.seed(290356)
test_optim(
    optim = optim_adahessian,
    opt_hparams = list(lr = 0.2),
    steps = 500,
    test_fn = "beale",
    plot_each_step = TRUE
)

optim_madgrad():

set.seed(256)
test_optim(
    optim = optim_madgrad,
    opt_hparams = list(lr = 0.05),
    steps = 400,
    test_fn = "beale",
    plot_each_step = TRUE
)

optim_nadam():

set.seed(2903)
test_optim(
    optim = optim_nadam,
    opt_hparams = list(lr = 0.5, weight_decay = 0),
    steps = 500,
    test_fn = "beale",
    plot_each_step = TRUE
)

optim_qhadam():

set.seed(1024)
test_optim(
    optim = optim_qhadam,
    opt_hparams = list(lr = 0.1),
    steps = 500,
    test_fn = "beale",
    plot_each_step = TRUE
)

optim_radam():

set.seed(1024)
test_optim(
    optim = optim_radam,
    opt_hparams = list(lr = 1.0),
    steps = 500,
    test_fn = "beale",
    plot_each_step = TRUE
)

optim_swats():

set.seed(234)
test_optim(
    optim = optim_swats,
    opt_hparams = list(lr = 0.5),
    steps = 500,
    test_fn = "beale",
    plot_each_step = TRUE
)

optim_yogi():

# set manual seed
set.seed(66)
test_optim(
    optim = optim_yogi,
    opt_hparams = list(lr = 0.1),
    steps = 500,
    test_fn = "beale",
    plot_each_step = TRUE
)

Acknowledgements

We are thankful to Collin Donahue-Oponski https://github.com/colllin, Amir Gholami https://github.com/amirgholami, Liangchen Luo https://github.com/Luolc, Liyuan Liu https://github.com/LiyuanLucasLiu, Nikolay Novik https://github.com/jettify, Patrik Purgai https://github.com/Mrpatekful, Juntang Zhuang https://github.com/juntang-zhuang, and the PyTorch team https://github.com/pytorch/pytorch for providing the PyTorch code for the optimizers implemented in this package. We also thank Daniel Falbel https://github.com/dfalbel for providing support for the R version of PyTorch.

Code of Conduct

The torchopt project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.
