optim_ignite_sgd | R Documentation
Implements stochastic gradient descent, optionally with momentum. Nesterov momentum is based on the formula from the paper "On the importance of initialization and momentum in deep learning" (Sutskever et al., 2013).
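For reference, the update rule can be sketched in a few lines of plain R. This is a minimal sketch of the standard heavy-ball formulation the description refers to, not the package's internal implementation; the function and variable names below are illustrative only.

# One SGD step for a single parameter; `buf` is the momentum buffer
# (velocity). Illustrative sketch, not the torch internals.
sgd_step_sketch <- function(param, grad, buf, lr, momentum = 0,
                            dampening = 0, weight_decay = 0,
                            nesterov = FALSE) {
  if (weight_decay != 0) grad <- grad + weight_decay * param  # L2 penalty folds into the gradient
  if (momentum != 0) {
    buf <- momentum * buf + (1 - dampening) * grad  # v <- mu * v + (1 - tau) * g
    grad <- if (nesterov) grad + momentum * buf else buf  # Nesterov looks ahead along the velocity
  }
  list(param = param - lr * grad, buf = buf)  # p <- p - lr * direction
}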
Usage

optim_ignite_sgd(
  params,
  lr = optim_required(),
  momentum = 0,
  dampening = 0,
  weight_decay = 0,
  nesterov = FALSE
)
Arguments

params        (iterable) Iterable of parameters to optimize, or lists defining parameter groups.
lr            (float) Learning rate.
momentum      (float, optional) Momentum factor (default: 0).
dampening     (float, optional) Dampening for momentum (default: 0).
weight_decay  (float, optional) Weight decay (L2 penalty) (default: 0).
nesterov      (bool, optional) Enables Nesterov momentum (default: FALSE).
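A hedged construction example spelling out all of the arguments. The nn_linear model is an assumption made for illustration, and note that lr has no default (optim_required()), so it must always be supplied; in the standard formulation, Nesterov momentum additionally requires momentum > 0 and zero dampening.

library(torch)

fit <- nn_linear(4, 1)  # illustrative model, not from this page

opt <- optim_ignite_sgd(
  fit$parameters,       # list of tensors to optimize
  lr = 0.1,             # no default; must be supplied
  momentum = 0.9,
  dampening = 0,        # Nesterov requires zero dampening
  weight_decay = 1e-4,  # L2 penalty
  nesterov = TRUE       # requires momentum > 0
)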
Fields and Methods

See OptimizerIgnite.
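Assuming the ignite optimizers expose the same state-handling methods as the base torch optimizers (an assumption, not confirmed by this page), checkpointing an optimizer looks like:

# `opt` as constructed above; state_dict()/load_state_dict() are
# assumed to behave as on the base torch optimizers.
sd <- opt$state_dict()    # capture settings and momentum buffers
opt$load_state_dict(sd)   # restore later, e.g. when resuming training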
Examples

if (torch_is_installed()) {
## Not run:
optimizer <- optim_ignite_sgd(model$parameters, lr = 0.1)
optimizer$zero_grad()
loss_fn(model(input), target)$backward()
optimizer$step()
## End(Not run)
}
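Since model, loss_fn, input, and target are left undefined above, here is a self-contained variant (all names and data are assumptions made for illustration): fitting y = 2x from noisy samples with mean squared error.

library(torch)

x <- torch_randn(100, 1)
y <- 2 * x + 0.1 * torch_randn(100, 1)  # noisy linear target

model <- nn_linear(1, 1)
optimizer <- optim_ignite_sgd(model$parameters, lr = 0.1, momentum = 0.9)

for (epoch in 1:20) {
  optimizer$zero_grad()
  loss <- nnf_mse_loss(model(x), y)
  loss$backward()
  optimizer$step()
}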