autograd_backward | R Documentation
The graph is differentiated using the chain rule. If any of the tensors are
non-scalar (i.e. their data has more than one element) and require gradient,
then the Jacobian-vector product is computed. In this case the function
additionally requires specifying grad_tensors: a list of matching length that
contains the “vector” in the Jacobian-vector product, usually the gradient of
the differentiated function w.r.t. the corresponding tensors (NULL is an
acceptable value for all tensors that don’t need gradient tensors).
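For example, a minimal sketch (the tensor values are illustrative): when the differentiated tensor is non-scalar, a matching grad_tensors entry supplies the “vector” of the Jacobian-vector product.

if (torch_is_installed()) {
  x <- torch_tensor(c(1, 2, 3), requires_grad = TRUE)
  y <- x * 2                     # non-scalar output: a "vector" must be supplied
  v <- torch_tensor(c(1, 1, 1))  # the vector in the Jacobian-vector product
  autograd_backward(list(y), grad_tensors = list(v))
  x$grad                         # 2 2 2, i.e. t(J) %*% v
}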
autograd_backward(
tensors,
grad_tensors = NULL,
retain_graph = create_graph,
create_graph = FALSE
)
tensors | (list of Tensor) – Tensors of which the derivative will be computed.
grad_tensors | (list of Tensor or NULL, optional) – The “vector” in the Jacobian-vector product, usually the gradient w.r.t. each element of the corresponding tensors. NULL values can be specified for scalar tensors or ones that don’t require gradients. If a NULL value is acceptable for all grad_tensors, this argument is optional.
retain_graph | (bool, optional) – If FALSE, the graph used to compute the gradients will be freed. Note that in nearly all cases setting this option to TRUE is not needed and can often be worked around in a much more efficient way. Defaults to the value of create_graph.
create_graph | (bool, optional) – If TRUE, the graph of the derivative will be constructed, allowing higher-order derivatives to be computed. Defaults to FALSE.
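A sketch of what create_graph enables (autograd_grad(), also from the torch package, is used here so the first derivative does not get mixed into x$grad): building the graph of the derivative lets it be differentiated again.

if (torch_is_installed()) {
  x <- torch_tensor(2, requires_grad = TRUE)
  y <- x^3
  # build the graph of the derivative so it can be differentiated again
  g <- autograd_grad(list(y), list(x), create_graph = TRUE)[[1]]
  g                      # dy/dx = 3 * x^2 = 12
  autograd_backward(list(g))
  x$grad                 # d2y/dx2 = 6 * x = 12
}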
This function accumulates gradients in the leaves - you might need to zero them before calling it.
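A sketch of the accumulation behaviour: calling autograd_backward() twice adds the gradients together unless they are zeroed in between (the in-place $zero_() tensor method is assumed here).

if (torch_is_installed()) {
  x <- torch_tensor(1, requires_grad = TRUE)
  autograd_backward(list(2 * x))
  x$grad                # 2
  autograd_backward(list(2 * x))
  x$grad                # 4: gradients accumulate across calls
  x$grad$zero_()        # reset the accumulated gradient in place
  x$grad                # 0
}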
if (torch_is_installed()) {
  x <- torch_tensor(1, requires_grad = TRUE)
  y <- 2 * x

  a <- torch_tensor(1, requires_grad = TRUE)
  b <- 3 * a

  # backpropagate through both graphs in one call;
  # the gradients end up in x$grad and a$grad
  autograd_backward(list(y, b))
}