autograd_function (R Documentation)
Every operation performed on Tensors creates a new function object that
performs the computation and records that it happened. The history is
retained in the form of a DAG of functions, with edges denoting data
dependencies (input <- output). Then, when backward() is called, the graph is
processed in topological order by calling the backward() method of each
Function object and passing the returned gradients on to the next Functions.
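A minimal sketch of this mechanism using standard torch operations (the values
below are arbitrary): tensors created with requires_grad = TRUE record the
operations applied to them, and backward() walks the resulting graph to
accumulate gradients.

if (torch_is_installed()) {
  x <- torch_tensor(2, requires_grad = TRUE)
  y <- x$exp() * 3   # each operation adds a function node to the graph
  y$backward()       # processes the graph, calling each node's backward()
  x$grad             # gradient of 3 * exp(x) w.r.t. x, i.e. 3 * exp(2)
}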
Usage:

autograd_function(forward, backward)
Arguments:

forward    Performs the operation. It must accept a context ctx as its first
           argument, followed by any number of inputs. Tensors needed for the
           backward pass can be stored with ctx$save_for_backward().

backward   Defines a formula for differentiating the operation. It must accept
           a context ctx as its first argument, followed by the gradients of
           the outputs of forward(), and it should return a named list with
           the gradient with respect to each input of forward().
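As a sketch of how this contract extends to several inputs (mul2 and its
argument names are illustrative, not part of the package), backward() returns
one named gradient per input of forward():

if (torch_is_installed()) {
  mul2 <- autograd_function(
    forward = function(ctx, a, b) {
      # save both inputs; they are needed to compute the gradients
      ctx$save_for_backward(a = a, b = b)
      a * b
    },
    backward = function(ctx, grad_output) {
      saved <- ctx$saved_variables
      # one named gradient per input of forward()
      list(a = grad_output * saved$b, b = grad_output * saved$a)
    }
  )
}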
Examples:

if (torch_is_installed()) {
  exp2 <- autograd_function(
    forward = function(ctx, i) {
      result <- i$exp()
      # keep the forward result; it is reused when computing the gradient
      ctx$save_for_backward(result = result)
      result
    },
    backward = function(ctx, grad_output) {
      # d/dx exp(x) = exp(x), i.e. the saved forward result
      list(i = grad_output * ctx$saved_variables$result)
    }
  )
}
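A short usage sketch of the exp2 function defined above (input value chosen
arbitrarily):

if (torch_is_installed()) {
  x <- torch_tensor(1, requires_grad = TRUE)
  y <- exp2(x)   # runs forward(), recording the operation in the graph
  y$backward()   # invokes the custom backward()
  x$grad         # exp(1), the derivative of exp(x) at x = 1
}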