Description
A linear module implements a linear transformation:

Z = W^T A + W_0

specified by a weight matrix W and a bias vector W0. Each linear module has a forward method that takes in a batch of activations A (from the previous layer) and returns a batch of pre-activations Z.
Each linear module has a backward method that takes in dLdZ and returns dLdA. This module also computes and stores dLdW and dLdW0, the gradients of the loss with respect to the weights and the bias.
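The forward and backward computations described above can be sketched with plain matrices (a sketch of the math only, not the package source; shapes follow the documented m x b and n x b conventions):

```r
m <- 3; n <- 2; b <- 4               # input dim, output dim, batch size
W  <- matrix(rnorm(m * n), m, n)     # weight matrix (m x n)
W0 <- matrix(rnorm(n), n, 1)         # bias vector (n x 1)
A  <- matrix(rnorm(m * b), m, b)     # batch of input activations (m x b)

# forward: Z = W^T A + W0, with the bias broadcast across the batch
Z <- t(W) %*% A + matrix(W0, n, b)

# backward: given dLdZ (n x b), compute the three gradients
dLdZ  <- matrix(rnorm(n * b), n, b)
dLdA  <- W %*% dLdZ                  # (m x b), returned to the previous layer
dLdW  <- A %*% t(dLdZ)               # (m x n), stored for the SGD step
dLdW0 <- rowSums(dLdZ)               # length n, summed over the batch
```

Note how the shapes line up: dLdA is (m x n)(n x b) = m x b, matching A, and dLdW is (m x b)(b x n) = m x n, matching W.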
Super class
neuralnetr::ClassModule -> Linear
Methods

new(): initialize the weights.
Linear$new(m, n)
m: the input dimension m of the module.
n: the output dimension n of the module.
forward(): do one forward pass.
Linear$forward(A)
A: the input activations (m x b).
Returns Z, the pre-activations (n x b).
backward(): do one gradient step backward.
Linear$backward(dLdZ)
dLdZ: the gradient of the loss with respect to Z (n x b).
Returns dLdA, the gradient of the loss with respect to A (m x b).
sgd_step(): update the weights using stochastic gradient descent.
Linear$sgd_step(lrate)
lrate: the learning rate.
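The update implied by the description above can be written out as follows (a sketch under the assumption that sgd_step uses the gradients stored by the most recent backward call; the helper name sgd_update is hypothetical, not part of the package):

```r
# Hypothetical helper illustrating the SGD update on the stored
# gradients dLdW and dLdW0 (names follow the documentation).
sgd_update <- function(W, W0, dLdW, dLdW0, lrate) {
  list(W  = W  - lrate * dLdW,    # step against the weight gradient
       W0 = W0 - lrate * dLdW0)   # step against the bias gradient
}
```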
clone(): the objects of this class are cloneable with this method.
Linear$clone(deep = FALSE)
deep: whether to make a deep clone.
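Putting the methods together, a typical training step might look like this (a usage sketch assuming the neuralnetr package is installed and the Linear API matches the signatures documented above):

```r
library(neuralnetr)

lin <- Linear$new(m = 3, n = 2)      # 3 input units, 2 output units
A   <- matrix(rnorm(3 * 4), 3, 4)    # batch of 4 input activations (3 x 4)
Z   <- lin$forward(A)                # pre-activations (2 x 4)

dLdZ <- matrix(rnorm(2 * 4), 2, 4)   # gradient arriving from the next layer
dLdA <- lin$backward(dLdZ)           # gradient passed back (3 x 4)
lin$sgd_step(lrate = 0.01)           # update W and W0 using the stored gradients
```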
See Also
Other architecture:
BatchNorm,
Sequential