Description

Batch normalization avoids internal covariate shift by standardizing the input values within each mini-batch, so that the scale of the inputs remains the same regardless of how the weights in the previous layer change.
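The standardization described above can be sketched in plain R (a minimal sketch, not the package implementation; the class additionally applies the learnable scale G and shift B):

```r
# Standardize an m x K input A (m channels, mini-batch size K),
# with a small eps to avoid division by zero.
batch_standardize <- function(A, eps = 1e-5) {
  mus  <- rowMeans(A)            # batch-wise (per-channel) means
  vars <- rowMeans((A - mus)^2)  # batch-wise (per-channel) variances
  (A - mus) / sqrt(vars + eps)   # standardized inputs
}

set.seed(1)
A <- matrix(rnorm(3 * 8, mean = 5, sd = 2), nrow = 3)  # 3 channels, K = 8
Z <- batch_standardize(A)
rowMeans(Z)  # approximately 0 for each channel
```

After standardization each channel of Z has mean approximately 0 and variance approximately 1 over the mini-batch, regardless of the scale of A.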
Super class

neuralnetr::ClassModule -> BatchNorm
Public fields

eps: small constant to avoid division by zero.

m: number of input channels.

B: shift weights (nl × 1 vector).

G: scale weights (nl × 1 vector).

A: input matrix (m × K): m input channels, mini-batch size K.

K: mini-batch size.

mus: batch-wise means.

vars: batch-wise variances.
Active bindings

batch-wise normalized inputs, computed from the input, mus, vars and eps.
Methods

Method new()
Usage: BatchNorm$new(m, seed)

Method forward()
Usage: BatchNorm$forward(A)

Method backward()
Usage: BatchNorm$backward(dLdZ)

Method step()
Usage: BatchNorm$step(lrate)
Method clone()
The objects of this class are cloneable with this method.
Usage: BatchNorm$clone(deep = FALSE)
Arguments: deep: Whether to make a deep clone.
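The forward/backward/step cycle can be illustrated with a plain-R sketch (hypothetical and illustrative, not the package implementation; the gradient with respect to the input, dLdA, is omitted and only the parameter updates for G and B are shown):

```r
m <- 3; K <- 8; eps <- 1e-5; lrate <- 0.1
G <- matrix(1, m, 1)   # per-channel scale, initialized to 1
B <- matrix(0, m, 1)   # per-channel shift, initialized to 0

set.seed(1)
A    <- matrix(rnorm(m * K), m, K)         # m x K mini-batch input
mus  <- rowMeans(A)                        # batch-wise means
vars <- rowMeans((A - mus)^2)              # batch-wise variances
norm <- (A - mus) / sqrt(vars + eps)       # standardized inputs
Z    <- as.vector(G) * norm + as.vector(B) # forward: scale and shift

dLdZ <- matrix(1, m, K)                    # placeholder upstream gradient
dLdG <- rowSums(dLdZ * norm)               # gradient w.r.t. the scale
dLdB <- rowSums(dLdZ)                      # gradient w.r.t. the shift

G <- G - lrate * dLdG                      # step: gradient-descent update
B <- B - lrate * dLdB
```

With this all-ones upstream gradient, dLdB is K per channel, while dLdG is near zero because the standardized inputs have zero batch-wise mean.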
Note

Work in progress.
Source

https://arxiv.org/abs/1502.03167 / MIT
See Also

Other architecture: Linear, Sequential