Description
Avoids internal covariate shift by standardizing the input values of each mini-batch, so that the scale of the inputs remains the same regardless of how the weights in the previous layer change.
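For each input channel j and batch element k, the layer standardizes the mini-batch and then rescales it with the learnable weights G and B. A sketch of the standard batch-normalization equations from the source paper, with symbols matching the fields documented below (the exact placement of eps in the package code is an assumption):

  mus[j]     = (1/K) * sum_k A[j, k]
  vars[j]    = (1/K) * sum_k (A[j, k] - mus[j])^2
  norm[j, k] = (A[j, k] - mus[j]) / sqrt(vars[j] + eps)
  Z[j, k]    = G[j] * norm[j, k] + B[j]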
Super class

neuralnetr::ClassModule -> BatchNorm
Public fields

eps: small constant to avoid division by zero.

m: number of input channels.

B: shift weights (nl × 1 vector).

G: scale weights (nl × 1 vector).

A: m × K matrix: m input channels, mini-batch size K.

K: mini-batch size.
Active bindings

mus: batch-wise means.

vars: batch-wise variances.

norm: batch-wise normalized inputs, computed from the input A, mus, vars and eps (see the sketch after this list).
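A minimal base-R sketch of how these quantities relate, assuming a biased (divide-by-K) variance; variable names mirror the fields above, and the random mini-batch is illustrative only:

  m <- 3; K <- 5
  A <- matrix(rnorm(m * K), nrow = m)    # m input channels, mini-batch size K
  eps <- 1e-7                            # small constant to avoid division by zero
  mus <- rowMeans(A)                     # batch-wise means, one per channel
  vars <- rowMeans((A - mus)^2)          # batch-wise (biased) variances
  norm <- (A - mus) / sqrt(vars + eps)   # batch-wise normalized inputs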
Methods

Method new()
Usage: BatchNorm$new(m, seed)

Method forward()
Usage: BatchNorm$forward(A)

Method backward()
Usage: BatchNorm$backward(dLdZ)

Method step()
Usage: BatchNorm$step(lrate)
Method clone()
The objects of this class are cloneable with this method.
Usage: BatchNorm$clone(deep = FALSE)
Arguments: deep: Whether to make a deep clone.
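A hedged end-to-end usage sketch, using only the signatures documented above; that forward() returns the layer output and backward() returns the gradient with respect to the input are assumptions, as are all numeric values:

  library(neuralnetr)
  bn <- BatchNorm$new(m = 3, seed = 42)        # 3 input channels
  A <- matrix(rnorm(3 * 5), nrow = 3)          # mini-batch with K = 5 columns
  Z <- bn$forward(A)                           # standardize, then scale/shift with G and B
  dLdA <- bn$backward(dLdZ = matrix(1, 3, 5))  # illustrative upstream gradient
  bn$step(lrate = 0.01)                        # gradient step on the learnable weights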
Note

Work in progress.
Source

Ioffe, S. & Szegedy, C. (2015), "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", https://arxiv.org/abs/1502.03167 / MIT
See Also

Other architecture: Linear, Sequential