BatchNorm: Batch Normalization

Description

Avoid internal covariate shift by standardizing the input values for each mini-batch, so that the scale of the inputs stays the same regardless of how the weights in the previous layer change.
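
As an illustration of that standardization (a sketch only, not code taken from the package), the quantities listed under Public fields below could be computed for one mini-batch in base R roughly as follows; the value of eps and the use of G as a scale and B as a shift are assumptions based on the usual batch-norm convention:

# illustration only: one m x K mini-batch with m = 3 channels and K = 5 samples
A    <- matrix(rnorm(3 * 5), nrow = 3, ncol = 5)
eps  <- 1e-7                              # assumed value of the small constant
mus  <- rowMeans(A)                       # batch-wise means, one per channel
vars <- rowMeans((A - mus)^2)             # batch-wise variances, one per channel
Z    <- (A - mus) / sqrt(vars + eps)      # batch-wise normalized inputs
G    <- rep(1, 3)                         # assumed: scale weights (gamma)
B    <- rep(0, 3)                         # assumed: shift weights (beta)
out  <- G * Z + B                         # scaled and shifted output

BatchNorm$forward() presumably carries out an equivalent computation while storing mus, vars and the normalized inputs in the corresponding fields; the quantity computed as Z above is what the batch-wise active binding below describes.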

Super class

neuralnetr::ClassModule -> BatchNorm

Public fields

eps

small constant to avoid division by zero.

m

number of input channels

B

weights (nl × 1 vector)

G

weights (nl × 1 vector)

A

input to the layer, stored as an m × K matrix: m input channels and mini-batch size K.

K

mini-batch size.

mus

batch-wise means.

vars

batch-wise variances.

Active bindings

batch-wise

normalized inputs, computed from the input A using mus, vars and eps.

Methods

Method new()

Usage
BatchNorm$new(m, seed)

Method forward()

Usage
BatchNorm$forward(A)

Method backward()

Usage
BatchNorm$backward(dLdZ)

Method step()

Usage
BatchNorm$step(lrate)

Method clone()

The objects of this class are cloneable with this method.

Usage
BatchNorm$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.
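
A minimal end-to-end usage sketch, for illustration only. The calls follow the Usage entries above; the shape of dLdZ, the return values of forward() and backward(), and the learning rate are assumptions rather than documented behaviour:

library(neuralnetr)

m  <- 3                                  # number of input channels
K  <- 8                                  # mini-batch size
bn <- BatchNorm$new(m, seed = 1)         # seed assumed to fix the weight initialization

A    <- matrix(rnorm(m * K), m, K)       # m x K mini-batch
Z    <- bn$forward(A)                    # normalized (and re-scaled) activations
dLdZ <- matrix(rnorm(m * K), m, K)       # assumed upstream gradient, same shape as Z
dLdA <- bn$backward(dLdZ)                # gradient propagated to the previous layer
bn$step(lrate = 0.01)                    # gradient step on B and G with an assumed lrate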

Note

Work in progress.

Source

Ioffe, S. & Szegedy, C. (2015), Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, https://arxiv.org/abs/1502.03167 / MIT

See Also

Other architecture: Linear, Sequential

