Batch Normalization Function that normalizes the input before applying the non-linearity

Description

This function normalizes the distribution of the inputs to a hidden layer in a neural network.

Usage

batch_normalization(x, gamma, beta, mu = NULL, sigma_2 = NULL,
  epsilon = exp(-12))

Arguments

x

the weighted sum of outputs from the previous layer

gamma

the gamma coefficient (learned scale applied after normalization)

beta

the beta coefficient (learned shift applied after normalization)

mu

the mean of the input neurons. If NULL, it will be calculated in the function.

sigma_2

the variance of the input neurons. If NULL, it will be calculated in the function.

epsilon

a constant added to the variance for numerical stability
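The transform the arguments describe can be sketched in plain R as follows. This is a minimal illustration, not the package's implementation: the name `batch_norm_sketch` is hypothetical, and the batch statistics here are assumed to follow Algorithm 1 of the Ioffe and Szegedy paper cited below (population variance over the batch).

```r
# Hypothetical sketch of batch normalization; argument names mirror the
# documented signature, but the package internals may differ.
batch_norm_sketch <- function(x, gamma, beta, mu = NULL, sigma_2 = NULL,
                              epsilon = exp(-12)) {
  # Default mu and sigma_2 to the batch statistics when not supplied
  if (is.null(mu))      mu      <- mean(x)
  if (is.null(sigma_2)) sigma_2 <- mean((x - mu)^2)
  # Normalize, then scale by gamma and shift by beta
  x_hat <- (x - mu) / sqrt(sigma_2 + epsilon)
  gamma * x_hat + beta
}

# Example: a two-element batch normalizes to roughly c(-1, 1),
# which gamma = 2 and beta = 1 then rescale and shift
batch_norm_sketch(c(-1, 1), gamma = 2, beta = 1)
```

Passing `mu` and `sigma_2` explicitly corresponds to inference time, where stored population statistics replace the per-batch ones.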

References

Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

See Also

http://jmlr.org/proceedings/papers/v37/ioffe15.pdf, p. 4
