activation_gelu: Gelu


View source: R/activations.R

Description

Gaussian Error Linear Unit.

Usage

activation_gelu(x, approximate = TRUE)

Arguments

x

A 'Tensor'. Must be one of the following types: 'float16', 'float32', 'float64'.

approximate

bool, whether to enable the tanh approximation.

Details

Computes the Gaussian error linear unit (GELU): '0.5 * x * (1 + tanh(sqrt(2 / pi) * (x + 0.044715 * x^3)))' (approximate form) or 'x * P(X <= x) = 0.5 * x * (1 + erf(x / sqrt(2)))' (exact form), where X ~ N(0, 1), depending on whether approximation is enabled. See Gaussian Error Linear Units (GELUs) (https://arxiv.org/abs/1606.08415) and BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (https://arxiv.org/abs/1810.04805).
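
For illustration, the two forms can be written directly in base R. This is a minimal sketch, not the tfaddons implementation; since base R has no erf(), the identity 'erf(x / sqrt(2)) = 2 * pnorm(x) - 1' is used.

# Approximate (tanh-based) form
gelu_approx <- function(x) 0.5 * x * (1 + tanh(sqrt(2 / pi) * (x + 0.044715 * x^3)))
# Exact form, via erf(x / sqrt(2)) = 2 * pnorm(x) - 1
gelu_exact <- function(x) 0.5 * x * (1 + (2 * pnorm(x) - 1))

# Compare the two on a few points
x <- seq(-3, 3, by = 1)
cbind(x, approx = gelu_approx(x), exact = gelu_exact(x))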

Value

A 'Tensor'. Has the same type as 'x'.


Examples

## Not run: 
library(keras)
library(tfaddons)

# Use GELU as the activation of a convolutional layer
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 10, kernel_size = c(3, 3), input_shape = c(28, 28, 1),
                activation = activation_gelu)

## End(Not run)
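
The function can also be applied directly to a tensor. The following is a minimal sketch for illustration (assuming a working TensorFlow installation); 'approximate = FALSE' selects the exact erf-based form.

## Not run: 
library(tensorflow)
library(tfaddons)

# Apply GELU directly to a float32 tensor; approximate = FALSE uses the erf form
x <- tf$constant(c(-1, 0, 1), dtype = "float32")
activation_gelu(x, approximate = FALSE)

## End(Not run)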
