add_residual | Add residual connection and project if feature dimensions do... |
add_timing_signal_1d | Add timing signal to a tensor. |
apply_normalization | Applies specified normalization type to input x |
combine_heads | Inverse of split_heads. |
combine_last_two_dimensions | Reshape x so that the last two dimensions become one. |
compute_attention_component | antecedent: Tensor with shape [batch, length, channels]... |
compute_bahdanau_score | Vanilla Bahdanau (additive) score, without normalization |
compute_luong_score | Vanilla Luong (multiplicative) score, without the scale weight |
compute_qkv | query [batch, length_q, channels] memory [batch, length_m,... |
conv_relu_conv | Hidden layer with RELU activation followed by linear... |
create_qkv | Takes input tensor of shape [batch, seqlen, channels] and... |
dense_relu_dense | Hidden layer with RELU activation followed by linear... |
dot-compute_attention_component | antecedent: Tensor with shape [batch, length, channels]... |
dot_product_attention_1d | Input query, key, and value matrices are used to compute dot... |
embedding_to_padding | Calculates the padding mask based on which embeddings are all... |
get_timing_signal_1d | Gets a timing signal for a given length and number of... |
layer_add_residual | Adds residual information to current output |
layer_apply_normalization | Apply normalization function to input tensor |
layer_compute_qkv | Split input into query, key, value matrices in preparation... |
layer_compute_qkv_v2 | Split input into query, key, value matrices in preparation... |
layer_dense_relu_dense | Hidden layer with RELU activation followed by linear... |
layer_dot_product_attention_1d | Input query, key, and value matrices are used to compute dot... |
layer_feed_forward | Feed forward layer for transformer encoder |
layer_local_attention_1d | Strided block local self-attention. |
layer_multihead_attention | Lambda layer implementation of multihead_attention |
layer_normalization | R create_layer wrapper for keras LayerNormalization() |
layer_postprocess | Postprocess layer output by applying a sequence of functions |
layer_prepost_process | Apply a sequence of functions to the input or output of a... |
layer_preprocess | Preprocess layer input by applying a sequence of functions |
layer_self_attention_simple | Simplified self-attention layer. Expects shape(x) == (batch,... |
local_attention_1d | Strided block local self-attention. |
multihead_attention | Multihead attention mechanism query [batch, seqlen, depth_q]... |
reshape_by_blocks | Reshape input by splitting length over blocks of... |
sepconv_relu_sepconv | Hidden layer with RELU activation followed by linear... |
shape_list | Grabs list of tensor dims statically, where possible. |
shape_list2 | Can we cheat and call value on Dimension class object without... |
split_heads | Split channels (dimension 2) into multiple heads (becomes... |
split_last_dimension | Reshape x so that the last dimension becomes two dimensions. |
transformer_encoder | Define Transformer encoder function |
transformer_encoder_v2 | Define Transformer encoder function |
transformer_encoder_v3 | Define Transformer encoder function with lambda layer... |
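The head splitting and recombining used throughout the multihead attention functions above (split_heads, combine_heads, split_last_dimension, combine_last_two_dimensions) amounts to a reshape plus a transpose. A minimal NumPy sketch of the operation these functions perform — an illustration only, not the package's R/Keras implementation:

```python
import numpy as np

def split_heads(x, num_heads):
    # [batch, length, channels] -> [batch, num_heads, length, channels // num_heads]
    batch, length, channels = x.shape
    x = x.reshape(batch, length, num_heads, channels // num_heads)
    return x.transpose(0, 2, 1, 3)

def combine_heads(x):
    # Inverse of split_heads:
    # [batch, num_heads, length, depth] -> [batch, length, num_heads * depth]
    batch, num_heads, length, depth = x.shape
    return x.transpose(0, 2, 1, 3).reshape(batch, length, num_heads * depth)
```

Applying combine_heads after split_heads recovers the original tensor, which is why the two appear as a matched pair around the per-head attention computation.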
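dot_product_attention_1d and its lambda-layer variant compute scaled dot-product attention: softmax(q kᵀ / √depth) applied to v. A NumPy sketch of that computation, under the assumption that the package follows the standard formulation (the R implementation may differ in masking and dropout details):

```python
import numpy as np

def dot_product_attention_1d(q, k, v):
    # q: [batch, length_q, depth], k/v: [batch, length_kv, depth]
    depth = q.shape[-1]
    logits = q @ k.transpose(0, 2, 1) / np.sqrt(depth)
    # Numerically stable softmax over the key dimension.
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v  # [batch, length_q, depth]
```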
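get_timing_signal_1d and add_timing_signal_1d inject position information via sinusoids at geometrically spaced timescales. A sketch assuming the standard tensor2tensor-style formulation (timescale range and channel layout here are illustrative assumptions):

```python
import numpy as np

def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
    # Returns [length, channels]: sin terms in the first half, cos in the second.
    position = np.arange(length, dtype=np.float64)
    num_timescales = channels // 2
    log_increment = np.log(max_timescale / min_timescale) / max(num_timescales - 1, 1)
    inv_timescales = min_timescale * np.exp(-log_increment * np.arange(num_timescales))
    scaled_time = position[:, None] * inv_timescales[None, :]
    return np.concatenate([np.sin(scaled_time), np.cos(scaled_time)], axis=1)
```

add_timing_signal_1d then simply adds this signal (broadcast over the batch dimension) to the input tensor.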
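embedding_to_padding marks positions whose embedding vectors are entirely zero, so attention can ignore padding. A NumPy sketch of that mask computation (the exact output convention of the R function is an assumption here; 1.0 marks padding in this sketch):

```python
import numpy as np

def embedding_to_padding(emb):
    # emb: [batch, length, depth] -> [batch, length]
    # 1.0 where the embedding vector is all zeros (padding), else 0.0.
    return (np.abs(emb).sum(axis=-1) == 0).astype(np.float64)
```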
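reshape_by_blocks underpins the strided block-local attention in local_attention_1d: the length dimension is split into fixed-size blocks so attention can be restricted to nearby positions. A sketch for the case where length divides evenly into blocks (remainder padding, which the package presumably handles, is omitted):

```python
import numpy as np

def reshape_by_blocks(x, block_length):
    # [batch, length, depth] -> [batch, length // block_length, block_length, depth]
    # Assumes length is a multiple of block_length.
    batch, length, depth = x.shape
    return x.reshape(batch, length // block_length, block_length, depth)
```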