Man pages for ifrit98/transformR
An R Implementation of Transformer Model with Self Attention

add_residual: Add a residual connection and project if feature dimensions do...
add_timing_signal_1d: Add a timing signal to a tensor.
apply_normalization: Apply the specified normalization type to input x.
combine_heads: Inverse of split_heads.
combine_last_two_dimensions: Reshape x so that the last two dimensions become one.
compute_attention_component: Compute an attention component from an antecedent tensor with shape [batch, length, channels]...
compute_bahdanau_score: Vanilla Bahdanau (additive) score, without normalization.
compute_luong_score: Vanilla Luong (multiplicative) score, without a scale weight.
compute_qkv: query [batch, length_q, channels], memory [batch, length_m,...
conv_relu_conv: Hidden layer with ReLU activation followed by linear...
create_qkv: Takes an input tensor of shape [batch, seqlen, channels] and...
dense_relu_dense: Hidden layer with ReLU activation followed by linear...
dot-compute_attention_component: Compute an attention component from an antecedent tensor with shape [batch, length, channels]...
dot_product_attention_1d: Input query, key, and value matrices are used to compute dot... (see the sketches after this index).
embedding_to_padding: Calculates the padding mask based on which embeddings are all...
get_timing_signal_1d: Gets a timing signal for a given length and number of... (see the sketches after this index).
layer_add_residual: Adds residual information to the current output.
layer_apply_normalization: Apply a normalization function to the input tensor.
layer_compute_qkv: Split input into query, key, and value matrices in preparation...
layer_compute_qkv_v2: Split input into query, key, and value matrices in preparation...
layer_dense_relu_dense: Hidden layer with ReLU activation followed by linear...
layer_dot_product_attention_1d: Input query, key, and value matrices are used to compute dot...
layer_feed_forward: Feed-forward layer for the transformer encoder.
layer_local_attention_1d: Strided block local self-attention.
layer_multihead_attention: Lambda layer implementation of multihead_attention.
layer_normalization: R create_layer wrapper for keras LayerNormalization().
layer_postprocess: Postprocess layer output by applying a sequence of functions.
layer_prepost_process: Apply a sequence of functions to the input or output of a...
layer_preprocess: Preprocess layer input by applying a sequence of functions.
layer_self_attention_simple: Simplified self-attention layer; expects shape(x) == (batch,...
local_attention_1d: Strided block local self-attention.
multihead_attention: Multihead attention mechanism; query [batch, seqlen, depth_q]...
reshape_by_blocks: Reshape input by splitting length over blocks of...
sepconv_relu_sepconv: Hidden layer with ReLU activation followed by linear...
shape_list: Grabs a list of tensor dims statically, where possible.
shape_list2: Can we cheat and call value on a Dimension class object without...
split_heads: Split channels (dimension 2) into multiple heads (becomes... (see the sketches after this index).
split_last_dimension: Reshape x so that the last dimension becomes two dimensions.
transformer_encoder: Define the Transformer encoder function.
transformer_encoder_v2: Define the Transformer encoder function.
transformer_encoder_v3: Define the Transformer encoder function with a lambda layer...
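
For orientation, here is a minimal sketch of the sinusoidal timing signal described by get_timing_signal_1d() and add_timing_signal_1d(), written directly against the R tensorflow API rather than calling the package. The function name timing_signal_demo() and its arguments (length, channels, min/max timescale) are assumptions modelled on the tensor2tensor function of the same name, not this package's documented signature.

```r
library(tensorflow)

timing_signal_demo <- function(length, channels,
                               min_timescale = 1, max_timescale = 1e4) {
  # channels is assumed even so the sin and cos halves concatenate cleanly
  position       <- tf$cast(tf$range(length), tf$float32)
  num_timescales <- channels %/% 2L
  log_increment  <- log(max_timescale / min_timescale) / max(num_timescales - 1L, 1L)
  inv_timescales <- min_timescale *
    tf$exp(tf$cast(tf$range(num_timescales), tf$float32) * -log_increment)
  # Outer product of positions and inverse timescales: [length, num_timescales]
  scaled_time <- tf$expand_dims(position, 1L) * tf$expand_dims(inv_timescales, 0L)
  signal <- tf$concat(list(tf$sin(scaled_time), tf$cos(scaled_time)), axis = -1L)
  tf$expand_dims(signal, 0L)  # [1, length, channels], broadcastable over batch
}

x     <- tf$zeros(shape(2L, 10L, 16L))         # [batch, length, channels]
x_pos <- x + timing_signal_demo(10L, 16L)      # adds position information
```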
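Similarly, a sketch of the reshaping that split_heads() and combine_heads() describe: the channel dimension is split into (num_heads, channels / num_heads) and transposed next to the batch dimension, and combine_heads() undoes it. This is illustrative tensorflow-for-R code assuming static shapes, not the package's implementation.

```r
library(tensorflow)

split_heads_demo <- function(x, num_heads) {
  d     <- dim(x)                               # c(batch, length, channels)
  depth <- d[3] %/% num_heads
  x <- tf$reshape(x, shape(d[1], d[2], num_heads, depth))
  tf$transpose(x, perm = c(0L, 2L, 1L, 3L))     # [batch, heads, length, depth]
}

combine_heads_demo <- function(x) {
  d <- dim(x)                                   # c(batch, heads, length, depth)
  x <- tf$transpose(x, perm = c(0L, 2L, 1L, 3L))
  tf$reshape(x, shape(d[1], d[3], d[2] * d[4])) # back to [batch, length, channels]
}

x <- tf$random$normal(shape(2L, 5L, 8L))
h <- split_heads_demo(x, num_heads = 2L)        # shape [2, 2, 5, 4]
y <- combine_heads_demo(h)                      # shape [2, 5, 8]
```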
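Finally, a sketch of the computation named by dot_product_attention_1d(): scaled dot-product attention, softmax(q k^T / sqrt(depth)) v, with the shapes given in the entries above. Again this is a standalone illustration against the tensorflow R API, not the package's own function or signature.

```r
library(tensorflow)

dot_product_attention_demo <- function(q, k, v) {
  depth   <- dim(q)[length(dim(q))]                      # size of the last dimension
  logits  <- tf$matmul(q, k, transpose_b = TRUE) / sqrt(depth)
  weights <- tf$nn$softmax(logits)                       # attention distribution over keys
  tf$matmul(weights, v)
}

q <- tf$random$normal(shape(2L, 5L, 8L))    # [batch, length_q, depth]
k <- tf$random$normal(shape(2L, 7L, 8L))    # [batch, length_kv, depth]
v <- tf$random$normal(shape(2L, 7L, 8L))
out <- dot_product_attention_demo(q, k, v)  # [2, 5, 8]
```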
ifrit98/transformR documentation built on Nov. 26, 2019, 2:14 a.m.