layer_efficient_attention_2d: Efficient attention layer (2-D)

View source: R/attentionUtilities.R

layer_efficient_attention_2d    R Documentation

Efficient attention layer (2-D)

Description

Wraps the EfficientAttentionLayer2D custom layer, adapted from the Python implementation referenced in the Details section.

Usage

layer_efficient_attention_2d(
  object,
  numberOfFiltersFG = 4L,
  numberOfFiltersH = 8L,
  kernelSize = 1L,
  poolSize = 2L,
  doConcatenateFinalLayers = FALSE,
  trainable = TRUE
)

Arguments

object

Object to compose the layer with. This is either a keras::keras_model_sequential to add the layer to, or another layer which this layer will call.

numberOfFiltersFG

Number of filters for the F and G layers.

numberOfFiltersH

Number of filters for the H layer. If NA, only the F filter is used (for efficiency).

kernelSize

Kernel size used in the convolution layers.

poolSize

Pool size used in the max pooling layer.

doConcatenateFinalLayers

If TRUE, concatenate the final layer with the input; otherwise, add them. Default = FALSE.

trainable

Whether the layer weights are updated during training. Default = TRUE.

Details

Adapted from the Python/TensorFlow implementation at

https://github.com/taki0112/Self-Attention-GAN-Tensorflow

which is based on the paper "Self-Attention Generative Adversarial Networks":

https://arxiv.org/abs/1805.08318
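
Roughly, and in the notation of that paper, the layer computes query, key, and value maps f(x), g(x), and h(x) with convolutions of size kernelSize (numberOfFiltersFG filters for f and g, numberOfFiltersH filters for h), downsamples the key and value maps with max pooling of size poolSize, forms an attention map as softmax( t( f(x) ) %*% g(x) ), applies it to h(x), and then either adds or concatenates the result with the input (see doConcatenateFinalLayers). This summary is only a sketch of the general mechanism; consult the linked implementation for the exact computation.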

Value

A Keras layer tensor.

Examples


## Not run: 
library( keras )
library( ANTsRNet )

# define the model input
inputShape <- c( 100, 100, 3 )
input <- layer_input( shape = inputShape )

# append the efficient attention layer to the input
numberOfFiltersFG <- 64L
outputs <- input %>% layer_efficient_attention_2d( numberOfFiltersFG )

# assemble the model
model <- keras_model( inputs = input, outputs = outputs )

## End(Not run)
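
A further sketch (also not run) exercising the remaining arguments; the filter counts and pool size below are arbitrary illustrative values:

## Not run: 
library( keras )
library( ANTsRNet )

input <- layer_input( shape = c( 100, 100, 3 ) )

# add efficient attention and concatenate (rather than add) the result with the input
outputs <- input %>% layer_efficient_attention_2d(
  numberOfFiltersFG = 32L,
  numberOfFiltersH = 64L,
  kernelSize = 1L,
  poolSize = 2L,
  doConcatenateFinalLayers = TRUE )

model <- keras_model( inputs = input, outputs = outputs )

## End(Not run)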

