layer_efficient_attention_3d: Efficient attention layer (3-D)

View source: R/attentionUtilities.R


Efficient attention layer (3-D)

Description

Wraps the EfficientAttentionLayer3D custom layer, adapted from the Python implementation referenced in the Details section.

Usage

layer_efficient_attention_3d(
  object,
  numberOfFiltersFG = 4L,
  numberOfFiltersH = 8L,
  kernelSize = 1L,
  poolSize = 2L,
  doConcatenateFinalLayers = FALSE,
  trainable = TRUE
)

Arguments

object

Object to compose the layer with. This is either a keras::keras_model_sequential to add the layer to, or another layer which this layer will call.

numberOfFiltersFG

number of filters for the F and G layers.

numberOfFiltersH

number of filters for the H layer. If NA, only the F filter is used, for efficiency.

kernelSize

kernel size in convolution layer.

poolSize

pool size in max pool layer.

doConcatenateFinalLayers

whether to concatenate the final layer with the input (TRUE) or add the two (FALSE). Default = FALSE. See the sketch following this argument list.

trainable

whether the layer weights are updated during training. Default = TRUE.

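The two combination modes affect the output shape differently: adding (the default) is expected to preserve the input shape, while concatenating grows the channel dimension. A minimal sketch, assuming keras and ANTsRNet are loaded and channels-last tensors; the shapes in the comments are illustrative, not guaranteed by the layer:

input <- layer_input( shape = c( 64, 64, 64, 8 ) )

# add mode (default):  output shape expected to match the input
added <- input %>% layer_efficient_attention_3d( numberOfFiltersFG = 4L )

# concatenate mode:  channel dimension expected to grow
concatenated <- input %>% layer_efficient_attention_3d( numberOfFiltersFG = 4L,
  doConcatenateFinalLayers = TRUE )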
Details

Adapted from the Python implementation at https://github.com/taki0112/Self-Attention-GAN-Tensorflow, which is based on the paper "Self-Attention Generative Adversarial Networks" (https://arxiv.org/abs/1805.08318).

Value

a Keras layer tensor

Examples


## Not run: 
library( keras )
library( ANTsRNet )

# Define a 3-D input ( 100 x 100 x 100 voxels, 3 channels ).
inputShape <- c( 100, 100, 100, 3 )
input <- layer_input( shape = inputShape )

# Apply efficient attention with 64 filters for the F and G layers.
numberOfFiltersFG <- 64L
outputs <- input %>% layer_efficient_attention_3d( numberOfFiltersFG )

# Compose the model.
model <- keras_model( inputs = input, outputs = outputs )

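# A second, hypothetical configuration:  skip the H filter
# ( numberOfFiltersH = NA ) for a lighter layer and concatenate the
# attention output with the input instead of adding it.
outputs2 <- input %>% layer_efficient_attention_3d( numberOfFiltersFG = 32L,
  numberOfFiltersH = NA, doConcatenateFinalLayers = TRUE )
model2 <- keras_model( inputs = input, outputs = outputs2 )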
## End(Not run)

