View source: R/attentionUtilities.R
layer_efficient_attention_3d		R Documentation
Description

Wraps the EfficientAttentionLayer3D, modified from the python implementation referenced under Details.

Usage
layer_efficient_attention_3d(
  object,
  numberOfFiltersFG = 4L,
  numberOfFiltersH = 8L,
  kernelSize = 1L,
  poolSize = 2L,
  doConcatenateFinalLayers = FALSE,
  trainable = TRUE
)
Arguments

object
    Object to compose the layer with. This is either a keras::keras_model_sequential to add the layer to, or another Layer which this layer will call (both composition styles are sketched below).

numberOfFiltersFG
    number of filters for the F and G layers.

numberOfFiltersH
    number of filters for the H layer. If ...

kernelSize
    kernel size used in the convolution layers.

poolSize
    pool size used in the max pooling layers.

doConcatenateFinalLayers
    whether to concatenate the final layer with the input; if FALSE, the two are added instead. Default = FALSE.

trainable
    whether the layer weights are updated during training.
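As noted in the object entry, the layer can be composed either functionally with another layer's output or appended to a sequential model. A minimal sketch of both styles follows; the input shapes and filter counts are arbitrary assumptions for illustration.

## Not run:
library( keras )
library( ANTsRNet )

# Functional style: compose with an input (or any other) layer.
input <- layer_input( shape = c( 32, 32, 32, 8 ) )
output <- input %>% layer_efficient_attention_3d( numberOfFiltersFG = 16L )

# Sequential style: append the layer to a keras_model_sequential.
model <- keras_model_sequential() %>%
  layer_conv_3d( filters = 8, kernel_size = 3, padding = "same",
                 input_shape = c( 32, 32, 32, 1 ) ) %>%
  layer_efficient_attention_3d( numberOfFiltersFG = 16L )
## End(Not run)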
Details

Based on the following python implementation

https://github.com/taki0112/Self-Attention-GAN-Tensorflow

which in turn is based on the following paper ("Self-Attention Generative Adversarial Networks"):

https://arxiv.org/abs/1805.08318
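For orientation, the attention mechanism in the referenced paper is built from three learned projections, conventionally named F (queries), G (keys), and H (values), which is where the numberOfFiltersFG and numberOfFiltersH arguments take their names. Below is a conceptual sketch of that computation in plain R, with matrix products over flattened voxels standing in for the 1x1x1 convolutions. It illustrates the mechanism from the paper only and is not the layer's actual implementation (which additionally involves the kernelSize and poolSize arguments, omitted here).

# Conceptual SAGAN-style self-attention over flattened voxels.  A
# hypothetical illustration, not the layer's implementation.
# x:   n x c matrix (n voxels, c channels).
# Wf, Wg:  c x numberOfFiltersFG projections (stand-ins for the F and G
#          1x1x1 convolutions).
# Wh:  c x numberOfFiltersH projection (stand-in for the H convolution).
selfAttentionSketch <- function( x, Wf, Wg, Wh )
  {
  f <- x %*% Wf                                 # queries
  g <- x %*% Wg                                 # keys
  h <- x %*% Wh                                 # values
  logits <- f %*% t( g )                        # n x n voxel-to-voxel similarities
  weights <- exp( logits - apply( logits, 1, max ) )
  attentionMap <- weights / rowSums( weights )  # row-wise softmax
  attentionMap %*% h                            # weighted values, n x numberOfFiltersH
  }

# Toy check: 10 voxels with 3 channels.
x <- matrix( rnorm( 10 * 3 ), nrow = 10 )
Wf <- matrix( rnorm( 3 * 4 ), nrow = 3 )
Wg <- matrix( rnorm( 3 * 4 ), nrow = 3 )
Wh <- matrix( rnorm( 3 * 8 ), nrow = 3 )
dim( selfAttentionSketch( x, Wf, Wg, Wh ) )     # 10 x 8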
Value

a keras layer tensor
Examples

## Not run:
library( keras )
library( ANTsRNet )
inputShape <- c( 100, 100, 100, 3 )
input <- layer_input( shape = inputShape )
numberOfFiltersFG <- 64L
outputs <- input %>% layer_efficient_attention_3d( numberOfFiltersFG )
model <- keras_model( inputs = input, outputs = outputs )
## End(Not run)
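As a hedged extension of the example above, the concatenating variant can be built the same way and compared via a standard keras model summary; the variable names here are illustrative only.

## Not run:
outputsConcatenated <- input %>% layer_efficient_attention_3d(
  numberOfFiltersFG, doConcatenateFinalLayers = TRUE )
modelConcatenated <- keras_model( inputs = input, outputs = outputsConcatenated )
summary( modelConcatenated )  # compare channel counts with the added variant
## End(Not run)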