createUnetModel3D: 3-D image segmentation implementation of the U-net deep learning architecture

View source: R/createUnetModel.R

createUnetModel3D    R Documentation

3-D image segmentation implementation of the U-net deep learning architecture.

Description

Creates a keras model of the U-net deep learning architecture for image segmentation and regression. More information is provided at the authors' website and in the references listed in the Details section.

Usage

createUnetModel3D(
  inputImageSize,
  numberOfOutputs = 2,
  scalarOutputSize = 0,
  scalarOutputActivation = "relu",
  numberOfLayers = 4,
  numberOfFiltersAtBaseLayer = 32,
  numberOfFilters = NULL,
  convolutionKernelSize = c(3, 3, 3),
  deconvolutionKernelSize = c(2, 2, 2),
  poolSize = c(2, 2, 2),
  strides = c(2, 2, 2),
  dropoutRate = 0,
  weightDecay = 0,
  mode = c("classification", "regression", "sigmoid"),
  additionalOptions = NA
)

Arguments

inputImageSize

Used for specifying the input tensor shape. The shape (or dimension) of that tensor is the image dimensions followed by the number of channels (e.g., red, green, and blue). The batch size (i.e., number of training images) is not specified a priori.

numberOfOutputs

Meaning depends on the mode. For 'classification' this is the number of segmentation labels. For 'regression' this is the number of outputs.

scalarOutputSize

If greater than 0, a global average pooling from each encoding layer is concatenated to a dense layer as a secondary (scalar) output; see the configuration sketch following this argument list.

scalarOutputActivation

activation function applied to the scalar output (used when scalarOutputSize is greater than 0).

numberOfLayers

number of encoding/decoding layers.

numberOfFiltersAtBaseLayer

number of filters at the beginning and end of the 'U'. Doubles at each descending/ascending layer.

numberOfFilters

vector explicitly setting the number of filters at each layer. One can set either this or numberOfLayers and numberOfFiltersAtBaseLayer.

convolutionKernelSize

3-d vector defining the kernel size during the encoding path.

deconvolutionKernelSize

3-d vector defining the kernel size during the decoding path.

poolSize

3-d vector defining the region for each pooling layer.

strides

3-d vector describing the stride length in each direction.

dropoutRate

float between 0 and 1 to use between dense layers.

weightDecay

weighting parameter for L2 regularization of the kernel weights of the convolution layers. Default = 0.0.

mode

'classification', 'regression', or 'sigmoid'.

additionalOptions

string or vector of strings specifying configuration add-ons/tweaks.
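As a sketch of how these arguments fit together (the argument values below are illustrative, assuming only that the ANTsRNet and keras packages are installed), one might configure a two-channel 3-D U-net with an explicit filter schedule and a secondary scalar output:

library( ANTsRNet )
library( keras )

# Illustrative configuration (values are arbitrary):
#   two-channel 64^3 input, three segmentation labels, an explicit filter
#   schedule in place of numberOfLayers/numberOfFiltersAtBaseLayer, and a
#   2-element scalar output drawn from the encoding path.
model <- createUnetModel3D(
  inputImageSize = c( 64, 64, 64, 2 ),
  numberOfOutputs = 3,
  numberOfFilters = c( 16, 32, 64, 128 ),
  scalarOutputSize = 2,
  mode = "classification" )

# With scalarOutputSize > 0 the returned model is expected to have two
# outputs: the voxelwise class probabilities and the scalar (dense) output.
print( model )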

Details

The U-net architecture is described at the authors' website:

    https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/

with the paper available here:

    https://arxiv.org/abs/1505.04597

This particular implementation was influenced by the following python implementation:

    https://github.com/joelthelion/ultrasound-nerve-segmentation

Value

a U-net Keras model

Author(s)

Tustison NJ

Examples

# Simple examples, must run successfully and quickly. These will be tested.

library( ANTsRNet )
library( keras )

model <- createUnetModel3D( c( 64, 64, 64, 1 ) )

# Wrap ANTsRNet's multilabel Dice coefficient as a named Keras metric.
metric_multilabel_dice_coefficient <-
  custom_metric( "multilabel_dice_coefficient",
    multilabel_dice_coefficient )

# Negate the Dice coefficient so that minimizing the loss maximizes overlap.
loss_dice <- function( y_true, y_pred ) {
  -multilabel_dice_coefficient( y_true, y_pred )
}
attr( loss_dice, "py_function_name" ) <- "multilabel_dice_coefficient"

model %>% compile( loss = loss_dice,
  optimizer = optimizer_adam( learning_rate = 0.0001 ),
  metrics = list( metric_multilabel_dice_coefficient ) )

print( model )
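
# An illustrative prediction sketch: the input array shape is the batch size
# followed by inputImageSize, and the default classification model above is
# expected to yield per-voxel probabilities over numberOfOutputs = 2 labels.
X <- array( rnorm( 64 * 64 * 64 ), dim = c( 1, 64, 64, 64, 1 ) )
probabilities <- model %>% predict( X )
str( probabilities )  # e.g., 1 x 64 x 64 x 64 x 2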

