createUnetModel2D: 2-D implementation of the U-net deep learning architecture.

View source: R/createUnetModel.R

Description

Creates a Keras model of the U-net deep learning architecture for image segmentation and regression. More information is provided at the authors' website (see Details).

Usage

createUnetModel2D(
  inputImageSize,
  numberOfOutputs = 2,
  scalarOutputSize = 0,
  scalarOutputActivation = "relu",
  numberOfLayers = 4,
  numberOfFiltersAtBaseLayer = 32,
  numberOfFilters = NULL,
  convolutionKernelSize = c(3, 3),
  deconvolutionKernelSize = c(2, 2),
  poolSize = c(2, 2),
  strides = c(2, 2),
  dropoutRate = 0,
  weightDecay = 0,
  mode = c("classification", "regression", "sigmoid"),
  additionalOptions = NA
)

Arguments

inputImageSize

Used for specifying the input tensor shape. The shape (or dimension) of that tensor is the image dimensions followed by the number of channels (e.g., red, green, and blue). The batch size (i.e., number of training images) is not specified a priori.
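As a sketch of the convention described above, a 2-D input is specified as the spatial dimensions followed by the channel count (the 256x256 size here is illustrative, not required):

```r
# Single-channel (grayscale) 256x256 input; the batch size is not included
inputImageSize <- c( 256, 256, 1 )

# Three-channel (e.g., red, green, blue) input of the same spatial size
inputImageSizeRGB <- c( 256, 256, 3 )
```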

numberOfOutputs

Meaning depends on the mode. For 'classification' this is the number of segmentation labels. For 'regression' this is the number of outputs.

scalarOutputSize

if greater than 0, global average pooling is applied to each encoding layer, and the pooled features are concatenated and passed to a dense layer as a secondary scalar output.
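A minimal sketch of a dual-output model, assuming a hypothetical use case where a 3-element scalar prediction is produced alongside the segmentation (the specific sizes and activation are illustrative):

```r
library( ANTsRNet )

# Segmentation output plus a secondary 3-element scalar output
# pooled from the encoding path
dualModel <- createUnetModel2D( c( 256, 256, 1 ),
  numberOfOutputs = 3,
  scalarOutputSize = 3,
  scalarOutputActivation = "linear" )
```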

scalarOutputActivation

activation function for the scalar output (used only when scalarOutputSize is greater than 0).

numberOfLayers

number of encoding/decoding layers.

numberOfFiltersAtBaseLayer

number of filters at the beginning and end of the 'U'. Doubles at each descending/ascending layer.

numberOfFilters

vector explicitly setting the number of filters at each layer. One can either set this or numberOfLayers and numberOfFiltersAtBaseLayer. Default = NULL.
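Assuming the doubling rule described under numberOfFiltersAtBaseLayer, the following two calls should yield the same filter schedule (a sketch, not a definitive equivalence test):

```r
library( ANTsRNet )

# Doubling rule: 4 layers starting at 32 filters gives 32, 64, 128, 256
model1 <- createUnetModel2D( c( 256, 256, 1 ),
  numberOfLayers = 4,
  numberOfFiltersAtBaseLayer = 32 )

# The same schedule set explicitly
model2 <- createUnetModel2D( c( 256, 256, 1 ),
  numberOfFilters = c( 32, 64, 128, 256 ) )
```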

convolutionKernelSize

2-d vector defining the kernel size during the encoding path.

deconvolutionKernelSize

2-d vector defining the kernel size during the decoding.

poolSize

2-d vector defining the region for each pooling layer.

strides

2-d vector describing the stride length in each direction.

dropoutRate

float between 0 and 1 to use between dense layers.

weightDecay

weighting parameter for L2 regularization of the kernel weights of the convolution layers. Default = 0.0.

mode

'classification' or 'regression' or 'sigmoid'.
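The mode determines the activation of the final layer. A hedged sketch of the three variants (my understanding of the intended behavior: softmax for 'classification', linear for 'regression', per-voxel sigmoid for 'sigmoid'):

```r
library( ANTsRNet )

# 'classification': softmax over numberOfOutputs segmentation labels
segModel <- createUnetModel2D( c( 256, 256, 1 ),
  numberOfOutputs = 3, mode = "classification" )

# 'regression': linear output for continuous-valued prediction
regModel <- createUnetModel2D( c( 256, 256, 1 ),
  numberOfOutputs = 1, mode = "regression" )

# 'sigmoid': per-voxel sigmoid, e.g., for a binary mask
binModel <- createUnetModel2D( c( 256, 256, 1 ),
  numberOfOutputs = 1, mode = "sigmoid" )
```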

additionalOptions

string or vector of strings specifying configuration add-ons/tweaks.

Details

    https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/

with the paper available here:

    https://arxiv.org/abs/1505.04597

This particular implementation was influenced by the following Python implementation:

    https://github.com/joelthelion/ultrasound-nerve-segmentation

Value

a U-net Keras model

Author(s)

Tustison NJ

Examples


library( ANTsR )
library( ANTsRNet )
library( keras )

imageIDs <- c( "r16", "r27", "r30", "r62", "r64", "r85" )
trainingBatchSize <- length( imageIDs )

# Perform simple 3-tissue segmentation.

segmentationLabels <- c( 1, 2, 3 )
numberOfLabels <- length( segmentationLabels )
initialization <- paste0( 'KMeans[', numberOfLabels, ']' )

domainImage <- antsImageRead( getANTsRData( imageIDs[1] ) )

X_train <- array( data = NA, dim = c( trainingBatchSize, dim( domainImage ), 1 ) )
Y_train <- array( data = NA, dim = c( trainingBatchSize, dim( domainImage ) ) )

images <- list()
segmentations <- list()

for( i in seq_len( trainingBatchSize ) )
  {
  cat( "Processing image", imageIDs[i], "\n" )
  image <- antsImageRead( getANTsRData( imageIDs[i] ) )
  mask <- getMask( image )
  segmentation <- atropos( image, mask, initialization )$segmentation

  X_train[i,,, 1] <- as.array( image )
  Y_train[i,,] <- as.array( segmentation )
  }
Y_train <- encodeUnet( Y_train, segmentationLabels )

# Perform a simple normalization

X_train <- ( X_train - mean( X_train ) ) / sd( X_train )

# Create the model

model <- createUnetModel2D( c( dim( domainImage ), 1 ),
  numberOfOutputs = numberOfLabels )

metric_multilabel_dice_coefficient <-
  custom_metric( "multilabel_dice_coefficient",
    multilabel_dice_coefficient )

loss_dice <- function( y_true, y_pred ) {
  -multilabel_dice_coefficient(y_true, y_pred)
}
attr(loss_dice, "py_function_name") <- "multilabel_dice_coefficient"

model %>% compile( loss = loss_dice,
  optimizer = optimizer_adam( learning_rate = 0.0001 ),
  metrics = metric_multilabel_dice_coefficient )

# Comment out the rest due to travis build constraints

# Fit the model

# track <- model %>% fit( X_train, Y_train,
#              epochs = 100, batch_size = 5, verbose = 1, shuffle = TRUE,
#              callbacks = list(
#                callback_model_checkpoint( "unetModelInterimWeights.h5",
#                    monitor = 'val_loss', save_best_only = TRUE ),
#                callback_reduce_lr_on_plateau( monitor = "val_loss", factor = 0.1 )
#              ),
#              validation_split = 0.2 )

# Save the model and/or save the model weights

# save_model_hdf5( model, filepath = 'unetModel.h5' )
# save_model_weights_hdf5( model, filepath = 'unetModelWeights.h5' )


ANTsX/ANTsRNet documentation built on Nov. 21, 2024, 4:07 a.m.