createDenseNetModel3D: 3-D implementation of the DenseNet deep learning architecture

View source: R/createDenseNetModel.R

createDenseNetModel3D          R Documentation

3-D implementation of the DenseNet deep learning architecture.

Description

Creates a keras model of the DenseNet deep learning architecture for image recognition, based on the paper cited in the Details section below.

Usage

createDenseNetModel3D(
  inputImageSize,
  numberOfOutputs = 1000,
  numberOfFilters = 16,
  depth = 7,
  numberOfDenseBlocks = 1,
  growthRate = 12,
  dropoutRate = 0.2,
  weightDecay = 1e-04,
  mode = "classification"
)

Arguments

inputImageSize

Used for specifying the input tensor shape. The shape (or dimension) of that tensor is the image dimensions followed by the number of channels (e.g., red, green, and blue). The batch size (i.e., number of training images) is not specified a priori.

numberOfOutputs

Number of outputs, e.g., classification labels (default = 1000).

numberOfFilters

Number of filters (default = 16).

depth

Number of layers; must be equal to 3 * N + 4 where N is an integer (the default of 7 corresponds to N = 1).

numberOfDenseBlocks

Number of dense blocks to add to the end (default = 1).

growthRate

Number of filters to add for each dense block layer (default = 12).

dropoutRate

Dropout rate per dropout layer (default = 0.2).

weightDecay

Weight decay (default = 1e-4).

mode

Either 'classification' or 'regression' (default = 'classification').
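
As a quick sketch of how these arguments fit together (the input size and number of outputs below are illustrative, not defaults):

# Hypothetical 3-D classification model for single-channel 64 x 64 x 64 volumes
model <- createDenseNetModel3D( inputImageSize = c( 64, 64, 64, 1 ),
  numberOfOutputs = 2, mode = "classification" )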

Details

G. Huang, Z. Liu, K. Weinberger, and L. van der Maaten. Densely Connected Convolutional Networks.

available here:

    https://arxiv.org/abs/1608.06993

This particular implementation was influenced by the following Python implementation:

    https://github.com/tdeboissiere/DeepLearningImplementations/blob/master/DenseNet/densenet.py

Value

A DenseNet keras model.

Author(s)

Tustison NJ

Examples


## Not run: 

library( ANTsRNet )
library( keras )

mnistData <- dataset_mnist()
numberOfLabels <- 10

# Extract a small subset for something that can run quickly

X_trainSmall <- mnistData$train$x[1:10,,]
X_trainSmall <- array( data = X_trainSmall, dim = c( dim( X_trainSmall ), 1 ) )
Y_trainSmall <- to_categorical( mnistData$train$y[1:10], numberOfLabels )

X_testSmall <- mnistData$test$x[1:10,,]
X_testSmall <- array( data = X_testSmall, dim = c( dim( X_testSmall ), 1 ) )
Y_testSmall <- to_categorical( mnistData$test$y[1:10], numberOfLabels )

# We add a dimension of 1 to specify the channel size

inputImageSize <- c( dim( X_trainSmall )[2:3], 1 )

model <- createDenseNetModel2D( inputImageSize = inputImageSize,
  numberOfOutputs = numberOfLabels )

model %>% compile( loss = 'categorical_crossentropy',
  optimizer = optimizer_adam( lr = 0.0001 ),
  metrics = c( 'categorical_crossentropy', 'accuracy' ) )

track <- model %>% fit( X_trainSmall, Y_trainSmall, verbose = 1,
  epochs = 1, batch_size = 2, shuffle = TRUE, validation_split = 0.5 )

# Now test the model

testingMetrics <- model %>% evaluate( X_testSmall, Y_testSmall )
predictedData <- model %>% predict( X_testSmall, verbose = 1 )
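
# The example above uses the 2-D variant since MNIST images are 2-D.  A minimal
# sketch of the 3-D variant documented on this page follows, using synthetic
# data and illustrative sizes (not a recommended configuration).

X_trainSmall3D <- array( rnorm( 10 * 32 * 32 * 32 ), dim = c( 10, 32, 32, 32, 1 ) )
Y_trainSmall3D <- to_categorical( sample( 0:1, 10, replace = TRUE ), 2 )

model3D <- createDenseNetModel3D( inputImageSize = c( 32, 32, 32, 1 ),
  numberOfOutputs = 2 )

model3D %>% compile( loss = 'categorical_crossentropy',
  optimizer = optimizer_adam( lr = 0.0001 ),
  metrics = c( 'accuracy' ) )

track3D <- model3D %>% fit( X_trainSmall3D, Y_trainSmall3D, verbose = 1,
  epochs = 1, batch_size = 2 )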


## End(Not run)
