createVggModel3D: 3-D implementation of the VGG deep learning architecture.

View source: R/createVggModel.R


3-D implementation of the VGG deep learning architecture.

Description

Creates a Keras model of the VGG deep learning architecture for image recognition, based on the paper cited in the Details section.

Usage

createVggModel3D(
  inputImageSize,
  numberOfOutputs = 1000,
  layers = c(1, 2, 3, 4, 4),
  lowestResolution = 64,
  convolutionKernelSize = c(3, 3, 3),
  poolSize = c(2, 2, 2),
  strides = c(2, 2, 2),
  numberOfDenseUnits = 4096,
  dropoutRate = 0,
  style = 19,
  mode = c("classification", "regression")
)
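A minimal sketch of instantiating the 3-D model (the 64^3 patch size and two-class output here are illustrative choices, not package defaults):

    library( ANTsRNet )

    # Hypothetical volumetric input: a 64^3 single-channel image patch
    inputImageSize <- c( 64, 64, 64, 1 )

    # Two-class VGG16-style classifier
    model <- createVggModel3D( inputImageSize = inputImageSize,
      numberOfOutputs = 2, style = 16, mode = "classification" )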

Arguments

inputImageSize

Specifies the input tensor shape. The shape (or dimension) of that tensor is the image dimensions followed by the number of channels (e.g., red, green, and blue). The batch size (i.e., the number of training images) is not specified a priori.

numberOfOutputs

Number of outputs in the final layer, i.e., the number of classification labels (or, in regression mode, regression outputs).

layers

A vector determining the number of filters defined for each layer.

lowestResolution

Number of filters at the beginning (i.e., in the first convolutional block).
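Assuming the filter count doubles with each entry of layers, as in the reference VGG architecture, the defaults reproduce the standard VGG filter progression:

    # Assumed mapping of the defaults to filters per block:
    #   lowestResolution * 2^( layers - 1 )
    lowestResolution <- 64
    lowestResolution * 2^( c( 1, 2, 3, 4, 4 ) - 1 )
    # 64 128 256 512 512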

convolutionKernelSize

3-D vector defining the kernel size during the encoding path.

poolSize

3-D vector defining the region for each pooling layer.

strides

3-D vector describing the stride length in each direction.

numberOfDenseUnits

Integer specifying the number of units in the final dense layers.

dropoutRate

Float between 0 and 1 specifying the dropout rate to use between the dense layers.

style

'16' or '19' for VGG16 or VGG19, respectively.

mode

'classification' or 'regression'.

Details

K. Simonyan and A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition

available here:

    https://arxiv.org/abs/1409.1556

This particular implementation was influenced by the following Python implementation:

    https://gist.github.com/baraldilorenzo/8d096f48a1be4a2d660d

Value

A Keras model defining the VGG network.

Author(s)

Tustison NJ

Examples

## Not run: 

library( ANTsRNet )
library( keras )

mnistData <- dataset_mnist()
numberOfLabels <- 10

# Extract a small subset for something that can run quickly

X_trainSmall <- mnistData$train$x[1:100,,]
X_trainSmall <- array( data = X_trainSmall, dim = c( dim( X_trainSmall ), 1 ) )
Y_trainSmall <- to_categorical( mnistData$train$y[1:100], numberOfLabels )

X_testSmall <- mnistData$test$x[1:10,,]
X_testSmall <- array( data = X_testSmall, dim = c( dim( X_testSmall ), 1 ) )
Y_testSmall <- to_categorical( mnistData$test$y[1:10], numberOfLabels )

# We add a dimension of 1 to specify the channel size

inputImageSize <- c( dim( X_trainSmall )[2:3], 1 )

# MNIST images are 2-D, so the analogous 2-D model is used here;
# createVggModel3D takes the same arguments with a 4-element inputImageSize.
model <- createVggModel2D( inputImageSize = inputImageSize,
  numberOfOutputs = numberOfLabels )

model %>% compile( loss = 'categorical_crossentropy',
  optimizer = optimizer_adam( learning_rate = 0.0001 ),
  metrics = c( 'categorical_crossentropy', 'accuracy' ) )

track <- model %>% fit( X_trainSmall, Y_trainSmall, verbose = 1,
  epochs = 2, batch_size = 20, shuffle = TRUE, validation_split = 0.25 )

# Now test the model

testingMetrics <- model %>% evaluate( X_testSmall, Y_testSmall )
predictedData <- model %>% predict( X_testSmall, verbose = 1 )
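
# The 3-D model follows the same workflow; with volumetric data one
# would use, e.g. (illustrative shapes, not run here):
# inputImageSize3D <- c( 64, 64, 64, 1 )
# model3D <- createVggModel3D( inputImageSize = inputImageSize3D,
#   numberOfOutputs = numberOfLabels )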


## End(Not run)

ANTsX/ANTsRNet documentation built on Nov. 21, 2024, 4:07 a.m.