createWideResNetModel3D: 3-D implementation of the Wide ResNet deep learning architecture

View source: R/createWideResNetModel.R

createWideResNetModel3D    R Documentation

3-D implementation of the Wide ResNet deep learning architecture.

Description

Creates a keras model of the Wide ResNet deep learning architecture for image classification/regression. The reference paper is listed under Details.

Usage

createWideResNetModel3D(
  inputImageSize,
  numberOfOutputs = 1000,
  depth = 2,
  width = 1,
  residualBlockSchedule = c(16, 32, 64),
  poolSize = c(8, 8, 8),
  dropoutRate = 0,
  weightDecay = 5e-04,
  mode = c("classification", "regression")
)
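
As a quick illustration of the call above, the following minimal sketch builds a 3-D classifier. The 64 x 64 x 64 single-channel input size and the two output classes are illustrative assumptions, not defaults.

library( ANTsRNet )

# Build a 3-D Wide ResNet classifier for single-channel volumes
model <- createWideResNetModel3D(
  inputImageSize = c( 64, 64, 64, 1 ),   # illustrative volume size + 1 channel
  numberOfOutputs = 2,                   # illustrative number of classes
  mode = "classification"
)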

Arguments

inputImageSize

Used for specifying the input tensor shape. The shape (or dimension) of that tensor is the image dimensions followed by the number of channels (e.g., red, green, and blue). The batch size (i.e., number of training images) is not specified a priori.

numberOfOutputs

Number of output units: classification labels in classification mode or regression outputs in regression mode. Default = 1000.

depth

integer determining the depth of the network. Related to the actual number of layers by numberOfLayers = depth * 6 + 4 (see the sketch following this argument list). Default = 2 (such that numberOfLayers = 16).

width

integer determining the width of the network, i.e., the widening factor applied to the number of filters in each convolutional block. Default = 1.

residualBlockSchedule

vector determining the number of filters per convolutional block. Default = c( 16, 32, 64 ).

poolSize

pool size for final average pooling layer. Default = c( 8, 8, 8 ).

dropoutRate

Dropout rate (between 0 and 1). Default = 0.0.

weightDecay

weight for the L2 regularizer in the convolution layers. Default = 5e-04.

mode

'classification' or 'regression'.
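
For example, depth = 3 gives numberOfLayers = 3 * 6 + 4 = 22. The sketch below combines several of the arguments above into a regression variant that predicts a single scalar per input volume; all argument values are illustrative choices, not recommendations.

# Regression variant: one continuous output per volume
regressionModel <- createWideResNetModel3D(
  inputImageSize = c( 64, 64, 64, 1 ),
  numberOfOutputs = 1,
  depth = 3,            # numberOfLayers = 3 * 6 + 4 = 22
  width = 2,
  dropoutRate = 0.2,
  weightDecay = 5e-04,
  mode = "regression"
)

Such a model would typically be compiled with a mean squared error loss rather than categorical cross-entropy.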

Details

The paper is available here:

    https://arxiv.org/abs/1605.07146

This particular implementation was influenced by the following Python implementation:

    https://github.com/titu1994/Wide-Residual-Networks

Value

a Wide ResNet keras model

Author(s)

Tustison NJ

Examples


## Not run: 

library( ANTsRNet )
library( keras )

mnistData <- dataset_mnist()
numberOfLabels <- 10

# Extract a small subset for something that can run quickly

X_trainSmall <- mnistData$train$x[1:10,,]
X_trainSmall <- array( data = X_trainSmall, dim = c( dim( X_trainSmall ), 1 ) )
Y_trainSmall <- to_categorical( mnistData$train$y[1:10], numberOfLabels )

X_testSmall <- mnistData$test$x[1:10,,]
X_testSmall <- array( data = X_testSmall, dim = c( dim( X_testSmall ), 1 ) )
Y_testSmall <- to_categorical( mnistData$test$y[1:10], numberOfLabels )

# We add a dimension of 1 to specify the channel size

inputImageSize <- c( dim( X_trainSmall )[2:3], 1 )

model <- createWideResNetModel2D( inputImageSize = inputImageSize,
  numberOfOutputs = numberOfLabels )

model %>% compile( loss = 'categorical_crossentropy',
  optimizer = optimizer_adam( learning_rate = 0.0001 ),
  metrics = c( 'categorical_crossentropy', 'accuracy' ) )

# Comment out the rest due to Travis build constraints

# track <- model %>% fit( X_trainSmall, Y_trainSmall, verbose = 1,
#   epochs = 1, batch_size = 2, shuffle = TRUE, validation_split = 0.5 )

# Now test the model

# testingMetrics <- model %>% evaluate( X_testSmall, Y_testSmall )
# predictedData <- model %>% predict( X_testSmall, verbose = 1 )
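
# The example above exercises the 2-D analogue ( createWideResNetModel2D ) on
# MNIST.  A corresponding 3-D sketch follows; the synthetic 64^3 single-channel
# volumes and random label assignments are illustrative placeholders, not real data.

X_trainSmall3D <- array( rnorm( 10 * 64 * 64 * 64 ), dim = c( 10, 64, 64, 64, 1 ) )
Y_trainSmall3D <- to_categorical( sample( 0:( numberOfLabels - 1 ), 10, replace = TRUE ),
  numberOfLabels )

model3D <- createWideResNetModel3D( inputImageSize = c( 64, 64, 64, 1 ),
  numberOfOutputs = numberOfLabels )

model3D %>% compile( loss = 'categorical_crossentropy',
  optimizer = optimizer_adam( learning_rate = 0.0001 ),
  metrics = c( 'accuracy' ) )

# track3D <- model3D %>% fit( X_trainSmall3D, Y_trainSmall3D, verbose = 1,
#   epochs = 1, batch_size = 2 )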


## End(Not run)
