createResNetModel3D: 3-D implementation of the ResNet deep learning architecture.

View source: R/createResNetModel.R

createResNetModel3D {ANTsRNet}    R Documentation

3-D implementation of the ResNet deep learning architecture.

Description

Creates a keras model of the ResNet deep learning architecture for image classification. The reference paper is listed in the Details section.

Usage

createResNetModel3D(
  inputImageSize,
  inputScalarsSize = 0,
  numberOfOutputs = 1000,
  layers = 1:4,
  residualBlockSchedule = c(3, 4, 6, 3),
  lowestResolution = 64,
  cardinality = 1,
  squeezeAndExcite = FALSE,
  mode = c("classification", "regression")
)

Arguments

inputImageSize

Used for specifying the input tensor shape. The shape (or dimension) of that tensor is the image dimensions followed by the number of channels (e.g., red, green, and blue). The batch size (i.e., number of training images) is not specified a priori.

inputScalarsSize

Optional integer specifying the size of the input vector for scalars that get concatenated to the fully connected layer at the end of the network.

numberOfOutputs

Number of outputs in the final layer (e.g., the number of classification labels).

layers

A vector determining the number of 'filters' defined at each layer.

residualBlockSchedule

A vector defining how many times each residual block repeats.

lowestResolution

Number of filters at the initial layer.

cardinality

Perform ResNet (cardinality = 1) or ResNeXt (cardinality != 1; should be a power of 2, e.g., 32).

squeezeAndExcite

Boolean specifying whether to add the squeeze-and-excite block variant.

mode

Either 'classification' or 'regression'.
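
A minimal sketch of how these arguments combine is given below. The 64x64x64 single-channel input shape, three output classes, and shortened residual block schedule are illustrative assumptions, not package defaults.

library( ANTsRNet )

model <- createResNetModel3D( inputImageSize = c( 64, 64, 64, 1 ),
  numberOfOutputs = 3,
  residualBlockSchedule = c( 2, 2, 2, 2 ),  # fewer block repeats than the default c( 3, 4, 6, 3 )
  lowestResolution = 32,
  cardinality = 1,                          # e.g., 32 for a ResNeXt variant
  mode = "classification" )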

Details

    https://arxiv.org/abs/1512.03385

This particular implementation was influenced by the following Python implementation:

    https://gist.github.com/mjdietzx/0cb95922aac14d446a6530f87b3a04ce
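
For orientation, a simplified standalone 3-D residual block is sketched below in keras. This is an illustrative example of the general building block, not the package's internal code; among other things, the package's blocks also handle projection shortcuts, cardinality (ResNeXt), and the optional squeeze-and-excite variant.

library( keras )

# Identity-shortcut residual block for 3-D inputs (illustrative sketch only).
# Assumes the input tensor already has `numberOfFilters` channels so that the
# element-wise addition is valid.
residualBlock3D <- function( input, numberOfFilters )
  {
  block <- input %>%
    layer_conv_3d( filters = numberOfFilters, kernel_size = c( 3, 3, 3 ),
      padding = 'same' ) %>%
    layer_batch_normalization() %>%
    layer_activation( 'relu' ) %>%
    layer_conv_3d( filters = numberOfFilters, kernel_size = c( 3, 3, 3 ),
      padding = 'same' ) %>%
    layer_batch_normalization()

  output <- layer_add( list( block, input ) ) %>%
    layer_activation( 'relu' )
  return( output )
  }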

Value

A ResNet keras model.

Author(s)

Tustison NJ

Examples


## Not run: 

library( ANTsRNet )
library( keras )

mnistData <- dataset_mnist()
numberOfLabels <- 10

# Extract a small subset for something that can run quickly

X_trainSmall <- mnistData$train$x[1:10,,]
X_trainSmall <- array( data = X_trainSmall, dim = c( dim( X_trainSmall ), 1 ) )
Y_trainSmall <- to_categorical( mnistData$train$y[1:10], numberOfLabels )

X_testSmall <- mnistData$test$x[1:10,,]
X_testSmall <- array( data = X_testSmall, dim = c( dim( X_testSmall ), 1 ) )
Y_testSmall <- to_categorical( mnistData$test$y[1:10], numberOfLabels )

# We add a dimension of 1 to specify the channel size

inputImageSize <- c( dim( X_trainSmall )[2:3], 1 )

model <- createResNetModel2D( inputImageSize = inputImageSize,
  numberOfOutputs = numberOfLabels )

model %>% compile( loss = 'categorical_crossentropy',
  optimizer = optimizer_adam( learning_rate = 0.0001 ),
  metrics = c( 'categorical_crossentropy', 'accuracy' ) )

track <- model %>% fit( X_trainSmall, Y_trainSmall, verbose = 1,
  epochs = 1, batch_size = 2, shuffle = TRUE, validation_split = 0.5 )

# Now test the model

testingMetrics <- model %>% evaluate( X_testSmall, Y_testSmall )
predictedData <- model %>% predict( X_testSmall, verbose = 1 )
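
# The example above uses the analogous 2-D builder with MNIST for speed.  A
# 3-D counterpart can be constructed in the same way; the random arrays below
# are hypothetical stand-ins for real volumetric data and are not part of the
# original example.

X_volumes <- array( rnorm( 4 * 32 * 32 * 32 ), dim = c( 4, 32, 32, 32, 1 ) )
Y_volumes <- to_categorical( sample( 0:1, 4, replace = TRUE ), 2 )

model3D <- createResNetModel3D( inputImageSize = c( 32, 32, 32, 1 ),
  numberOfOutputs = 2,
  residualBlockSchedule = c( 1, 1, 1, 1 ),
  lowestResolution = 16 )

model3D %>% compile( loss = 'categorical_crossentropy',
  optimizer = optimizer_adam( learning_rate = 0.0001 ),
  metrics = c( 'accuracy' ) )

track3D <- model3D %>% fit( X_volumes, Y_volumes,
  epochs = 1, batch_size = 2 )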


## End(Not run)
