View source: R/createAlexNetModel.R
createAlexNetModel3D | R Documentation

Description

Creates a keras model of the AlexNet deep learning architecture for image recognition, based on the paper by Krizhevsky, Sutskever, and Hinton cited under Details below.

Usage
createAlexNetModel3D(
inputImageSize,
numberOfOutputs = 1000,
numberOfDenseUnits = 4096,
dropoutRate = 0,
mode = c("classification", "regression"),
batch_size = NULL
)
Arguments

inputImageSize
  Used for specifying the input tensor shape. The shape (or dimension) of that tensor is the image dimensions followed by the number of channels (e.g., red, green, and blue). The batch size (i.e., number of training images) is not specified a priori.

numberOfOutputs
  Number of output units in the final layer (e.g., classification labels). Default = 1000.

numberOfDenseUnits
  Number of units in each fully connected (dense) layer. Default = 4096.

dropoutRate
  Optional regularization parameter between 0 and 1. Default = 0.

mode
  'classification' or 'regression'. Default = 'classification'.

batch_size
  Batch size to pass to the first layer.
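As a quick orientation (not taken from the package documentation), the following minimal sketch constructs the 3-D model directly; the volume size and two-class output are illustrative assumptions only.

library( ANTsRNet )
# Illustrative 3-D input: a 64 x 64 x 64 volume with a single channel
inputImageSize <- c( 64, 64, 64, 1 )
# Assumed two-class classification head
model <- createAlexNetModel3D( inputImageSize = inputImageSize,
                               numberOfOutputs = 2,
                               mode = "classification" )
summary( model )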
Details

A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. Available here:

http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf

This particular implementation was influenced by the following Python implementations:

https://github.com/duggalrahul/AlexNet-Experiments-Keras/
https://github.com/lunardog/convnets-keras/
Value

An AlexNet keras model.
Author(s)

Tustison NJ

Examples
## Not run:
library( ANTsRNet )
library( keras )
mnistData <- dataset_mnist()
numberOfLabels <- 10
# Extract a small subset for something that can run quickly
X_trainSmall <- mnistData$train$x[1:100,,]
X_trainSmall <- array( data = X_trainSmall, dim = c( dim( X_trainSmall ), 1 ) )
Y_trainSmall <- to_categorical( mnistData$train$y[1:100], numberOfLabels )
X_testSmall <- mnistData$test$x[1:10,,]
X_testSmall <- array( data = X_testSmall, dim = c( dim( X_testSmall ), 1 ) )
Y_testSmall <- to_categorical( mnistData$test$y[1:10], numberOfLabels )
rm( mnistData ); gc()
# We add a dimension of 1 to specify the channel size
inputImageSize <- c( dim( X_trainSmall )[2:3], 1 )
model <- createAlexNetModel2D( inputImageSize = inputImageSize,
numberOfOutputs = numberOfLabels )
model %>% compile( loss = 'categorical_crossentropy',
optimizer = optimizer_adam( lr = 0.0001 ),
metrics = c( 'categorical_crossentropy', 'accuracy' ) )
gc()
track <- model %>% fit( X_trainSmall, Y_trainSmall, verbose = 1,
epochs = 2, batch_size = 20, shuffle = TRUE, validation_split = 0.25 )
# Now test the model
testingMetrics <- model %>% evaluate( X_testSmall, Y_testSmall )
predictedData <- model %>% predict( X_testSmall, verbose = 1 )
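# predictedData holds per-class probabilities; an illustrative conversion
# (not part of the original example) back to digit labels, noting that
# R column indices are 1-based:
predictedLabels <- max.col( predictedData ) - 1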
rm(model); gc()
## End(Not run)
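The constructor also supports a regression head through the mode argument. The sketch below is a hedged illustration rather than package-documented usage; the single continuous output and mean-squared-error loss are assumptions.

## Not run:
library( ANTsRNet )
library( keras )
inputImageSize <- c( 28, 28, 1 )
# Assumed setup: one continuous target value per image
regressionModel <- createAlexNetModel2D( inputImageSize = inputImageSize,
                                         numberOfOutputs = 1,
                                         mode = "regression" )
regressionModel %>% compile( loss = 'mse',
                             optimizer = optimizer_adam() )
## End(Not run)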