createVggModel3D    R Documentation

View source: R/createVggModel.R

Description

Creates a keras model of the VGG deep learning architecture for image
recognition, based on the paper by Simonyan and Zisserman (see the
References section below).

Usage
createVggModel3D(
inputImageSize,
numberOfOutputs = 1000,
layers = c(1, 2, 3, 4, 4),
lowestResolution = 64,
convolutionKernelSize = c(3, 3, 3),
poolSize = c(2, 2, 2),
strides = c(2, 2, 2),
numberOfDenseUnits = 4096,
dropoutRate = 0,
style = 19,
mode = c("classification", "regression")
)
Arguments

inputImageSize
    Specifies the input tensor shape: the image dimensions followed by
    the number of channels (e.g., red, green, and blue). The batch size
    (i.e., the number of training images) is not specified a priori.

numberOfOutputs
    Number of outputs (e.g., the number of classification labels).

layers
    A vector determining the number of filters defined for each layer.

lowestResolution
    Number of filters in the first layer.

convolutionKernelSize
    3-D vector defining the kernel size used during the encoding path.

poolSize
    3-D vector defining the region for each pooling layer.

strides
    3-D vector describing the stride length in each direction.

numberOfDenseUnits
    Integer specifying the number of units in the final dense layers.

dropoutRate
    Float between 0 and 1 specifying the dropout rate to use between
    the dense layers.

style
    Either 16 or 19, selecting the VGG16 or VGG19 variant.

mode
    'classification' or 'regression'.
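As a quick orientation, a 3-D model can be constructed directly from the
defaults. This is a minimal sketch, not taken from this page: the input
size and number of outputs are illustrative assumptions, and the
per-layer filter counts are inferred from the defaults, where
lowestResolution * 2^(layers - 1) yields the familiar VGG progression
64, 128, 256, 512, 512.

library( ANTsRNet )
# Hypothetical input: a 64^3 single-channel volume with two output classes
model3D <- createVggModel3D( inputImageSize = c( 64, 64, 64, 1 ),
                             numberOfOutputs = 2,
                             style = 19,
                             mode = "classification" )
summary( model3D )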
References

K. Simonyan and A. Zisserman, Very Deep Convolutional Networks for
Large-Scale Image Recognition, available at
https://arxiv.org/abs/1409.1556

This particular implementation was influenced by the following Python
implementation:
https://gist.github.com/baraldilorenzo/8d096f48a1be4a2d660d
Value

a VGG keras model
Author(s)

Tustison NJ
Examples

# Simple examples, must run successfully and quickly. These will be tested.
# ( Note: the example below exercises the analogous 2-D constructor,
#   createVggModel2D, on MNIST. )
## Not run:
library( ANTsRNet )
library( keras )
mnistData <- dataset_mnist()
numberOfLabels <- 10
# Extract a small subset for something that can run quickly
X_trainSmall <- mnistData$train$x[1:100,,]
X_trainSmall <- array( data = X_trainSmall, dim = c( dim( X_trainSmall ), 1 ) )
Y_trainSmall <- to_categorical( mnistData$train$y[1:100], numberOfLabels )
X_testSmall <- mnistData$test$x[1:10,,]
X_testSmall <- array( data = X_testSmall, dim = c( dim( X_testSmall ), 1 ) )
Y_testSmall <- to_categorical( mnistData$test$y[1:10], numberOfLabels )
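# ( Not in the original example: rescaling intensities to [0, 1] is a
#   common preprocessing step for MNIST and may help training. )
# X_trainSmall <- X_trainSmall / 255
# X_testSmall <- X_testSmall / 255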
# We add a dimension of 1 to specify the channel size
inputImageSize <- c( dim( X_trainSmall )[2:3], 1 )
model <- createVggModel2D( inputImageSize = inputImageSize,
numberOfOutputs = numberOfLabels )
model %>% compile( loss = 'categorical_crossentropy',
optimizer = optimizer_adam( learning_rate = 0.0001 ),
metrics = c( 'categorical_crossentropy', 'accuracy' ) )
track <- model %>% fit( X_trainSmall, Y_trainSmall, verbose = 1,
epochs = 2, batch_size = 20, shuffle = TRUE, validation_split = 0.25 )
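# ( Not in the original example: fit() returns a history object with a
#   plot() method for visualizing the loss and metric curves. )
plot( track )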
# Now test the model
testingMetrics <- model %>% evaluate( X_testSmall, Y_testSmall )
predictedData <- model %>% predict( X_testSmall, verbose = 1 )
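# ( Not in the original example: recover hard class labels from the
#   per-class probabilities; MNIST digits are 0-9, hence the "- 1". )
predictedLabels <- max.col( predictedData ) - 1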
## End(Not run)