application_convnext_base (R Documentation)
Instantiates the ConvNeXtBase architecture.

Usage:
application_convnext_base(
  model_name = "convnext_base",
  include_top = TRUE,
  include_preprocessing = TRUE,
  weights = "imagenet",
  input_tensor = NULL,
  input_shape = NULL,
  pooling = NULL,
  classes = 1000L,
  classifier_activation = "softmax"
)

Arguments:
model_name: String, name for the model. Defaults to "convnext_base".

include_top: Whether to include the fully-connected layer at the top of the network. Defaults to TRUE.

include_preprocessing: Boolean, whether to include the preprocessing layer at the bottom of the network. Defaults to TRUE.

weights: One of NULL (random initialization), "imagenet" (pre-training on ImageNet-1k), or the path to a weights file to be loaded. Defaults to "imagenet".

input_tensor: Optional Keras tensor (i.e. output of layer_input()) to use as image input for the model.

input_shape: Optional shape tuple, only to be specified if include_top is FALSE. It should have exactly 3 input channels.

pooling: Optional pooling mode for feature extraction when include_top is FALSE. NULL means the output of the model will be the 4D tensor output of the last convolutional layer. "avg" means global average pooling will be applied to the output of the last convolutional layer, so the output of the model will be a 2D tensor. "max" means global max pooling will be applied.

classes: Optional number of classes to classify images into, only to be specified if include_top is TRUE. Defaults to 1000 (the number of ImageNet classes).

classifier_activation: A string or callable. The activation function to use on the "top" layer. Ignored unless include_top = TRUE. Set classifier_activation = NULL to return the logits of the "top" layer. Defaults to "softmax". When loading pretrained weights, classifier_activation can only be NULL or "softmax".
Value: A model instance.
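As a minimal sketch of basic usage (assuming the keras3 package and a working backend are installed; the ImageNet weights are downloaded on first use), the default classifier can be instantiated and applied to an image-sized array:

```r
library(keras3)

# Instantiate the full ImageNet classifier (include_top = TRUE by default).
model <- application_convnext_base(weights = "imagenet")

# One 224x224 RGB "image" of raw pixel values in [0, 255]; the model's
# built-in Normalization layer handles preprocessing.
x <- array(runif(224 * 224 * 3, 0, 255), dim = c(1, 224, 224, 3))

preds <- model |> predict(x)
dim(preds)  # 1 x 1000: softmax probabilities over the ImageNet classes
```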
Reference: A ConvNet for the 2020s (CVPR 2022).
For image classification use cases, see this page for detailed examples. For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning.
The base, large, and xlarge models were first pre-trained on the ImageNet-21k dataset and then fine-tuned on the ImageNet-1k dataset. The pre-trained parameters of the models were assembled from the official repository. To get a sense of how these parameters were converted to Keras-compatible parameters, please refer to this repository.
Each Keras Application expects a specific kind of input preprocessing. For ConvNeXt, preprocessing is included in the model using a Normalization layer. ConvNeXt models expect their inputs to be float or uint8 tensors of pixels with values in the [0-255] range.
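Because normalization is built into the model, raw pixel values can be passed directly, including when using the network as a feature extractor. A sketch (assuming keras3 is installed):

```r
library(keras3)

# Headless model: include_top = FALSE drops the classifier, and
# pooling = "avg" yields one feature vector per image.
base <- application_convnext_base(
  include_top = FALSE,
  pooling = "avg",
  input_shape = c(224, 224, 3)
)

# Raw uint8-range pixels; no manual rescaling or mean subtraction needed.
imgs <- array(sample(0:255, 2 * 224 * 224 * 3, replace = TRUE),
              dim = c(2, 224, 224, 3))
features <- base |> predict(imgs)  # 2 x feature_dim matrix
```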
When calling the summary() method after instantiating a ConvNeXt model, prefer setting the expand_nested argument of summary() to TRUE to better investigate the instantiated model.
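For example (assuming keras3 is installed), the nested ConvNeXt stages are collapsed in the default summary and shown in full with expand_nested:

```r
library(keras3)

model <- application_convnext_base()

# By default, nested sub-models (the ConvNeXt stages) appear as single rows;
# expand_nested = TRUE prints the layers inside each of them.
summary(model, expand_nested = TRUE)
```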