MXNetModel | R Documentation
An MXNet SageMaker `Model` that can be deployed to a SageMaker `Endpoint`.
Super classes:
sagemaker.mlcore::ModelBase
-> sagemaker.mlcore::Model
-> sagemaker.mlcore::FrameworkModel
-> MXNetModel
.LOWEST_MMS_VERSION
The lowest Multi Model Server (MMS) MXNet version that can be executed.
new()
Initialize an MXNetModel. A short usage sketch follows the argument descriptions below.
MXNetModel$new(
  model_data,
  role,
  entry_point,
  framework_version = NULL,
  py_version = NULL,
  image_uri = NULL,
  predictor_cls = MXNetPredictor,
  model_server_workers = NULL,
  ...
)
model_data
(str): The S3 location of a SageMaker model data `.tar.gz` file.
role
(str): An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. After the endpoint is created, the inference code might use the IAM role, if it needs to access an AWS resource.
entry_point
(str): Path (absolute or relative) to the Python source file which should be executed as the entry point to model hosting. If `source_dir` is specified, then `entry_point` must point to a file located at the root of `source_dir`.
framework_version
(str): MXNet version you want to use for executing your model training code. Defaults to `NULL`. Required unless `image_uri` is provided.
py_version
(str): Python version you want to use for executing your model training code. Defaults to `NULL`. Required unless `image_uri` is provided.
image_uri
(str): A Docker image URI (default: `NULL`). If not specified, a default image for MXNet is used. If `framework_version` or `py_version` is `NULL`, then `image_uri` is required; if `image_uri` is also `NULL`, a `ValueError` is raised.
predictor_cls
(callable[str, sagemaker.session.Session]): A function to call to create a predictor with an endpoint name and SageMaker `Session`. If specified, `deploy()` returns the result of invoking this function on the created endpoint name.
model_server_workers
(int): Optional. The number of worker processes used by the inference server. If `NULL`, the server uses one worker per vCPU.
...
: Keyword arguments passed to the superclass `FrameworkModel` and, subsequently, its superclass `Model`.
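A minimal construction and deployment sketch, as referenced above. It assumes MXNetModel is loaded from the sagemaker.mlframework package and that `deploy()` is inherited from the `Model` superclass; the S3 path, role ARN, entry-point script, version strings, and instance settings are illustrative placeholders, not values from this documentation.

# Sketch only: all paths, ARNs, file names, and versions are hypothetical.
library(sagemaker.mlframework)

mxnet_model <- MXNetModel$new(
  model_data = "s3://my-bucket/mxnet/model.tar.gz",       # trained model artifact
  role = "arn:aws:iam::111122223333:role/SageMakerRole",  # execution role
  entry_point = "inference.py",                           # hosting entry point
  framework_version = "1.8.0",
  py_version = "py37"
)

# deploy() comes from the Model superclass; because predictor_cls defaults to
# MXNetPredictor, the returned object is an MXNetPredictor.
predictor <- mxnet_model$deploy(
  initial_instance_count = 1,
  instance_type = "ml.m5.xlarge"
)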
prepare_container_def()
Return a container definition with framework configuration set in model environment variables.
MXNetModel$prepare_container_def(instance_type = NULL, accelerator_type = NULL)
instance_type
(str): The EC2 instance type to deploy this Model to. For example, 'ml.p2.xlarge'.
accelerator_type
(str): The Elastic Inference accelerator type to deploy to the instance for loading and making inferences to the model. For example, 'ml.eia1.medium'.
Returns: A container definition object (dict[str, str]) usable with the CreateModel API.
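A sketch of calling this method directly, reusing the `mxnet_model` object from the construction example above; the instance type is illustrative.

# Illustrative: build the container definition passed to the CreateModel API.
# The framework configuration (entry point, model server workers, etc.) is
# taken from the model object's own settings.
container_def <- mxnet_model$prepare_container_def(instance_type = "ml.m5.xlarge")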
serving_image_uri()
Create a URI for the serving image.
MXNetModel$serving_image_uri(region_name, instance_type, accelerator_type = NULL)
region_name
(str): AWS region where the image is uploaded.
instance_type
(str): SageMaker instance type. Used to determine device type (cpu/gpu/family-specific optimized).
accelerator_type
(str): The Elastic Inference accelerator type to deploy to the instance for loading and making inferences to the model (default: None). For example, 'ml.eia1.medium'.
Returns: The appropriate image URI (str) based on the given parameters.
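A sketch of resolving the serving image URI, again reusing the `mxnet_model` object constructed above; the region and instance type are illustrative.

# Illustrative: look up the default MXNet serving image for a region and
# instance type; framework_version and py_version come from the model object.
image <- mxnet_model$serving_image_uri(
  region_name = "us-east-1",
  instance_type = "ml.m5.xlarge"
)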
clone()
The objects of this class are cloneable with this method.
MXNetModel$clone(deep = FALSE)
deep
Whether to make a deep clone.
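A brief example of the standard R6 clone method, using the hypothetical `mxnet_model` object from the earlier sketch.

# Illustrative: deep-clone the R6 object so later changes to the copy do not
# affect the original model object.
model_copy <- mxnet_model$clone(deep = TRUE)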