PyTorch | R Documentation
Handle end-to-end training and deployment of custom PyTorch code.
sagemaker.mlcore::EstimatorBase
-> sagemaker.mlcore::Framework
-> PyTorch
.module
The Python module this class mimics.
new()
This “Estimator“ executes a PyTorch script in a managed PyTorch execution environment, within a SageMaker Training Job. The managed PyTorch environment is an Amazon-built Docker container that executes functions defined in the supplied “entry_point“ Python script. Training is started by calling :meth:'~sagemaker.amazon.estimator.Framework.fit' on this Estimator. After training is complete, calling :meth:'~sagemaker.amazon.estimator.Framework.deploy' creates a hosted SageMaker endpoint and returns an :class:'~sagemaker.amazon.pytorch.model.PyTorchPredictor' instance that can be used to perform inference against the hosted model. Technical documentation on preparing PyTorch scripts for SageMaker training and using the PyTorch Estimator is available on the project homepage: https://github.com/aws/sagemaker-python-sdk
PyTorch$new(
  entry_point,
  framework_version = NULL,
  py_version = NULL,
  source_dir = NULL,
  hyperparameters = NULL,
  image_uri = NULL,
  distribution = NULL,
  ...
)
entry_point
(str): Path (absolute or relative) to the Python source file which should be executed as the entry point to training. If “source_dir“ is specified, then “entry_point“ must point to a file located at the root of “source_dir“.
framework_version
(str): PyTorch version you want to use for executing your model training code. Defaults to “None“. Required unless “image_uri“ is provided. List of supported versions: https://github.com/aws/sagemaker-python-sdk#pytorch-sagemaker-estimators.
py_version
(str): Python version you want to use for executing your model training code. One of 'py2' or 'py3'. Defaults to “None“. Required unless “image_uri“ is provided.
source_dir
(str): Path (absolute, relative or an S3 URI) to a directory with any other training source code dependencies aside from the entry point file (default: None). If “source_dir“ is an S3 URI, it must point to a tar.gz file. The structure within this directory is preserved when training on Amazon SageMaker.
hyperparameters
(dict): Hyperparameters that will be used for training (default: None). The hyperparameters are made accessible as a dict[str, str] to the training code on SageMaker. For convenience, this accepts other types for keys and values, but “str()“ will be called to convert them before training.
image_uri
(str): If specified, the estimator will use this image for training and hosting, instead of selecting the appropriate SageMaker official image based on “framework_version“ and “py_version“. It can be an ECR URL or a Docker Hub image and tag. Examples: * “123412341234.dkr.ecr.us-west-2.amazonaws.com/my-custom-image:1.0“ * “custom-image:latest“ If “framework_version“ or “py_version“ is “None“, then “image_uri“ is required. If “image_uri“ is also “None“, a “ValueError“ will be raised.
distribution
(list): A dictionary with information on how to run distributed training (default: None). Currently, the following are supported: distributed training with parameter servers, SageMaker Distributed (SMD) Data and Model Parallelism, and MPI. SMD Model Parallelism can only be used with MPI. To enable the parameter server, use the setup shown in the sketch after this argument list.
...
: Additional kwargs passed to the :class:'~sagemaker.estimator.Framework' constructor.
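The following is a minimal usage sketch, not taken from the package itself: it assumes the “PyTorch“ class is loaded (for example via the “sagemaker.mlframework“ package), that “role“, “instance_count“, and “instance_type“ are accepted through “...“ by the parent “Framework“/“EstimatorBase“ constructors, and that the role ARN, S3 URIs, and instance types are placeholders to replace with your own. It also shows the parameter server setup referenced in the “distribution“ argument above.

# Hypothetical example; all ARNs, S3 URIs, and instance types are placeholders.
estimator <- PyTorch$new(
  entry_point = "train.py",                        # script at the root of source_dir
  source_dir = "src",                              # local directory with the training code
  framework_version = "1.8.1",                     # a supported PyTorch version
  py_version = "py3",
  hyperparameters = list(epochs = 10, lr = 0.001), # converted to strings before training
  distribution = list(parameter_server = list(enabled = TRUE)),  # enable parameter server
  role = "arn:aws:iam::123456789012:role/SageMakerRole",         # passed through ...
  instance_count = 2L,                                           # passed through ...
  instance_type = "ml.p3.2xlarge"                                # passed through ...
)

# Start the training job, then host the trained model behind an endpoint.
estimator$fit(list(training = "s3://my-bucket/pytorch/train"))
predictor <- estimator$deploy(initial_instance_count = 1L, instance_type = "ml.m5.xlarge")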
hyperparameters()
Return hyperparameters used by your custom PyTorch code during model training.
PyTorch$hyperparameters()
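A short sketch, assuming the “estimator“ constructed in the example above; the returned named list contains the values you supplied plus framework-added entries (such as the entry point and submit directory), all stringified for the training job.

hp <- estimator$hyperparameters()
# e.g. the epochs and lr values from the constructor call, alongside framework-added entries
print(hp)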
create_model()
Create a SageMaker “PyTorchModel“ object that can be deployed to an “Endpoint“.
PyTorch$create_model(
  model_server_workers = NULL,
  role = NULL,
  vpc_config_override = "VPC_CONFIG_DEFAULT",
  entry_point = NULL,
  source_dir = NULL,
  dependencies = NULL,
  ...
)
model_server_workers
(int): Optional. The number of worker processes used by the inference server. If None, the server will use one worker per vCPU.
role
(str): The “ExecutionRoleArn“ IAM Role ARN for the “Model“, which is also used during transform jobs. If not specified, the role from the Estimator will be used.
vpc_config_override
(dict[str, list[str]]): Optional override for VpcConfig set on the model. Default: use subnets and security groups from this Estimator. * 'Subnets' (list[str]): List of subnet ids. * 'SecurityGroupIds' (list[str]): List of security group ids.
entry_point
(str): Path (absolute or relative) to the local Python source file which should be executed as the entry point to training. If “source_dir“ is specified, then “entry_point“ must point to a file located at the root of “source_dir“. If not specified, the training entry point is used.
source_dir
(str): Path (absolute or relative) to a directory with any other serving source code dependencies aside from the entry point file. If not specified, the model source directory from training is used.
dependencies
(list[str]): A list of paths to directories (absolute or relative) with any additional libraries that will be exported to the container. If not specified, the dependencies from training are used. This is not supported with "local code" in Local Mode.
...
: Additional kwargs passed to the :class:'~sagemaker.pytorch.model.PyTorchModel' constructor.
sagemaker.pytorch.model.PyTorchModel: A SageMaker “PyTorchModel“ object. See :func:'~sagemaker.pytorch.model.PyTorchModel' for full details.
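A hedged sketch of the typical flow, assuming a fitted “estimator“ as in the earlier example, a hypothetical “inference.py“ serving script, and that the returned “PyTorchModel“ mirrors the Python SDK's “deploy()“ method.

model <- estimator$create_model(
  entry_point = "inference.py",  # hypothetical serving script; defaults to the training entry point
  source_dir = "src",            # serving code directory; defaults to the training source_dir
  model_server_workers = 2L      # otherwise one worker per vCPU
)

# Deploy the model object directly to a real-time endpoint.
predictor <- model$deploy(initial_instance_count = 1L, instance_type = "ml.m5.xlarge")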
clone()
The objects of this class are cloneable with this method.
PyTorch$clone(deep = FALSE)
deep
Whether to make a deep clone.