Chainer
Handle end-to-end training and deployment of custom Chainer code.
sagemaker.mlcore::EstimatorBase
-> sagemaker.mlcore::Framework
-> Chainer
.use_mpi
Entry point is run as an MPI script.
.num_processes
Total number of processes to run the entry point with.
.process_slots_per_host
The number of processes that can run on each instance.
.additional_mpi_options
String of options to the 'mpirun' command used to run the entry point.
.module
Mimics the Python module.
new()
This “Estimator“ executes a Chainer script in a managed Chainer execution environment, within a SageMaker Training Job. The managed Chainer environment is an Amazon-built Docker container that executes functions defined in the supplied “entry_point“ Python script. Training is started by calling :meth:'~sagemaker.amazon.estimator.Framework.fit' on this Estimator. After training is complete, calling :meth:'~sagemaker.amazon.estimator.Framework.deploy' creates a hosted SageMaker endpoint and returns a :class:'~sagemaker.amazon.chainer.model.ChainerPredictor' instance that can be used to perform inference against the hosted model. Technical documentation on preparing Chainer scripts for SageMaker training and using the Chainer Estimator is available on the project homepage: https://github.com/aws/sagemaker-python-sdk
Chainer$new( entry_point, use_mpi = NULL, num_processes = NULL, process_slots_per_host = NULL, additional_mpi_options = NULL, source_dir = NULL, hyperparameters = NULL, framework_version = NULL, py_version = NULL, image_uri = NULL, ... )
entry_point
(str): Path (absolute or relative) to the Python source file which should be executed as the entry point to training. If “source_dir“ is specified, then “entry_point“ must point to a file located at the root of “source_dir“.
use_mpi
(bool): If true, the entry point is run as an MPI script. By default, the Chainer Framework runs the entry point with 'mpirun' if more than one instance is used.
num_processes
(int): Total number of processes to run the entry point with. By default, the Chainer Framework runs one process per GPU (on GPU instances), or one process per host (on CPU instances).
process_slots_per_host
(int): The number of processes that can run on each instance. By default, this is set to the number of GPUs on the instance (on GPU instances), or one (on CPU instances).
additional_mpi_options
(str): String of options to the 'mpirun' command used to run the entry point. For example, '-X NCCL_DEBUG=WARN' will pass that option string to the mpirun command.
source_dir
(str): Path (absolute or relative) to a directory with any other training source code dependencies aside from the entry point file (default: None). Structure within this directory is preserved when training on Amazon SageMaker.
hyperparameters
(dict): Hyperparameters that will be used for training (default: None). The hyperparameters are made accessible as a dict[str, str] to the training code on SageMaker. For convenience, this accepts other types for keys and values, but “str()“ will be called to convert them before training.
framework_version
(str): Chainer version you want to use for executing your model training code. Defaults to “None“. Required unless “image_uri“ is provided. List of supported versions: https://github.com/aws/sagemaker-python-sdk#chainer-sagemaker-estimators.
py_version
(str): Python version you want to use for executing your model training code. Defaults to “None“. Required unless “image_uri“ is provided.
image_uri
(str): If specified, the estimator will use this image for training and hosting, instead of selecting the appropriate SageMaker official image based on framework_version and py_version. It can be an ECR URL or a Docker Hub image and tag. Examples: * “123412341234.dkr.ecr.us-west-2.amazonaws.com/my-custom-image:1.0“ * “custom-image:latest“ If “framework_version“ or “py_version“ are “None“, then “image_uri“ is required. If “image_uri“ is also “None“, a “ValueError“ will be raised.
...
: Additional kwargs passed to the :class:'~sagemaker.estimator.Framework' constructor.
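Example (a minimal sketch): the package name “sagemaker.mlframework“, the script and directory names, the IAM role ARN, and the S3 path are placeholders, and “role“, “instance_count“, and “instance_type“ are assumed to be accepted through “...“ by the parent Framework constructor.
library(sagemaker.mlframework)
# Construct a Chainer estimator; all values below are illustrative only.
chainer_estimator <- Chainer$new(
  entry_point = "train.py",                 # hypothetical training script
  source_dir = "chainer_code",              # hypothetical source directory
  framework_version = "5.0.0",
  py_version = "py3",
  use_mpi = TRUE,                           # run the entry point under 'mpirun'
  num_processes = 2L,                       # total MPI processes
  hyperparameters = list(epochs = 10L, "batch-size" = 64L),
  role = "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role ARN
  instance_count = 2L,
  instance_type = "ml.p3.2xlarge"
)
# Start training; the channel name and S3 location are placeholders.
chainer_estimator$fit(list(training = "s3://my-bucket/chainer/training"))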
hyperparameters()
Return hyperparameters used by your custom Chainer code during training.
Chainer$hyperparameters()
create_model()
Create a SageMaker “ChainerModel“ object that can be deployed to an “Endpoint“.
Chainer$create_model( model_server_workers = NULL, role = NULL, vpc_config_override = "VPC_CONFIG_DEFAULT", entry_point = NULL, source_dir = NULL, dependencies = NULL, ... )
model_server_workers
(int): Optional. The number of worker processes used by the inference server. If None, the server will use one worker per vCPU.
role
(str): The “ExecutionRoleArn“ IAM Role ARN for the “Model“, which is also used during transform jobs. If not specified, the role from the Estimator will be used.
vpc_config_override
(dict[str, list[str]]): Optional override for VpcConfig set on the model. Default: use subnets and security groups from this Estimator. * 'Subnets' (list[str]): List of subnet ids. * 'SecurityGroupIds' (list[str]): List of security group ids.
entry_point
(str): Path (absolute or relative) to the local Python source file which should be executed as the entry point to training. If “source_dir“ is specified, then “entry_point“ must point to a file located at the root of “source_dir“. If not specified, the training entry point is used.
source_dir
(str): Path (absolute or relative) to a directory with any other serving source code dependencies aside from the entry point file. If not specified, the model source directory from training is used.
dependencies
(list[str]): A list of paths to directories (absolute or relative) with any additional libraries that will be exported to the container. If not specified, the dependencies from training are used. This is not supported with "local code" in Local Mode.
...
: Additional kwargs passed to the ChainerModel constructor.
sagemaker.chainer.model.ChainerModel: A SageMaker “ChainerModel“ object. See :func:'~sagemaker.chainer.model.ChainerModel' for full details.
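Example (a minimal sketch, continuing from a fitted “chainer_estimator“): the serving script name and deployment settings are placeholders, and the “deploy()“ arguments shown mirror the underlying Python SDK, so they may differ slightly in this package.
# Build a ChainerModel from the trained estimator.
chainer_model <- chainer_estimator$create_model(
  model_server_workers = 2L,       # two inference workers per instance
  entry_point = "inference.py",    # hypothetical serving script
  source_dir = "chainer_code"
)
# Deploy the model to a real-time endpoint (illustrative settings).
predictor <- chainer_model$deploy(
  initial_instance_count = 1L,
  instance_type = "ml.m5.xlarge"
)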
clone()
The objects of this class are cloneable with this method.
Chainer$clone(deep = FALSE)
deep
Whether to make a deep clone.