dot-SparkProcessorBase: Handles Amazon SageMaker processing tasks for jobs using Spark


Handles Amazon SageMaker processing tasks for jobs using Spark.

Description

Base class for either the PySpark or the SparkJar processor.

Super classes

sagemaker.common::Processor -> sagemaker.common::ScriptProcessor -> .SparkProcessorBase

Methods

Public methods


Method new()

Initialize a .SparkProcessorBase instance. The .SparkProcessorBase handles Amazon SageMaker processing tasks for jobs using SageMaker Spark.

Usage
.SparkProcessorBase$new(
  role,
  instance_type,
  instance_count,
  framework_version = NULL,
  py_version = NULL,
  container_version = NULL,
  image_uri = NULL,
  volume_size_in_gb = 30,
  volume_kms_key = NULL,
  output_kms_key = NULL,
  max_runtime_in_seconds = NULL,
  base_job_name = NULL,
  sagemaker_session = NULL,
  env = NULL,
  tags = NULL,
  network_config = NULL
)
Arguments
role

(str): An AWS IAM role name or ARN. The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. After the endpoint is created, the inference code might use the IAM role, if it needs to access an AWS resource.

instance_type

(str): Type of EC2 instance to use for processing, for example, 'ml.c4.xlarge'.

instance_count

(int): The number of instances to run the Processing job with. Defaults to 1.

framework_version

(str): The version of SageMaker PySpark.

py_version

(str): The version of Python.

container_version

(str): The version of the Spark container.

image_uri

(str): The container image to use for the processing job.

volume_size_in_gb

(int): Size in GB of the EBS volume to use for storing data during processing (default: 30).

volume_kms_key

(str): A KMS key for the processing volume.

output_kms_key

(str): The KMS key id for all ProcessingOutputs.

max_runtime_in_seconds

(int): Timeout in seconds. After this amount of time Amazon SageMaker terminates the job regardless of its current status.

base_job_name

(str): Prefix for the processing job name. If not specified, the processor generates a default job name based on the processing image name and the current timestamp.

sagemaker_session

(sagemaker.session.Session): Session object which manages interactions with Amazon SageMaker APIs and any other AWS services needed. If not specified, the processor creates one using the default AWS configuration chain.

env

(dict): Environment variables to be passed to the processing job.

tags

([dict]): List of tags to be passed to the processing job.

network_config

(sagemaker.network.NetworkConfig): A NetworkConfig object that configures network isolation, encryption of inter-container traffic, security group IDs, and subnets.
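
A minimal construction sketch. .SparkProcessorBase is a base class, so in practice you would instantiate a subclass such as the PySpark processor; the role ARN, version strings, and tag values below are placeholders, not values shipped with this package.

# Hypothetical values: substitute your own IAM role ARN and versions.
spark_processor <- .SparkProcessorBase$new(
  role = "arn:aws:iam::123456789012:role/MySageMakerRole",  # placeholder ARN
  instance_type = "ml.c5.xlarge",
  instance_count = 2,
  framework_version = "3.1",          # assumed SageMaker Spark version
  py_version = "py37",                # assumed Python version tag
  volume_size_in_gb = 50,
  max_runtime_in_seconds = 3600,
  env = list(SPARK_LOG_LEVEL = "INFO"),              # dict -> named list in R
  tags = list(list(Key = "project", Value = "demo")) # list of dicts -> list of named lists
)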


Method get_run_args()

For processors (:class:'~sagemaker.spark.processing.PySparkProcessor', :class:'~sagemaker.spark.processing.SparkJar') that have special run() arguments, this method returns an object containing the normalized arguments for passing to :class:'~sagemaker.workflow.steps.ProcessingStep'.

Usage
.SparkProcessorBase$get_run_args(
  code,
  inputs = NULL,
  outputs = NULL,
  arguments = NULL
)
Arguments
code

(str): This can be an S3 URI or a local path to a file with the framework script to run.

inputs

(list[:class:'~sagemaker.processing.ProcessingInput']): Input files for the processing job. These must be provided as :class:'~sagemaker.processing.ProcessingInput' objects (default: None).

outputs

(list[:class:'~sagemaker.processing.ProcessingOutput']): Outputs for the processing job. These can be specified as either path strings or :class:'~sagemaker.processing.ProcessingOutput' objects (default: None).

arguments

(list[str]): A list of string arguments to be passed to a processing job (default: None).

Returns

Returns a RunArgs object.
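
A short sketch of normalizing arguments for a pipeline step, assuming the spark_processor object from the constructor example above; the S3 URI and arguments are placeholders.

run_args <- spark_processor$get_run_args(
  code = "s3://my-bucket/code/preprocess.py",        # placeholder S3 URI
  arguments = list("--input", "s3://my-bucket/raw")  # placeholder job arguments
)
# run_args holds the normalized code/inputs/outputs/arguments
# for constructing a workflow ProcessingStep.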


Method run()

Runs a processing job.

Usage
.SparkProcessorBase$run(
  submit_app,
  inputs = NULL,
  outputs = NULL,
  arguments = NULL,
  wait = TRUE,
  logs = TRUE,
  job_name = NULL,
  experiment_config = NULL,
  kms_key = NULL
)
Arguments
submit_app

(str): The .py or .jar file to submit to Spark as the primary application.

inputs

(list[:class:'~sagemaker.processing.ProcessingInput']): Input files for the processing job. These must be provided as :class:'~sagemaker.processing.ProcessingInput' objects (default: None).

outputs

(list[:class:'~sagemaker.processing.ProcessingOutput']): Outputs for the processing job. These can be specified as either path strings or :class:'~sagemaker.processing.ProcessingOutput' objects (default: None).

arguments

(list[str]): A list of string arguments to be passed to a processing job (default: None).

wait

(bool): Whether the call should wait until the job completes (default: True).

logs

(bool): Whether to show the logs produced by the job. Only meaningful when wait is True (default: True).

job_name

(str): Processing job name. If not specified, the processor generates a default job name, based on the base job name and current timestamp.

experiment_config

(dict[str, str]): Experiment management configuration. Dictionary contains three optional keys: 'ExperimentName', 'TrialName', and 'TrialComponentDisplayName'.

kms_key

(str): The ARN of the KMS key that is used to encrypt the user code file (default: None).
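
A minimal run sketch, again assuming the spark_processor object from the constructor example; all S3 URIs and the job name are placeholders. Inputs and outputs are omitted here, but would be passed as the ProcessingInput/ProcessingOutput objects described above.

spark_processor$run(
  submit_app = "s3://my-bucket/code/preprocess.py",  # placeholder primary application
  arguments = list("--input", "s3://my-bucket/raw",
                   "--output", "s3://my-bucket/processed"),
  wait = TRUE,                       # block until the job finishes
  logs = TRUE,                       # stream job logs while waiting
  job_name = "spark-preprocess-demo" # hypothetical job name
)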


Method start_history()

Starts a Spark history server.

Usage
.SparkProcessorBase$start_history(spark_event_logs_s3_uri = NULL)
Arguments
spark_event_logs_s3_uri

(str): S3 URI where Spark events are stored.
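
A sketch pairing the history-server methods, with a placeholder event-log URI; shut the server down with terminate_history_server() (documented below) when finished.

spark_processor$start_history(
  spark_event_logs_s3_uri = "s3://my-bucket/spark-events"  # placeholder URI
)
# ... inspect completed Spark jobs in the history server UI ...
spark_processor$terminate_history_server()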


Method terminate_history_server()

Terminates the Spark history server.

Usage
.SparkProcessorBase$terminate_history_server()

Method clone()

The objects of this class are cloneable with this method.

Usage
.SparkProcessorBase$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

