View source: R/workflow_airflow.R
training_config (R Documentation)

Export Airflow training config from an estimator
Usage:

training_config(
  estimator,
  inputs = NULL,
  job_name = NULL,
  mini_batch_size = NULL
)
Arguments:

estimator: (sagemaker.estimator.EstimatorBase) The estimator to export a training config from. Can be a BYO estimator, a Framework estimator, or an Amazon algorithm estimator.
inputs: Information about the training data. Please refer to the `fit()` method of the associated estimator, as this can take any of the following forms:

  * (str) The S3 location where training data is saved.
  * (dict[str, str] or dict[str, sagemaker.inputs.TrainingInput]) If using multiple channels for training data, a dict mapping channel names to strings or `sagemaker.inputs.TrainingInput` objects.
  * (sagemaker.inputs.TrainingInput) Channel configuration for S3 data sources that can provide additional information about the training dataset. See `sagemaker.inputs.TrainingInput` for full details.
  * (sagemaker.amazon.amazon_estimator.RecordSet) A collection of Amazon `Record` objects serialized and stored in S3, for use with an estimator for an Amazon algorithm.
  * (list[sagemaker.amazon.amazon_estimator.RecordSet]) A list of `RecordSet` objects, where each instance is a different channel of training data.
job_name: (str) Specify a training job name if needed.
mini_batch_size: (int) Specify this argument only when the estimator is a built-in estimator of an Amazon algorithm. For other estimators, specify the batch size in the estimator itself.
Value:

list: A training config that can be used directly by SageMakerTrainingOperator in Airflow.
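A minimal sketch of how the exported config might be produced and handed to Airflow. This assumes the `sagemaker` R package (which mirrors the Python SageMaker SDK) is installed and AWS credentials are configured; the `Estimator$new()` constructor arguments, image URI, role ARN, and S3 paths below are hypothetical placeholders, not values from this documentation.

```r
# Hedged example: all identifiers below (image URI, role, bucket, job name)
# are placeholder assumptions for illustration only.
library(sagemaker)

# Construct a BYO estimator; argument names mirror the Python SDK's
# sagemaker.estimator.Estimator and may differ in your package version.
estimator <- Estimator$new(
  image_uri = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
  role = "arn:aws:iam::123456789012:role/SageMakerRole",
  instance_count = 1,
  instance_type = "ml.m5.xlarge"
)

# Export a config list for Airflow's SageMakerTrainingOperator.
# `inputs` here uses the simplest (str) form: an S3 prefix holding
# the training data. `mini_batch_size` is omitted because this is not
# an Amazon built-in algorithm estimator.
config <- training_config(
  estimator,
  inputs = "s3://my-bucket/train/",
  job_name = "my-training-job"
)

# The resulting list would typically be serialized into the Airflow DAG
# definition that instantiates SageMakerTrainingOperator.
str(config)
```

The returned list maps onto the `config` parameter of Airflow's SageMakerTrainingOperator, which forwards it to the SageMaker `CreateTrainingJob` API.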