NTM
The resulting topics contain word groupings based on their statistical distribution. For example, documents that contain frequent occurrences of words such as "bike", "car", "train", "mileage", and "speed" are likely to share a topic on "transportation".
sagemaker.mlcore::EstimatorBase
-> sagemaker.mlcore::AmazonAlgorithmEstimatorBase
-> NTM
repo_name
SageMaker repository name for the framework.
repo_version
Version of the framework.
.module
Mimics the Python module.
num_topics
The number of topics for NTM to find within the data.
encoder_layers
The number of layers in the encoder and the output size of each layer.
epochs
Maximum number of passes over the training data.
encoder_layers_activation
Activation function to use in the encoder layers.
optimizer
Optimizer to use for training.
tolerance
Maximum relative change in the loss function within the last num_patience_epochs epochs, below which early stopping is triggered.
num_patience_epochs
Number of successive epochs over which early stopping criterion is evaluated.
batch_norm
Whether to use batch normalization during training.
rescale_gradient
Rescale factor for the gradient.
clip_gradient
Maximum magnitude for each gradient component.
weight_decay
Weight decay coefficient.
learning_rate
Learning rate for the optimizer.
sagemaker.mlcore::EstimatorBase$latest_job_debugger_artifacts_path()
sagemaker.mlcore::EstimatorBase$latest_job_profiler_artifacts_path()
sagemaker.mlcore::EstimatorBase$latest_job_tensorboard_artifacts_path()
sagemaker.mlcore::AmazonAlgorithmEstimatorBase$hyperparameters()
sagemaker.mlcore::AmazonAlgorithmEstimatorBase$prepare_workflow_for_training()
sagemaker.mlcore::AmazonAlgorithmEstimatorBase$training_image_uri()
new()
Neural Topic Model (NTM) is an :class:'Estimator' used for unsupervised learning. This Estimator may be fit via calls to :meth:'~sagemaker.amazon.amazon_estimator.AmazonAlgorithmEstimatorBase.fit'. It requires Amazon :class:'~sagemaker.amazon.record_pb2.Record' protobuf serialized data to be stored in S3. There is a utility :meth:'~sagemaker.amazon.amazon_estimator.AmazonAlgorithmEstimatorBase.record_set' that can be used to upload data to S3 and create a :class:'~sagemaker.amazon.amazon_estimator.RecordSet' to be passed to the 'fit' call. To learn more about the Amazon protobuf Record class and how to prepare bulk data in this format, please consult the AWS technical documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-training.html

After this Estimator is fit, model data is stored in S3. The model may be deployed to an Amazon SageMaker Endpoint by invoking :meth:'~sagemaker.amazon.estimator.EstimatorBase.deploy'. As well as deploying an Endpoint, 'deploy' returns a :class:'~sagemaker.amazon.ntm.NTMPredictor' object that can be used for inference calls using the trained model hosted in the SageMaker Endpoint.

NTM Estimators can be configured by setting hyperparameters. The available hyperparameters for NTM are documented below, followed by a brief usage sketch. For further information on the AWS NTM algorithm, please consult the AWS technical documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/ntm.html
NTM$new(
  role,
  instance_count,
  instance_type,
  num_topics,
  encoder_layers = NULL,
  epochs = NULL,
  encoder_layers_activation = NULL,
  optimizer = NULL,
  tolerance = NULL,
  num_patience_epochs = NULL,
  batch_norm = NULL,
  rescale_gradient = NULL,
  clip_gradient = NULL,
  weight_decay = NULL,
  learning_rate = NULL,
  ...
)
role
(str): An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. After the endpoint is created, the inference code might use the IAM role if it accesses AWS resources.
instance_count
(int): Number of Amazon EC2 instances to use for training.
instance_type
(str): Type of EC2 instance to use for training, for example, 'ml.c4.xlarge'.
num_topics
(int): Required. The number of topics for NTM to find within the data.
encoder_layers
(list): Optional. The number of layers in the encoder and the output size of each layer.
epochs
(int): Optional. Maximum number of passes over the training data.
encoder_layers_activation
(str): Optional. Activation function to use in the encoder layers.
optimizer
(str): Optional. Optimizer to use for training.
tolerance
(float): Optional. Maximum relative change in the loss function within the last num_patience_epochs epochs, below which early stopping is triggered.
num_patience_epochs
(int): Optional. Number of successive epochs over which early stopping criterion is evaluated.
batch_norm
(bool): Optional. Whether to use batch normalization during training.
rescale_gradient
(float): Optional. Rescale factor for the gradient.
clip_gradient
(float): Optional. Maximum magnitude for each gradient component.
weight_decay
(float): Optional. Weight decay coefficient. Adds L2 regularization.
learning_rate
(float): Optional. Learning rate for the optimizer.
...
: Base class keyword argument values.
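The following is a minimal, hedged sketch of constructing and fitting an NTM estimator. The package name ('sagemaker.mlframework'), the role ARN, the hyperparameter values, and the 'train_matrix' document-term matrix are illustrative assumptions; 'record_set' and 'fit' are the inherited methods referenced in the description above.

library(sagemaker.mlframework)  # assumption: package exporting the NTM class

# Hypothetical IAM role ARN and illustrative hyperparameter values.
ntm_estimator <- NTM$new(
  role = "arn:aws:iam::123456789012:role/SageMakerRole",
  instance_count = 1,
  instance_type = "ml.c4.xlarge",
  num_topics = 20,
  epochs = 50,
  learning_rate = 0.01
)

# 'train_matrix' stands in for a numeric bag-of-words matrix
# (documents x vocabulary terms). record_set() uploads it to S3 as
# protobuf Records and returns a RecordSet to pass to fit().
records <- ntm_estimator$record_set(train_matrix)
ntm_estimator$fit(records)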
create_model()
Return a :class:'~sagemaker.amazon.NTMModel' referencing the latest S3 model data produced by this Estimator. A hedged usage sketch follows the argument descriptions below.
NTM$create_model(vpc_config_override = "VPC_CONFIG_DEFAULT", ...)
vpc_config_override
(dict[str, list[str]]): Optional override for VpcConfig set on the model. Default: use subnets and security groups from this Estimator.

* 'Subnets' (list[str]): List of subnet ids.
* 'SecurityGroupIds' (list[str]): List of security group ids.
...
: Additional kwargs passed to the NTMModel constructor.
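A hedged sketch of the deployment path described above; 'ntm_estimator' continues from the previous sketch, and the endpoint instance settings are illustrative.

# deploy() hosts the trained model on a SageMaker Endpoint and
# returns an NTMPredictor for inference calls.
predictor <- ntm_estimator$deploy(
  initial_instance_count = 1,
  instance_type = "ml.m4.xlarge"
)

# Alternatively, build a model object that references the latest
# S3 model artifacts without deploying:
ntm_model <- ntm_estimator$create_model()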
.prepare_for_training()
Set hyperparameters needed for training. This method will also validate 'source_dir'.
NTM$.prepare_for_training(records, mini_batch_size, job_name = NULL)
records
(RecordSet): The records to train this Estimator on.
mini_batch_size
(int or NULL): The size of each mini-batch to use when training. If NULL, a default value will be used.
job_name
(str): Name of the training job to be created. If not specified, one is generated using the base name given to the constructor, if applicable.
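This method is normally invoked internally by 'fit'; a hedged sketch of a direct call, continuing from the sketches above with an illustrative mini-batch size:

# fit() calls this under the hood to set data-dependent
# hyperparameters before the training job starts.
ntm_estimator$.prepare_for_training(records, mini_batch_size = 256)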
clone()
The objects of this class are cloneable with this method.
NTM$clone(deep = FALSE)
deep
Whether to make a deep clone.