LDA: An unsupervised learning algorithm attempting to describe data as distinct categories

R Documentation

An unsupervised learning algorithm attempting to describe data as distinct categories.

Description

LDA is most commonly used to discover a user-specified number of topics shared by documents within a text corpus. Here each observation is a document, the features are the presence (or occurrence count) of each word, and the categories are the topics.

Super classes

sagemaker.mlcore::EstimatorBase -> sagemaker.mlcore::AmazonAlgorithmEstimatorBase -> LDA

Public fields

repo_name

SageMaker repository name for the framework.

repo_version

Version of the framework.

.module

Mimics the Python module.

Active bindings

num_topics

The number of topics for LDA to find within the data.

alpha0

Initial guess for the concentration parameter.

max_restarts

The number of restarts to perform during the Alternating Least Squares (ALS) spectral decomposition phase of the algorithm.

max_iterations

The maximum number of iterations to perform during the ALS phase of the algorithm.

tol

Target error tolerance for the ALS phase of the algorithm.
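
These hyperparameters are exposed as R6 active bindings, so they can be read or updated on an existing estimator before training. A minimal sketch, assuming an estimator object 'lda' already created with 'LDA$new()' (see Method 'new()' below):

lda$num_topics        # read the current number of topics
lda$alpha0 <- 0.1     # adjust the concentration parameter before fitting
lda$tol <- 1e-4       # tighten the ALS error tolerance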

Methods

Public methods

Inherited methods

Method new()

Latent Dirichlet Allocation (LDA) is an :class:'Estimator' used for unsupervised learning. Amazon SageMaker Latent Dirichlet Allocation is an unsupervised learning algorithm that attempts to describe a set of observations as a mixture of distinct categories. LDA is most commonly used to discover a user-specified number of topics shared by documents within a text corpus. Here each observation is a document, the features are the presence (or occurrence count) of each word, and the categories are the topics.

This Estimator may be fit via calls to :meth:'~sagemaker.amazon.amazon_estimator.AmazonAlgorithmEstimatorBase.fit'. It requires Amazon :class:'~sagemaker.amazon.record_pb2.Record' protobuf serialized data to be stored in S3. There is a utility :meth:'~sagemaker.amazon.amazon_estimator.AmazonAlgorithmEstimatorBase.record_set' that can be used to upload data to S3 and create a :class:'~sagemaker.amazon.amazon_estimator.RecordSet' to be passed to the 'fit' call. To learn more about the Amazon protobuf Record class and how to prepare bulk data in this format, please consult the AWS technical documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-training.html

After this Estimator is fit, model data is stored in S3. The model may be deployed to an Amazon SageMaker Endpoint by invoking :meth:'~sagemaker.amazon.estimator.EstimatorBase.deploy'. As well as deploying an Endpoint, deploy returns a :class:'~sagemaker.amazon.lda.LDAPredictor' object that can be used for inference calls using the trained model hosted in the SageMaker Endpoint.

LDA Estimators can be configured by setting hyperparameters. The available hyperparameters for LDA are documented below. For further information on the AWS LDA algorithm, please consult the AWS technical documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/lda.html

Usage
LDA$new(
  role,
  instance_type,
  num_topics,
  alpha0 = NULL,
  max_restarts = NULL,
  max_iterations = NULL,
  tol = NULL,
  ...
)
Arguments
role

(str): An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. After the endpoint is created, the inference code might use the IAM role, if it needs to access an AWS resource.

instance_type

(str): Type of EC2 instance to use for training, for example, 'ml.c4.xlarge'.

num_topics

(int): The number of topics for LDA to find within the data.

alpha0

(float): Optional. Initial guess for the concentration parameter.

max_restarts

(int): Optional. The number of restarts to perform during the Alternating Least Squares (ALS) spectral decomposition phase of the algorithm.

max_iterations

(int): Optional. The maximum number of iterations to perform during the ALS phase of the algorithm.

tol

(float): Optional. Target error tolerance for the ALS phase of the algorithm.

...

: base class keyword argument values.
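
The constructor and the inherited methods described above can be combined into a short end-to-end sketch. The package name, the placeholder role ARN, instance types, mini-batch size, the 'train_matrix' input, and the exact signatures of the inherited 'record_set()', 'fit()', and 'deploy()' methods are assumptions based on the Python SageMaker SDK and may differ in this package:

library(sagemaker.mlframework)   # assumed package name for this repository

lda <- LDA$new(
  role = "arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder IAM role ARN
  instance_type = "ml.c4.xlarge",
  num_topics = 10
)

# Upload a numeric document-term matrix to S3 as a RecordSet
# (record_set() is inherited from AmazonAlgorithmEstimatorBase).
records <- lda$record_set(train_matrix)

# Train the model; model artifacts are written to S3.
# mini_batch_size is a placeholder value for this sketch.
lda$fit(records, mini_batch_size = 100)

# Deploy to a SageMaker Endpoint; deploy() returns an LDAPredictor for inference.
predictor <- lda$deploy(initial_instance_count = 1, instance_type = "ml.m4.xlarge")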


Method create_model()

Return a :class:'~sagemaker.amazon.LDAModel' referencing the latest S3 model data produced by this Estimator.

Usage
LDA$create_model(vpc_config_override = "VPC_CONFIG_DEFAULT", ...)
Arguments
vpc_config_override

(dict[str, list[str]]): Optional override for VpcConfig set on the model. Default: use subnets and security groups from this Estimator.

* 'Subnets' (list[str]): List of subnet ids.

* 'SecurityGroupIds' (list[str]): List of security group ids.

...

: Additional kwargs passed to the LDAModel constructor.
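
For example, a minimal sketch of passing a VPC override when creating the model, assuming a fitted estimator 'lda' and placeholder subnet and security group ids:

vpc_config <- list(
  "Subnets" = list("subnet-0abc1234"),         # placeholder subnet id
  "SecurityGroupIds" = list("sg-0def5678")     # placeholder security group id
)
model <- lda$create_model(vpc_config_override = vpc_config)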


Method .prepare_for_training()

Set hyperparameters needed for training. This method will also validate 'source_dir'.

Usage
LDA$.prepare_for_training(records, mini_batch_size = NULL, job_name = NULL)
Arguments
records

(RecordSet): The records to train this Estimator on.

mini_batch_size

(int or NULL): The size of each mini-batch to use when training. If NULL, a default value will be used.

job_name

(str): Name of the training job to be created. If not specified, one is generated, using the base name given to the constructor if applicable.


Method clone()

The objects of this class are cloneable with this method.

Usage
LDA$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

