PCA | R Documentation
An unsupervised machine learning algorithm to reduce feature dimensionality. As a result, the number of features within a dataset is reduced, but the dataset still retains as much information as possible.
sagemaker.mlcore::EstimatorBase
-> sagemaker.mlcore::AmazonAlgorithmEstimatorBase
-> PCA
repo_name
The SageMaker repository name for the framework.
repo_version
The version of the framework.
DEFAULT_MINI_BATCH_SIZE
The default size of each mini-batch to use when training.
.module
Mimics the Python module.
num_components
The number of principal components. Must be greater than zero.
algorithm_mode
Mode for computing the principal components.
subtract_mean
Whether the data should be unbiased both during training and at inference.
extra_components
As the value grows larger, the solution becomes more accurate but the runtime and memory consumption increase linearly.
sagemaker.mlcore::EstimatorBase$latest_job_debugger_artifacts_path()
sagemaker.mlcore::EstimatorBase$latest_job_profiler_artifacts_path()
sagemaker.mlcore::EstimatorBase$latest_job_tensorboard_artifacts_path()
sagemaker.mlcore::AmazonAlgorithmEstimatorBase$hyperparameters()
sagemaker.mlcore::AmazonAlgorithmEstimatorBase$prepare_workflow_for_training()
sagemaker.mlcore::AmazonAlgorithmEstimatorBase$training_image_uri()
new()
A Principal Components Analysis (PCA) :class:'~sagemaker.amazon.amazon_estimator.AmazonAlgorithmEstimatorBase'. This Estimator may be fit via calls to :meth:'~sagemaker.amazon.amazon_estimator.AmazonAlgorithmEstimatorBase.fit_ndarray' or :meth:'~sagemaker.amazon.amazon_estimator.AmazonAlgorithmEstimatorBase.fit'. The former allows a PCA model to be fit on a 2-dimensional numpy array. The latter requires Amazon :class:'~sagemaker.amazon.record_pb2.Record' protobuf serialized data to be stored in S3. To learn more about the Amazon protobuf Record class and how to prepare bulk data in this format, please consult the AWS technical documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-training.html

After this Estimator is fit, model data is stored in S3. The model may be deployed to an Amazon SageMaker Endpoint by invoking :meth:'~sagemaker.amazon.estimator.EstimatorBase.deploy'. As well as deploying an Endpoint, deploy returns a :class:'~sagemaker.amazon.pca.PCAPredictor' object that can be used to project input vectors to the learned lower-dimensional representation, using the trained PCA model hosted in the SageMaker Endpoint.

PCA Estimators can be configured by setting hyperparameters. The available hyperparameters for PCA are documented below; a construction sketch follows the argument list. For further information on the AWS PCA algorithm, please consult the AWS technical documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/pca.html

This Estimator uses Amazon SageMaker PCA to perform training and to host deployed models. To learn more about Amazon SageMaker PCA, please read: https://docs.aws.amazon.com/sagemaker/latest/dg/how-pca-works.html
PCA$new(
  role,
  instance_count,
  instance_type,
  num_components,
  algorithm_mode = NULL,
  subtract_mean = NULL,
  extra_components = NULL,
  ...
)
role
(str): An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. After the endpoint is created, the inference code might use the IAM role if it needs to access AWS resources.
instance_count
(int): Number of Amazon EC2 instances to use for training.
instance_type
(str): Type of EC2 instance to use for training, for example, 'ml.c4.xlarge'.
num_components
(int): The number of principal components. Must be greater than zero.
algorithm_mode
(str): Mode for computing the principal components. One of 'regular' or 'randomized'.
subtract_mean
(bool): Whether the data should be unbiased both during training and at inference.
extra_components
(int): As the value grows larger, the solution becomes more accurate but the runtime and memory consumption increase linearly. If this value is unset or set to -1, then a default value equal to the maximum of 10 and num_components will be used. Valid for randomized mode only.
...
: base class keyword argument values.
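The following is a minimal construction sketch, not taken from the package itself: the IAM role ARN is a placeholder, and 'sagemaker.mlframework' is assumed to be the package exporting PCA.

library(sagemaker.mlframework)

# Placeholder execution role; substitute a real SageMaker role name or ARN.
pca_estimator <- PCA$new(
  role = "arn:aws:iam::123456789012:role/SageMakerRole",
  instance_count = 1L,
  instance_type = "ml.c4.xlarge",
  num_components = 10L,
  algorithm_mode = "randomized",
  subtract_mean = TRUE,
  extra_components = 20L
)

# 'records' is assumed to be a RecordSet of Amazon Record protobuf data
# stored in S3 (see 'fit' in the class description above).
# pca_estimator$fit(records)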
create_model()
Return a :class:'~sagemaker.amazon.pca.PCAModel' referencing the latest S3 model data produced by this Estimator. A usage sketch follows the arguments below.
PCA$create_model(vpc_config_override = "VPC_CONFIG_DEFAULT", ...)
vpc_config_override
(dict[str, list[str]]): Optional override for the VpcConfig set on the model. Default: use subnets and security groups from this Estimator.
* 'Subnets' (list[str]): List of subnet ids.
* 'SecurityGroupIds' (list[str]): List of security group ids.
...
: Additional kwargs passed to the PCAModel constructor.
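A minimal usage sketch, assuming 'pca_estimator' was constructed and fit as in the sketch above; the deployment settings in the comment are placeholders.

# Reference the latest model data produced by the fitted Estimator.
pca_model <- pca_estimator$create_model()

# Alternatively, deploy straight from the Estimator; per the class
# description above, deploy returns a PCAPredictor. The instance
# settings here are illustrative only.
# predictor <- pca_estimator$deploy(
#   initial_instance_count = 1L,
#   instance_type = "ml.m5.large"
# )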
.prepare_for_training()
Set the hyperparameters needed for training; a sketch of the call appears after the arguments below.
PCA$.prepare_for_training(records, mini_batch_size = NULL, job_name = NULL)
records
(:class:'~RecordSet'): The records to train this 'Estimator' on.
mini_batch_size
(int or NULL): The size of each mini-batch to use when training. If 'NULL', a default value will be used.
job_name
(str): Name of the training job to be created. If not specified, one is generated, using the base name given to the constructor if applicable.
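A sketch of the call, assuming 'records' is an existing RecordSet; the method is normally run for you as part of the fit workflow, so calling it directly is rarely needed, and the job name below is a placeholder.

pca_estimator$.prepare_for_training(
  records = records,
  mini_batch_size = 500L,
  job_name = "pca-training-example"
)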
clone()
The objects of this class are cloneable with this method.
PCA$clone(deep = FALSE)
deep
Whether to make a deep clone.