RandomCutForest    R Documentation
Random Cut Forest (RCF) is an unsupervised algorithm for detecting anomalous data points: observations that diverge from otherwise well-structured or patterned data. Anomalies can manifest as unexpected spikes in time series data, breaks in periodicity, or unclassifiable data points.
sagemaker.mlcore::EstimatorBase
-> sagemaker.mlcore::AmazonAlgorithmEstimatorBase
-> RandomCutForest
repo_name
The SageMaker repository name for the framework.
repo_version
The version of the framework.
MINI_BATCH_SIZE
The size of each mini-batch to use when training.
.module
Mimics the Python module path.
eval_metrics
JSON list of metric types used to report the score for the model.
num_trees
The number of trees used in the forest.
num_samples_per_tree
The number of samples used to build each tree in the forest.
feature_dim
The dimension of the input feature vectors (the number of features in the data set).
sagemaker.mlcore::EstimatorBase$latest_job_debugger_artifacts_path()
sagemaker.mlcore::EstimatorBase$latest_job_profiler_artifacts_path()
sagemaker.mlcore::EstimatorBase$latest_job_tensorboard_artifacts_path()
sagemaker.mlcore::AmazonAlgorithmEstimatorBase$hyperparameters()
sagemaker.mlcore::AmazonAlgorithmEstimatorBase$prepare_workflow_for_training()
sagemaker.mlcore::AmazonAlgorithmEstimatorBase$training_image_uri()
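As a brief, non-authoritative sketch, the inherited 'hyperparameters()' and 'training_image_uri()' helpers listed above can be used to inspect what a configured estimator will submit to SageMaker. The package name and IAM role ARN below are placeholders, and the snippet assumes the R port mirrors the Python SDK's constructor interface:

# Sketch only: package name and IAM role ARN are placeholders.
library(sagemaker.mlframework)

rcf <- RandomCutForest$new(
  role = "arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical IAM role
  instance_count = 1,
  instance_type = "ml.m5.xlarge",
  num_trees = 50,
  num_samples_per_tree = 256
)

rcf$hyperparameters()     # named list of algorithm hyperparameters sent to the training job
rcf$training_image_uri()  # URI of the built-in Random Cut Forest training container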
new()
An 'Estimator' class implementing a Random Cut Forest. Typically used for anomaly detection, this Estimator may be fit via calls to :meth:'~sagemaker.amazon.amazon_estimator.AmazonAlgorithmEstimatorBase.fit'. It requires Amazon :class:'~sagemaker.amazon.record_pb2.Record' protobuf serialized data to be stored in S3. There is a utility :meth:'~sagemaker.amazon.amazon_estimator.AmazonAlgorithmEstimatorBase.record_set' that can be used to upload data to S3 and create a :class:'~sagemaker.amazon.amazon_estimator.RecordSet' to be passed to the 'fit' call (see the sketch after the argument descriptions below). To learn more about the Amazon protobuf Record class and how to prepare bulk data in this format, please consult the AWS technical documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-training.html

After this Estimator is fit, model data is stored in S3. The model may be deployed to an Amazon SageMaker Endpoint by invoking :meth:'~sagemaker.amazon.estimator.EstimatorBase.deploy'. As well as deploying an Endpoint, deploy returns a :class:'~sagemaker.amazon.randomcutforest.RandomCutForestPredictor' object that can be used for inference calls using the trained model hosted in the SageMaker Endpoint.

RandomCutForest Estimators can be configured by setting hyperparameters. The available hyperparameters for RandomCutForest are documented below. For further information on the AWS Random Cut Forest algorithm, please consult the AWS technical documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/randomcutforest.html
RandomCutForest$new(role, instance_count, instance_type, num_samples_per_tree = NULL, num_trees = NULL, eval_metrics = NULL, ...)
role
(str): An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. After the endpoint is created, the inference code might use the IAM role when accessing AWS resources.
instance_count
(int): Number of Amazon EC2 instances to use for training.
instance_type
(str): Type of EC2 instance to use for training, for example, 'ml.c4.xlarge'.
num_samples_per_tree
(int): Optional. The number of samples used to build each tree in the forest. The total number of samples drawn from the train dataset is num_trees * num_samples_per_tree.
num_trees
(int): Optional. The number of trees used in the forest.
eval_metrics
(list): Optional. JSON list of metric types used to report the score for the model. Allowed values are "accuracy" and "precision_recall_fscore" (positive and negative precision, recall, and F1 scores). If test data is provided, the score is reported for all requested metrics.
...
: base class keyword argument values.
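A minimal end-to-end sketch of constructing and fitting the estimator, assuming the R port mirrors the Python SDK's 'record_set()' and 'fit()' interface. The package name and IAM role ARN are placeholders:

# Sketch only: package name and IAM role ARN are placeholders.
library(sagemaker.mlframework)

# train_matrix: a numeric matrix with one observation per row
train_matrix <- matrix(rnorm(1000 * 10), nrow = 1000, ncol = 10)

rcf <- RandomCutForest$new(
  role = "arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical IAM role
  instance_count = 1,
  instance_type = "ml.m5.xlarge",
  num_samples_per_tree = 256,
  num_trees = 100
)

# record_set() serializes the matrix to the Amazon Record protobuf format,
# uploads it to S3, and returns a RecordSet object suitable for fit().
records <- rcf$record_set(train_matrix)
rcf$fit(records)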
create_model()
Return a :class:'~sagemaker.amazon.RandomCutForestModel' referencing the latest s3 model data produced by this Estimator.
RandomCutForest$create_model(vpc_config_override = "VPC_CONFIG_DEFAULT", ...)
vpc_config_override
(dict[str, list[str]]): Optional override for VpcConfig set on the model. Default: use subnets and security groups from this Estimator. * 'Subnets' (list[str]): List of subnet ids. * 'SecurityGroupIds' (list[str]): List of security group ids.
...
: Additional kwargs passed to the RandomCutForestModel constructor.
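A hedged sketch of deploying the fitted estimator and scoring new data. The 'deploy()' and 'predict()' arguments shown here follow the Python SDK's conventions and may differ in the R port; instance type and test data are illustrative:

# Continuing from a fitted 'rcf' estimator as in the constructor example above.
model <- rcf$create_model()  # references the latest model artifacts in S3

predictor <- rcf$deploy(
  initial_instance_count = 1,
  instance_type = "ml.m5.xlarge"  # illustrative endpoint instance type
)

# Score new observations; higher anomaly scores indicate points that diverge
# from the structure learned during training.
test_matrix <- matrix(rnorm(50 * 10), nrow = 50, ncol = 10)
scores <- predictor$predict(test_matrix)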
.prepare_for_training()
Set hyperparameters needed for training. This method will also validate 'source_dir'.
RandomCutForest$.prepare_for_training(records, mini_batch_size = NULL, job_name = NULL)
records
(RecordSet): The records to train this Estimator on.
mini_batch_size
(int or NULL): The size of each mini-batch to use when training. If NULL, a default value will be used.
job_name
(str): Name of the training job to be created. If not specified, one is generated, using the base name given to the constructor if applicable.
clone()
The objects of this class are cloneable with this method.
RandomCutForest$clone(deep = FALSE)
deep
Whether to make a deep clone.