datapipeline: AWS Data Pipeline

View source: R/paws.R

AWS Data Pipeline

Description

AWS Data Pipeline configures and manages a data-driven workflow called a pipeline. AWS Data Pipeline handles the details of scheduling and ensuring that data dependencies are met so that your application can focus on processing the data.

AWS Data Pipeline provides a JAR implementation of a task runner called AWS Data Pipeline Task Runner. AWS Data Pipeline Task Runner provides logic for common data management scenarios, such as performing database queries and running data analysis using Amazon Elastic MapReduce (Amazon EMR). You can use AWS Data Pipeline Task Runner as your task runner, or you can write your own task runner to provide custom data management.

AWS Data Pipeline implements two main sets of functionality. Use the first set to create a pipeline and define data sources, schedules, dependencies, and the transforms to be performed on the data. Use the second set in your task runner application to receive the next task ready for processing. The logic for performing the task, such as querying the data, running data analysis, or converting the data from one format to another, is contained within the task runner. The task runner performs the task assigned to it by the web service, reporting progress to the web service as it does so. When the task is done, the task runner reports the final success or failure of the task to the web service.
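
Sketched below (this example is not from the service documentation itself): the first set of calls creates, defines, and activates a pipeline, and the second set drives a task runner loop. The pipeline definition, worker group name, and identifiers are placeholders.

# First set: create a pipeline, give it a minimal placeholder definition,
# and start it.
svc <- datapipeline()
pipeline <- svc$create_pipeline(
  name = "my-pipeline",          # display name (placeholder)
  uniqueId = "my-pipeline-001"   # caller-supplied idempotency token
)
svc$put_pipeline_definition(
  pipelineId = pipeline$pipelineId,
  pipelineObjects = list(
    list(
      id = "Default",
      name = "Default",
      fields = list(
        list(key = "workerGroup", stringValue = "myWorkerGroup")
      )
    )
  )
)
svc$activate_pipeline(pipelineId = pipeline$pipelineId)

# Second set: a task runner polls for work, performs it, and reports the result.
task <- svc$poll_for_task(workerGroup = "myWorkerGroup")
if (length(task$taskObject) > 0) {
  # ... run the task described in task$taskObject$objects ...
  svc$set_task_status(
    taskId = task$taskObject$taskId,
    taskStatus = "FINISHED"
  )
}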

Usage

datapipeline(
  config = list(),
  credentials = list(),
  endpoint = NULL,
  region = NULL
)

Arguments

config

Optional configuration of credentials, endpoint, and/or region.

  • credentials:

    • creds:

      • access_key_id: AWS access key ID

      • secret_access_key: AWS secret access key

      • session_token: AWS temporary session token

    • profile: The name of a profile to use. If not given, then the default profile is used.

    • anonymous: Set anonymous credentials.

  • endpoint: The complete URL to use for the constructed client.

  • region: The AWS Region used in instantiating the client.

  • close_connection: Immediately close all HTTP connections.

  • timeout: The time in seconds until a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.

  • s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e., http://s3.amazonaws.com/BUCKET/KEY.

  • sts_regional_endpoint: Set the STS regional endpoint resolver to regional or legacy. See https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html

credentials

Optional credentials shorthand for the config parameter.

  • creds:

    • access_key_id: AWS access key ID

    • secret_access_key: AWS secret access key

    • session_token: AWS temporary session token

  • profile: The name of a profile to use. If not given, then the default profile is used.

  • anonymous: Set anonymous credentials.

endpoint

Optional shorthand for the complete URL to use for the constructed client.

region

Optional shorthand for the AWS Region used in instantiating the client.
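
For example, the client can be constructed with either the config list or the shorthand arguments; the two calls below are equivalent (the profile name and Region are placeholders):

svc <- datapipeline(
  config = list(
    credentials = list(profile = "my-profile"),
    region = "us-east-1"
  )
)

svc <- datapipeline(
  credentials = list(profile = "my-profile"),
  region = "us-east-1"
)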

Value

A client for the service. You can call the service's operations using syntax like svc$operation(...), where svc is the name you've assigned to the client. The available operations are listed in the Operations section.
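
For example, list_pipelines requires no arguments, so a first call might look like the following (the field names in the result follow the AWS Data Pipeline API response shape):

svc <- datapipeline()
resp <- svc$list_pipelines()
resp$pipelineIdList  # id/name pairs of the pipelines you can access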

Service syntax

svc <- datapipeline(
  config = list(
    credentials = list(
      creds = list(
        access_key_id = "string",
        secret_access_key = "string",
        session_token = "string"
      ),
      profile = "string",
      anonymous = "logical"
    ),
    endpoint = "string",
    region = "string",
    close_connection = "logical",
    timeout = "numeric",
    s3_force_path_style = "logical",
    sts_regional_endpoint = "string"
  ),
  credentials = list(
    creds = list(
      access_key_id = "string",
      secret_access_key = "string",
      session_token = "string"
    ),
    profile = "string",
    anonymous = "logical"
  ),
  endpoint = "string",
  region = "string"
)

Operations

activate_pipeline Validates the specified pipeline and starts processing pipeline tasks
add_tags Adds or modifies tags for the specified pipeline
create_pipeline Creates a new, empty pipeline
deactivate_pipeline Deactivates the specified running pipeline
delete_pipeline Deletes a pipeline, its pipeline definition, and its run history
describe_objects Gets the object definitions for a set of objects associated with the pipeline
describe_pipelines Retrieves metadata about one or more pipelines
evaluate_expression Task runners call EvaluateExpression to evaluate a string in the context of the specified object
get_pipeline_definition Gets the definition of the specified pipeline
list_pipelines Lists the pipeline identifiers for all active pipelines that you have permission to access
poll_for_task Task runners call PollForTask to receive a task to perform from AWS Data Pipeline
put_pipeline_definition Adds tasks, schedules, and preconditions to the specified pipeline
query_objects Queries the specified pipeline for the names of objects that match the specified set of conditions
remove_tags Removes existing tags from the specified pipeline
report_task_progress Task runners call ReportTaskProgress when assigned a task to acknowledge that it has the task
report_task_runner_heartbeat Task runners call ReportTaskRunnerHeartbeat every 15 minutes to indicate that they are operational
set_status Requests that the status of the specified physical or logical pipeline objects be updated in the specified pipeline
set_task_status Task runners call SetTaskStatus to notify AWS Data Pipeline that a task is completed and provide information about the final status
validate_pipeline_definition Validates the specified pipeline definition to ensure that it is well formed and can be run without error

Examples

## Not run: 
svc <- datapipeline()
# Activate a pipeline; the pipeline ID below is a placeholder
svc$activate_pipeline(
  pipelineId = "df-0123456789ABCDEFGHIJ"
)

## End(Not run)

