timestreamwrite: Amazon Timestream Write

View source: R/paws.R


Amazon Timestream Write

Description

Amazon Timestream is a fast, scalable, fully managed time-series database service that makes it easy to store and analyze trillions of time-series data points per day. With Timestream, you can easily store and analyze IoT sensor data to derive insights from your IoT applications. You can analyze industrial telemetry to streamline equipment management and maintenance. You can also store and analyze log data and metrics to improve the performance and availability of your applications.

Timestream is built from the ground up to effectively ingest, process, and store time-series data. It organizes data to optimize query processing. It automatically scales based on the volume of data ingested and on the query volume to ensure you receive optimal performance while inserting and querying data. As your data grows over time, Timestream’s adaptive query processing engine spans across storage tiers to provide fast analysis while reducing costs.

Usage

timestreamwrite(
  config = list(),
  credentials = list(),
  endpoint = NULL,
  region = NULL
)

Arguments

config

Optional configuration of credentials, endpoint, and/or region. See the construction example after the argument descriptions below.

  • credentials:

    • creds:

      • access_key_id: AWS access key ID

      • secret_access_key: AWS secret access key

      • session_token: AWS temporary session token

    • profile: The name of a profile to use. If not given, then the default profile is used.

    • anonymous: Set anonymous credentials.

  • endpoint: The complete URL to use for the constructed client.

  • region: The AWS Region used in instantiating the client.

  • close_connection: Immediately close all HTTP connections.

  • timeout: The time in seconds until a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.

  • s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e., http://s3.amazonaws.com/BUCKET/KEY.

  • sts_regional_endpoint: Set the STS regional endpoint resolver to regional or legacy. See https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html

credentials

Optional credentials shorthand for the config parameter.

  • creds:

    • access_key_id: AWS access key ID

    • secret_access_key: AWS secret access key

    • session_token: AWS temporary session token

  • profile: The name of a profile to use. If not given, then the default profile is used.

  • anonymous: Set anonymous credentials.

endpoint

Optional shorthand for the complete URL to use for the constructed client.

region

Optional shorthand for the AWS Region used in instantiating the client.
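
For instance, a client can be constructed with an explicit config, or with the shorthand arguments alone. The profile name and Region below are illustrative assumptions; substitute values from your own AWS configuration.

svc <- timestreamwrite(
  config = list(
    credentials = list(
      profile = "my-profile"  # assumed profile name from your AWS config
    ),
    region = "us-east-1"      # assumed Region
  )
)

# The shorthand arguments set the corresponding config fields directly:
svc <- timestreamwrite(region = "us-east-1")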

Value

A client for the service. You can call the service's operations using syntax like svc$operation(...), where svc is the name you've assigned to the client. The available operations are listed in the Operations section.
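
For example, once a client has been assigned to svc, an operation is invoked as an ordinary function call:

resp <- svc$list_databases()
resp$Databases  # databases returned in the response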

Service syntax

svc <- timestreamwrite(
  config = list(
    credentials = list(
      creds = list(
        access_key_id = "string",
        secret_access_key = "string",
        session_token = "string"
      ),
      profile = "string",
      anonymous = "logical"
    ),
    endpoint = "string",
    region = "string",
    close_connection = "logical",
    timeout = "numeric",
    s3_force_path_style = "logical",
    sts_regional_endpoint = "string"
  ),
  credentials = list(
    creds = list(
      access_key_id = "string",
      secret_access_key = "string",
      session_token = "string"
    ),
    profile = "string",
    anonymous = "logical"
  ),
  endpoint = "string",
  region = "string"
)

Operations

create_batch_load_task: Creates a new Timestream batch load task
create_database: Creates a new Timestream database
create_table: Adds a new table to an existing database in your account
delete_database: Deletes a given Timestream database
delete_table: Deletes a given Timestream table
describe_batch_load_task: Returns information about the batch load task, including configurations, mappings, progress, and other details
describe_database: Returns information about the database, including the database name, time that the database was created, and the total number of tables found within the database
describe_endpoints: Returns a list of available endpoints to make Timestream API calls against
describe_table: Returns information about the table, including the table name, database name, and retention duration of the memory store and the magnetic store
list_batch_load_tasks: Provides a list of batch load tasks, along with the name, status, the time until which the task is resumable, and other details
list_databases: Returns a list of your Timestream databases
list_tables: Provides a list of tables, along with the name, status, and retention properties of each table
list_tags_for_resource: Lists all tags on a Timestream resource
resume_batch_load_task: Resumes a batch load task
tag_resource: Associates a set of tags with a Timestream resource
untag_resource: Removes the association of tags from a Timestream resource
update_database: Modifies the KMS key for an existing database
update_table: Modifies the retention duration of the memory store and magnetic store for your Timestream table
write_records: Enables you to write your time-series data into Timestream
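
As a sketch of the core ingestion path, the call below writes a single record. The database name, table name, dimension, and measure values are illustrative assumptions; see write_records for the full request syntax.

svc$write_records(
  DatabaseName = "my_database",  # assumed database name
  TableName = "my_table",        # assumed table name
  Records = list(
    list(
      Dimensions = list(
        list(Name = "device_id", Value = "sensor-001")
      ),
      MeasureName = "temperature",
      MeasureValue = "21.5",
      MeasureValueType = "DOUBLE",
      Time = sprintf("%.0f", as.numeric(Sys.time()) * 1000),  # epoch milliseconds as a string
      TimeUnit = "MILLISECONDS"
    )
  )
)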

Examples

## Not run: 
svc <- timestreamwrite()
# Placeholder values; see the operation documentation for the full request syntax.
svc$create_batch_load_task(
  TargetDatabaseName = "my_database",
  TargetTableName = "my_table",
  DataSourceConfiguration = list(
    DataSourceS3Configuration = list(BucketName = "amzn-s3-demo-bucket"),
    DataFormat = "CSV"
  ),
  ReportConfiguration = list(ReportS3Configuration = list(BucketName = "amzn-s3-demo-bucket"))
)

## End(Not run)

