Batch transactions for table storage
Usage

create_table_operation(
    endpoint,
    path,
    options = list(),
    headers = list(),
    body = NULL,
    metadata = c("none", "minimal", "full"),
    http_verb = c("GET", "PUT", "POST", "PATCH", "DELETE", "HEAD")
)

create_batch_transaction(endpoint, operations)

do_batch_transaction(transaction, ...)

## S3 method for class 'batch_transaction'
do_batch_transaction(
    transaction,
    batch_status_handler = c("warn", "stop", "message", "pass"),
    num_retries = 10,
    ...
)
Arguments

endpoint: A table storage endpoint, of class table_endpoint.

path: The path component of the operation.

options: A named list giving the query parameters for the operation.

headers: A named list giving any additional HTTP headers to send to the host. AzureCosmosR will handle authentication details, so you don't have to specify these here.

body: The request body for a PUT/POST/PATCH operation.

metadata: The level of OData metadata to include in the response.

http_verb: The HTTP verb (method) for the operation.

operations: A list of individual table operation objects, each of class table_operation.

transaction: For do_batch_transaction, a batch transaction object, of class batch_transaction.

...: Arguments passed to lower-level functions.

batch_status_handler: For do_batch_transaction, what to do if one or more of the operations in the batch fails: signal a warning, throw an error, emit a message, or pass (ignore the failure).

num_retries: The number of times to retry the call, if the response is HTTP error 429 (too many requests). The Cosmos DB endpoint tends to be aggressive at rate-limiting requests, to maintain the desired level of latency. This will generally not affect calls to an endpoint provided by a storage account.
Details

Table storage supports batch transactions on entities that are in the same table and belong to the same partition group. Batch transactions are also known as entity group transactions.
You can use create_table_operation to produce an object corresponding to a single table storage operation, such as inserting, deleting or updating an entity. Multiple such objects can then be passed to create_batch_transaction, which bundles them into a single atomic transaction. Call do_batch_transaction to send the transaction to the endpoint.
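The three steps above can be sketched as follows. This is a minimal illustration, not a complete program: it assumes an existing endpoint object endp and a table named "mytable", and the entity properties shown are made up for the example.

```r
# 1. one operation per entity to insert (POST = insert)
op <- create_table_operation(endp, "mytable",
    body = list(PartitionKey = "p1", RowKey = "r1", value = 42),
    http_verb = "POST")

# 2. bundle one or more operations into a single atomic transaction
trans <- create_batch_transaction(endp, list(op))

# 3. send the transaction to the endpoint
do_batch_transaction(trans)
```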
Note that batch transactions are subject to some limitations imposed by the REST API:

- All entities subject to operations as part of the transaction must have the same PartitionKey value.

- An entity can appear only once in the transaction, and only one operation may be performed against it.

- The transaction can include at most 100 entities, and its total payload may be no more than 4 MB in size.
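Because of the 100-entity limit, a large set of operations must be split across several transactions. The helper below is a hypothetical sketch of such chunking (it is not part of the package; import_table_entities performs this kind of splitting for you):

```r
# Hypothetical helper: split a list of operations into chunks of at most
# `chunk_size` entities, sending each chunk as its own batch transaction.
# All operations in a chunk must still share the same PartitionKey.
do_chunked_transactions <- function(endpoint, ops, chunk_size = 100) {
    chunks <- split(ops, ceiling(seq_along(ops) / chunk_size))
    lapply(chunks, function(chunk) {
        trans <- create_batch_transaction(endpoint, chunk)
        do_batch_transaction(trans)
    })
}
```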
Value

create_table_operation returns an object of class table_operation.

Assuming the batch transaction did not fail due to rate-limiting, do_batch_transaction returns a list of objects of class table_operation_response, representing the results of each individual operation. Each object contains elements named status, headers and body containing the respective parts of the response. Note that the number of returned objects may be smaller than the number of operations in the batch, if the transaction failed.
See Also

import_table_entities, which uses (multiple) batch transactions under the hood

Performing entity group transactions
Examples

## Not run:

endp <- table_endpoint("https://mycosmosdb.table.cosmos.azure.com:443", key="mykey")
tab <- create_storage_table(endp, "mytable")

## a simple batch insert
ir <- subset(iris, Species == "setosa")

# property names must be valid C# variable names
names(ir) <- sub("\\.", "_", names(ir))

# create the PartitionKey and RowKey properties
ir$PartitionKey <- ir$Species
ir$RowKey <- sprintf("%03d", seq_len(nrow(ir)))

# generate the array of insert operations: 1 per row
ops <- lapply(seq_len(nrow(ir)), function(i)
    create_table_operation(endp, "mytable", body=ir[i, ], http_verb="POST"))

# create a batch transaction and send it to the endpoint
bat <- create_batch_transaction(endp, ops)
do_batch_transaction(bat)

## End(Not run)