- Add a `get_service_sas` generic, with methods for `az_storage` and `storage_endpoint` objects, and a similar R6 method for the `az_storage` class. See `?sas` for more information.
- Fix `storage_load_rds` to handle compression correctly. In particular, `storage_load_rds` should now correctly load files saved with `storage_save_rds`.
- Fix a bug that caused `list_blobs` to fail when leases were present.
- Fix reading of semicolon-delimited files with `read_csv2`. This works around an issue introduced in readr 1.4.0 (#85, #86).
- Allow `resource_type="d"` when creating a service or user delegation SAS for blob storage.
- Add a new argument to `storage_endpoint`, to specify the service in question: blob, file, ADLS2, queue or table. This allows use of the generic endpoint function with URLs that don't fit the usual pattern where the service is part of the hostname, e.g. custom domain names, IP addresses, etc.
- Fix `list_adls_files` to allow for a missing last-modified-date column (#82).
- New functions for reading and writing data frames, including `storage_read_delim` (for tab-delimited files).
- Add the ability to verify the integrity of a file downloaded with `download_blob`. This requires that the file's `Content-MD5` property is set. See `?upload_blob` for more details.
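A minimal sketch of how this might be used. The `put_md5` and `check_md5` argument names are assumptions based on the description above, and the account URL and key are placeholders:

```r
library(AzureStor)

# placeholder account details
cont <- blob_container("https://myaccount.blob.core.windows.net/mycontainer",
                       key="mykey")

# assumed arguments: put_md5 stores the Content-MD5 property on upload,
# check_md5 verifies it on download
upload_blob(cont, src="data.csv", dest="data.csv", put_md5=TRUE)
download_blob(cont, src="data.csv", dest="data_copy.csv", check_md5=TRUE)
```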
- Add support for the Azurite and Azure SDK storage emulators. To use, call `blob_endpoint` or `queue_endpoint` (the latter from the AzureQstor package), passing the full URL including the account name: `blob_endpoint("http://127.0.0.1:10000/myaccount", key="mykey")`. The warning about an unrecognised endpoint can be ignored. See the linked pages for full details on how to authenticate to the emulator. Note that the Azure SDK emulator is no longer being actively developed; it's recommended to use Azurite.
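As a sketch, the emulator workflow above might look like this end to end. The account name and key are placeholders; Azurite's blob service listens on port 10000 by default:

```r
library(AzureStor)

# connect to a local Azurite instance; the unrecognised-endpoint warning is benign
bl <- blob_endpoint("http://127.0.0.1:10000/myaccount", key="mykey")

# create a container and transfer a file, exactly as with a real account
cont <- create_blob_container(bl, "testcontainer")
upload_blob(cont, src="data.csv", dest="data.csv")
list_blobs(cont)
```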
- Uploads now set the `Content-MD5` header in the HTTP requests, as an error-checking mechanism.
See `?blob` for more details.
- Fix a bug in `list_blobs`, thanks to @cantpitch.
- `list_storage_containers` and related methods will now check for a continuation marker to avoid returning prematurely (thanks to @StatKalli for reporting and providing a fix).
- Add a `get_account_sas` S3 generic, with methods for `az_storage` resource objects and client endpoints. The `az_storage$get_account_sas` R6 method now simply calls the S3 method.
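A hedged sketch of the new S3 generic in use; the exact argument names and defaults may differ from those shown (see `?sas` in the package for the authoritative interface), and the account details are placeholders:

```r
library(AzureStor)

endp <- storage_endpoint("https://myaccount.blob.core.windows.net", key="mykey")

# assumed arguments: read/list permissions, expiring in 7 days
sas <- get_account_sas(endp, permissions="rl", expiry=Sys.time() + 7*24*60*60)

# the SAS can then be used to create an endpoint without the account key
endp_sas <- storage_endpoint("https://myaccount.blob.core.windows.net", sas=sas)
```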
- Add a `get_user_delegation_sas` generic, with methods for `az_storage` objects and blob endpoints, along with similar R6 methods.
- Add a `revoke_user_delegation_keys` generic and methods to invalidate all user delegation keys for a storage account (and all SAS's generated with those keys).
See `?sas` for more information.
- `call_azcopy` now uses the `azure_storage_azcopy_silent` system option for its default verbosity, falling back to FALSE if this is unset.
- `sign_request` is now a generic, dispatching on the endpoint class. This allows for the fact that table storage uses a different signing scheme to the other storage services.
- Fix a bug in `list_adls_files` that could result in an error with large numbers of files.
- Add an argument to `call_storage_endpoint` that sets the number of seconds to wait for an API call.
- `call_azcopy` now uses the processx package under the hood, which is a powerful and flexible framework for running external programs from R. The interface is slightly changed: rather than taking the entire commandline as a single string, `call_azcopy` now expects each AzCopy commandline option to be an individual argument. See `?call_azcopy` for examples of the new interface.
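For example, a copy that might previously have been written as one commandline string would now be passed as separate arguments. The account URL is a placeholder, and AzCopy version 10 must be on the path:

```r
library(AzureStor)

# each AzCopy commandline option is a separate argument
call_azcopy("copy", "myfile.csv",
            "https://myaccount.blob.core.windows.net/container/myfile.csv")

# flags are likewise passed as their own arguments
call_azcopy("copy", "mydir",
            "https://myaccount.blob.core.windows.net/container",
            "--recursive")
```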
- Add a `storage_file_exists` generic to check for file existence, which dispatches to `blob_exists`, `azure_file_exists` and `adls_file_exists` for the individual storage types.
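A sketch of the generic in use, with placeholder account details; the same call should work unchanged for blob containers, file shares and ADLS filesystems:

```r
library(AzureStor)

cont <- blob_container("https://myaccount.blob.core.windows.net/mycontainer",
                       key="mykey")

# upload only if the file isn't already there
if (!storage_file_exists(cont, "data/input.csv"))
    storage_upload(cont, src="input.csv", dest="data/input.csv")
```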
- The authentication token is refreshed if necessary in `call_storage_endpoint`; this fixes a bug where the token could expire during a long transfer.
- In `list_adls_files`, check that a field exists before trying to modify it (works around the problem of a possibly inconsistent response from the endpoint).
- Support passing a shared access signature (SAS), as generated by the Azure Portal and Storage Explorer, to the client functions.
- The `az_storage$get_*_endpoint()` methods now support passing an AAD token for authentication.
- The default directory for `list_azure_files` is now the root, mirroring the behaviour for blobs and ADLSgen2.
- The output of `list_azure_files` now includes the full path as part of the file/directory name.
- Add a `recursive` argument to `delete_azure_dir` for recursing through subdirectories. As with file transfers, for Azure file storage this can be slow, so try to use a non-recursive solution where possible.
- Make the output of `list_azure_files` (and the other file listing functions) more consistent. The first 2 columns for a data frame output are now always `name` and `size`; the size of a directory is NA. The 3rd column for non-blobs is `isdir`, which is TRUE/FALSE depending on whether the object is a directory or file. Any additional columns remain storage type-specific.
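Under the layout described above, a listing can be filtered the same way regardless of storage type. A sketch with placeholder account details:

```r
library(AzureStor)

sh <- file_share("https://myaccount.file.core.windows.net/myshare", key="mykey")

df <- list_azure_files(sh, "/")

# first two columns are always name and size (NA for directories);
# isdir distinguishes directories from files
subdirs      <- df[df$isdir, "name"]
total_bytes  <- sum(df$size, na.rm=TRUE)
```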
- Add `get_storage_metadata` and `set_storage_metadata` methods for managing user-specified properties (metadata) for objects.
- Standard object properties are now retrieved with `get_storage_properties`, rather than having specific functions for blobs, files and directories.
- Calling a service-specific endpoint function (`blob_endpoint`, `file_endpoint` or `adls_endpoint`) with an invalid URL will now warn, instead of throwing an error. This enables using tools like Azurite, which use a local address as the endpoint. Calling `storage_endpoint` with an invalid URL will still throw an error, as the function has no way of telling which storage service is required.
- Export `call_storage_endpoint` to allow direct calls to the storage account endpoint.
- A `NULL` download destination will now actually return a raw vector, as opposed to a connection, matching what the documentation says.
- Confirmation prompts now use `utils::askYesNo`; as a side-effect, Windows users who are using RGUI.exe will see a popup dialog box instead of a message in the terminal.
- Add a `copy_url_to_blob` function, for directly copying a HTTP[S] URL to blob storage. The corresponding generic is `copy_url_to_storage`, with a method for blob containers (only).
- Failed API calls are now retried; set the number of retries with `options(azure_storage_retries=N)`, where N >= 0. Setting this option to zero disables retrying.
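For example:

```r
# retry failed storage API calls up to 3 times
options(azure_storage_retries=3)

# disable retrying altogether
options(azure_storage_retries=0)
```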
- Fix `download_from_url` bugs introduced in the last update.
- Support uploading from an in-memory `textConnection` object. For downloading, if the destination is `NULL`, the downloaded data is returned as a raw vector, or if it is a `rawConnection`, in the connection object. See the examples in the documentation.
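A sketch of the connection-based interface, with placeholder account details:

```r
library(AzureStor)

cont <- blob_container("https://myaccount.blob.core.windows.net/mycontainer",
                       key="mykey")

# upload from an in-memory text connection
con <- textConnection(c("x,y", "1,2", "3,4"))
upload_blob(cont, src=con, dest="data.csv")

# dest=NULL returns the downloaded data as a raw vector
rawvec <- download_blob(cont, src="data.csv", dest=NULL)

# a rawConnection destination leaves the data in the connection object
rcon <- rawConnection(raw(0), open="r+")
download_blob(cont, src="data.csv", dest=rcon)
```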
- Specify `use_azcopy=TRUE` in any upload or download function to call AzCopy, rather than relying on internal R code. The `call_azcopy` function also allows you to run AzCopy with arbitrary arguments. Requires AzCopy version 10.
- New storage-type-agnostic generics, including `list_storage_containers` for managing containers (blob containers, file shares, ADLSgen2 filesystems); `storage_multidownload` for file transfers; and `delete_storage_file` for managing objects within a container.
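A sketch showing how the generics make code storage-type-agnostic (account details are placeholders); swapping the blob endpoint for a file or ADLS endpoint should leave the rest unchanged:

```r
library(AzureStor)

endp <- blob_endpoint("https://myaccount.blob.core.windows.net", key="mykey")

# the same generics work for blob containers, file shares and ADLS filesystems
cont <- storage_container(endp, "mycontainer")
storage_multidownload(cont, src="*.csv", dest="downloads")
delete_storage_file(cont, "old/obsolete.csv", confirm=FALSE)
```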
- Increase the default upload block size for `upload_azure_file` to 4MB, the maximum permitted by the API (#5).