list_blobs (R Documentation)
Upload, download, or delete a blob; list blobs in a container; create or delete directories; check blob availability.
list_blobs(container, dir = "/", info = c("partial", "name", "all"),
  prefix = NULL, recursive = TRUE)

upload_blob(container, src, dest = basename(src),
  type = c("BlockBlob", "AppendBlob"),
  blocksize = if (type == "BlockBlob") 2^24 else 2^22,
  lease = NULL, put_md5 = FALSE, append = FALSE, use_azcopy = FALSE)

multiupload_blob(container, src, dest, recursive = FALSE,
  type = c("BlockBlob", "AppendBlob"),
  blocksize = if (type == "BlockBlob") 2^24 else 2^22,
  lease = NULL, put_md5 = FALSE, append = FALSE, use_azcopy = FALSE,
  max_concurrent_transfers = 10)

download_blob(container, src, dest = basename(src), blocksize = 2^24,
  overwrite = FALSE, lease = NULL, check_md5 = FALSE, use_azcopy = FALSE,
  snapshot = NULL, version = NULL)

multidownload_blob(container, src, dest, recursive = FALSE, blocksize = 2^24,
  overwrite = FALSE, lease = NULL, check_md5 = FALSE, use_azcopy = FALSE,
  max_concurrent_transfers = 10)

delete_blob(container, blob, confirm = TRUE)

create_blob_dir(container, dir)

delete_blob_dir(container, dir, recursive = FALSE, confirm = TRUE)

blob_exists(container, blob)

blob_dir_exists(container, dir)

copy_url_to_blob(container, src, dest, lease = NULL, async = FALSE,
  auth_header = NULL)

multicopy_url_to_blob(container, src, dest, lease = NULL, async = FALSE,
  max_concurrent_transfers = 10, auth_header = NULL)
container: A blob container object.

dir: For list_blobs, the directory to list. For create_blob_dir, delete_blob_dir and blob_dir_exists, the directory to create, delete or check.

info: For list_blobs, the level of detail to return for each blob: blob names only ("name"), the name along with typical file information ("partial", the default), or all available properties ("all").

prefix: For list_blobs, an optional prefix: only blobs whose names begin with this prefix are returned.

recursive: For the multiupload/download functions, whether to recursively transfer files in subdirectories. For list_blobs, whether to include the contents of subdirectories in the listing. For delete_blob_dir, whether to delete a non-empty directory along with its contents.

src, dest: The source and destination files for uploading and downloading. See 'Details' below.

type: When uploading, the type of blob to create. Currently only block and append blobs are supported.

blocksize: The number of bytes to upload/download per HTTP(S) request.

lease: The lease for a blob, if present.

put_md5: For uploading, whether to compute the MD5 hash of the blob(s). This will be stored as part of the blob's properties. Only used for block blobs.

append: When uploading, whether to append the uploaded data to the destination blob. Only has an effect if type = "AppendBlob".

use_azcopy: Whether to use the AzCopy utility from Microsoft to do the transfer, rather than doing it in R.

max_concurrent_transfers: For multiupload_blob and multidownload_blob, the maximum number of concurrent file transfers.

overwrite: When downloading, whether to overwrite an existing destination file.

check_md5: For downloading, whether to verify the MD5 hash of the downloaded blob(s). This requires that the blob's Content-MD5 property is set.

snapshot, version: For download_blob, optional identifiers for downloading a specific snapshot or version of the blob rather than the current base blob.

blob: A string naming a blob.

confirm: Whether to ask for confirmation on deleting a blob.

async: For copy_url_to_blob and multicopy_url_to_blob, whether the copy operation should be performed asynchronously (in the background).

auth_header: For copy_url_to_blob and multicopy_url_to_blob, an optional Authorization HTTP header to send to the source URL.
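As an illustrative sketch of how the listing arguments combine (not run; cont is assumed to be an existing blob container object, and the "logs/" prefix is a hypothetical name):

# return blob names only
list_blobs(cont, info="name")
# return all available properties, restricted to blobs whose names begin with "logs/"
list_blobs(cont, info="all", prefix="logs/")
# list only the top level of the container
list_blobs(cont, recursive=FALSE)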
upload_blob and download_blob are the workhorse file transfer functions for blobs. They each take as inputs a single filename as the source for uploading/downloading, and a single filename as the destination. Alternatively, for uploading, src can be a textConnection or rawConnection object; and for downloading, dest can be NULL or a rawConnection object. If dest is NULL, the downloaded data is returned as a raw vector, and if a raw connection, it will be placed into the connection. See the examples below.
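As a minimal sketch of an in-memory round trip (not run; cont is assumed to be an existing blob container object, and "mtcars.csv" is a hypothetical blob name):

# upload a data frame as CSV via a text connection, with no temporary file
csv <- capture.output(write.csv(mtcars))
upload_blob(cont, textConnection(csv), "mtcars.csv")

# dest=NULL returns the blob contents as a raw vector
raw_csv <- download_blob(cont, "mtcars.csv", NULL)
head(read.csv(text=rawToChar(raw_csv)))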
multiupload_blob and multidownload_blob are functions for uploading and downloading multiple files at once. They parallelise file transfers by using the background process pool provided by AzureRMR, which can lead to significant efficiency gains when transferring many small files. There are two ways to specify the source and destination for these functions:

- Both src and dest can be vectors naming the individual source and destination pathnames.
- The src argument can be a wildcard pattern expanding to one or more files, with dest naming a destination directory. In this case, if recursive is true, the file transfer will replicate the source directory structure at the destination.
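A brief sketch of the two calling conventions (not run; cont and the pathnames shown are placeholders):

# explicit vectors of source and destination pathnames
multiupload_blob(cont, c("a.csv", "b.csv"), c("data/a.csv", "data/b.csv"))

# wildcard source with a destination directory; recursive=TRUE replicates
# the source directory structure under "project"
multiupload_blob(cont, "~/project/*", "project", recursive=TRUE)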
upload_blob and download_blob can display a progress bar to track the file transfer. You can control whether to display this with options(azure_storage_progress_bar=TRUE|FALSE); the default is TRUE.
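For example, to suppress the progress bar for the rest of the session (not run; cont and the blob name are placeholders):

options(azure_storage_progress_bar=FALSE)
download_blob(cont, "bigfile.zip", "~/bigfile.zip")  # no progress bar is shown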
multiupload_blob can upload files either as all block blobs or all append blobs, but not a mix of both.
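For instance, a hypothetical call uploading several log files as append blobs in one operation (not run; cont and the paths are placeholders):

multiupload_blob(cont, "logs/*.log", "applogs", type="AppendBlob")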
blob_exists and blob_dir_exists test for the existence of a blob and directory, respectively.

delete_blob deletes a blob, and delete_blob_dir deletes all blobs in a directory (possibly recursively). This will also delete any snapshots for the blob(s) involved.
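A short sketch combining these (not run; cont, the blob and the directory names are placeholders):

if (blob_exists(cont, "bigfile.zip"))
    delete_blob(cont, "bigfile.zip", confirm=FALSE)

# remove a directory and everything underneath it, without prompting
delete_blob_dir(cont, "old_data", recursive=TRUE, confirm=FALSE)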
upload_blob and download_blob have the ability to use the AzCopy commandline utility to transfer files, instead of native R code. This can be useful if you want to take advantage of AzCopy's logging and recovery features; it may also be faster when transferring a very large number of small files. To enable this, set the use_azcopy argument to TRUE.

The following points should be noted about AzCopy:

- It only supports SAS and AAD (OAuth) tokens as authentication methods. AzCopy also expects a single filename or wildcard spec as its source/destination argument, not a vector of filenames or a connection.
- Currently, it does not support appending data to existing blobs.
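To route a transfer through AzCopy rather than native R code, a sketch (not run; it assumes AzCopy is installed and that cont was authenticated with a SAS or AAD token, as AzCopy requires):

upload_blob(cont, "~/bigfile.zip", "bigfile.zip", use_azcopy=TRUE)
multidownload_blob(cont, "jan*.*", "/data/january", use_azcopy=TRUE)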
Blob storage does not have true directories, instead using filenames containing a separator character (typically '/') to mimic a directory structure. This has some consequences:

- The isdir column in the data frame output of list_blobs is a best guess as to whether an object represents a file or directory, and may not always be correct. Currently, list_blobs assumes that any object with a file size of zero is a directory.
- Zero-length files can cause problems for the blob storage service as a whole (not just AzureStor). Try to avoid uploading such files.
- create_blob_dir and delete_blob_dir are guaranteed to function as expected only for accounts with hierarchical namespaces enabled. When this feature is disabled, directories do not exist as objects in their own right: to create a directory, simply upload a blob to that directory. To delete a directory, delete all the blobs within it; as far as the blob storage service is concerned, the directory then no longer exists. (See the sketch after this list.)
- Similarly, the output of list_blobs(recursive=TRUE) can vary based on whether the storage account has hierarchical namespaces enabled.
- blob_exists will return FALSE for a directory when the storage account does not have hierarchical namespaces enabled.
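A sketch of the directory helpers (not run; cont and the directory names are placeholders):

# on an account with hierarchical namespaces enabled
create_blob_dir(cont, "newdir")
blob_dir_exists(cont, "newdir")    # TRUE

# on a flat-namespace account, 'creating' a directory is just uploading into it
upload_blob(cont, "~/report.csv", "newdir/report.csv")
delete_blob_dir(cont, "newdir", recursive=TRUE)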
copy_url_to_blob transfers the contents of the file at the specified HTTP[S] URL directly to blob storage, without requiring a temporary local copy to be made. multicopy_url_to_blob does the same, for multiple URLs at once. These functions have a current file size limit of 256MB.
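A sketch of copying from URLs (not run; cont, the URLs and the destination names are placeholders):

urls <- c("https://example.com/file1.csv", "https://example.com/file2.csv")
multicopy_url_to_blob(cont, urls, basename(urls))

# async=TRUE starts the copy and returns without waiting for it to finish
copy_url_to_blob(cont, "https://example.com/bigfile.zip", "bigfile.zip", async=TRUE)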
As return values: list_blobs returns details on the blobs in the container. download_blob with dest=NULL returns the contents of the downloaded blob as a raw vector. blob_exists returns a flag indicating whether the blob exists.
See also: blob_container, az_storage, storage_download, call_azcopy, list_blob_snapshots, list_blob_versions.

External references: AzCopy version 10 on GitHub; Guide to the different blob types.
## Not run:
cont <- blob_container("https://mystorage.blob.core.windows.net/mycontainer", key="access_key")

list_blobs(cont)

upload_blob(cont, "~/bigfile.zip", dest="bigfile.zip")
download_blob(cont, "bigfile.zip", dest="~/bigfile_downloaded.zip")
delete_blob(cont, "bigfile.zip")

# uploading/downloading multiple files at once
multiupload_blob(cont, "/data/logfiles/*.zip", "/uploaded_data")
multiupload_blob(cont, "myproj/*")  # no dest directory uploads to root
multidownload_blob(cont, "jan*.*", "/data/january")

# append blob: concatenating multiple files into one
upload_blob(cont, "logfile1", "logfile", type="AppendBlob", append=FALSE)
upload_blob(cont, "logfile2", "logfile", type="AppendBlob", append=TRUE)
upload_blob(cont, "logfile3", "logfile", type="AppendBlob", append=TRUE)

# you can also pass a vector of file/pathnames as the source and destination
src <- c("file1.csv", "file2.csv", "file3.csv")
dest <- paste0("uploaded_", src)
multiupload_blob(cont, src, dest)

# uploading serialized R objects via connections
json <- jsonlite::toJSON(iris, pretty=TRUE, auto_unbox=TRUE)
con <- textConnection(json)
upload_blob(cont, con, "iris.json")

rds <- serialize(iris, NULL)
con <- rawConnection(rds)
upload_blob(cont, con, "iris.rds")

# downloading files into memory: as a raw vector, and via a connection
rawvec <- download_blob(cont, "iris.json", NULL)
rawToChar(rawvec)

con <- rawConnection(raw(0), "r+")
download_blob(cont, "iris.rds", con)
unserialize(con)

# copy from a public URL: Iris data from UCI machine learning repository
copy_url_to_blob(cont,
    "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data",
    "iris.csv")

## End(Not run)