list_azure_files — R Documentation
Upload, download, or delete a file; list files in a directory; create or delete directories; check file existence.
list_azure_files(share, dir = "/", info = c("all", "name"), prefix = NULL,
  recursive = FALSE)

upload_azure_file(share, src, dest = basename(src), create_dir = FALSE,
  blocksize = 2^22, put_md5 = FALSE, use_azcopy = FALSE)

multiupload_azure_file(share, src, dest, recursive = FALSE,
  create_dir = recursive, blocksize = 2^22, put_md5 = FALSE,
  use_azcopy = FALSE, max_concurrent_transfers = 10)

download_azure_file(share, src, dest = basename(src), blocksize = 2^22,
  overwrite = FALSE, check_md5 = FALSE, use_azcopy = FALSE)

multidownload_azure_file(share, src, dest, recursive = FALSE,
  blocksize = 2^22, overwrite = FALSE, check_md5 = FALSE,
  use_azcopy = FALSE, max_concurrent_transfers = 10)

delete_azure_file(share, file, confirm = TRUE)

create_azure_dir(share, dir, recursive = FALSE)

delete_azure_dir(share, dir, recursive = FALSE, confirm = TRUE)

azure_file_exists(share, file)

azure_dir_exists(share, dir)
share: A file share object.

dir, file: A string naming a directory or file respectively.

info: Whether to return names only, or all information in a directory listing.

prefix: For list_azure_files, return only files and directories whose names begin with this prefix.

recursive: For the multiupload/download functions, whether to recursively transfer files in subdirectories. For list_azure_files, whether to include the contents of subdirectories in the listing. For create_azure_dir and delete_azure_dir, whether to recursively create or delete intermediate directories.

src, dest: The source and destination files for uploading and downloading. See 'Details' below.

create_dir: For the uploading functions, whether to create the destination directory if it doesn't exist. For the file storage API this check can be slow, hence it is optional.

blocksize: The number of bytes to upload/download per HTTP(S) request.

put_md5: For uploading, whether to compute the MD5 hash of the file(s). This will be stored as part of the file's properties.

use_azcopy: Whether to use the AzCopy utility from Microsoft to do the transfer, rather than doing it in R.

max_concurrent_transfers: For the multiupload/download functions, the maximum number of concurrent file transfers to perform.

overwrite: When downloading, whether to overwrite an existing destination file.

check_md5: For downloading, whether to verify the MD5 hash of the downloaded file(s). This requires that the file's Content-MD5 property is set.

confirm: Whether to ask for confirmation on deleting a file or directory.
upload_azure_file and download_azure_file are the workhorse file transfer functions for file storage. They each take as inputs a single filename as the source for uploading/downloading, and a single filename as the destination. Alternatively, for uploading, src can be a textConnection or rawConnection object; and for downloading, dest can be NULL or a rawConnection object. If dest is NULL, the downloaded data is returned as a raw vector; if it is a raw connection, the data will be placed into the connection. See the examples below.
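As a minimal sketch of the connection-based forms (the share endpoint, key, and file names here are placeholders; this assumes a reachable Azure file share):

```r
library(AzureStor)

share <- file_share("https://mystorage.file.core.windows.net/myshare",
                    key = "access_key")

# upload in-memory text via a text connection
con <- textConnection(c("hello", "world"))
upload_azure_file(share, con, "hello.txt")

# download back into memory: dest=NULL returns a raw vector
rawvec <- download_azure_file(share, "hello.txt", dest = NULL)
rawToChar(rawvec)
```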
multiupload_azure_file and multidownload_azure_file are functions for uploading and downloading multiple files at once. They parallelise file transfers by using the background process pool provided by AzureRMR, which can lead to significant efficiency gains when transferring many small files. There are two ways to specify the source and destination for these functions:

- Both src and dest can be vectors naming the individual source and destination pathnames.

- The src argument can be a wildcard pattern expanding to one or more files, with dest naming a destination directory. In this case, if recursive is true, the file transfer will replicate the source directory structure at the destination.
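A sketch of the two calling conventions, assuming share is an existing file share object and the local paths exist (all file names are placeholders):

```r
# 1. vectors of individual source and destination pathnames
src  <- c("jan.csv", "feb.csv", "mar.csv")
dest <- file.path("/monthly", src)
multiupload_azure_file(share, src, dest)

# 2. a wildcard source with a destination directory; recursive=TRUE
#    replicates the local directory structure under /backup
multiupload_azure_file(share, "~/data/*.csv", "/backup", recursive = TRUE)
```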
upload_azure_file and download_azure_file can display a progress bar to track the file transfer. You can control whether to display this with options(azure_storage_progress_bar=TRUE|FALSE); the default is TRUE.
azure_file_exists and azure_dir_exists test for the existence of a file and directory, respectively.
upload_azure_file and download_azure_file have the ability to use the AzCopy commandline utility to transfer files, instead of native R code. This can be useful if you want to take advantage of AzCopy's logging and recovery features; it may also be faster when transferring a very large number of small files. To enable this, set the use_azcopy argument to TRUE.

Note that AzCopy only supports SAS and AAD (OAuth) tokens as authentication methods. AzCopy also expects a single filename or wildcard spec as its source/destination argument, not a vector of filenames or a connection.
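For illustration, the same upload routed through AzCopy; this assumes azcopy is installed and on the path, and that share was authenticated with a SAS or AAD token rather than an access key (file names are placeholders):

```r
upload_azure_file(share, "~/bigfile.zip", "/backup/bigfile.zip",
                  use_azcopy = TRUE)
```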
For list_azure_files, if info="name", a vector of file/directory names. If info="all", a data frame giving the file size and whether each object is a file or directory.

For download_azure_file, if dest=NULL, the contents of the downloaded file as a raw vector.

For azure_file_exists, either TRUE or FALSE.
file_share, az_storage, storage_download, call_azcopy
## Not run:

share <- file_share("https://mystorage.file.core.windows.net/myshare", key="access_key")

list_azure_files(share, "/")
list_azure_files(share, "/", recursive=TRUE)

create_azure_dir(share, "/newdir")

upload_azure_file(share, "~/bigfile.zip", dest="/newdir/bigfile.zip")
download_azure_file(share, "/newdir/bigfile.zip", dest="~/bigfile_downloaded.zip")

delete_azure_file(share, "/newdir/bigfile.zip")
delete_azure_dir(share, "/newdir")

# uploading/downloading multiple files at once
multiupload_azure_file(share, "/data/logfiles/*.zip")
multidownload_azure_file(share, "/monthly/jan*.*", "/data/january")

# you can also pass a vector of file/pathnames as the source and destination
src <- c("file1.csv", "file2.csv", "file3.csv")
dest <- paste0("uploaded_", src)
multiupload_azure_file(share, src, dest)

# uploading serialized R objects via connections
json <- jsonlite::toJSON(iris, pretty=TRUE, auto_unbox=TRUE)
con <- textConnection(json)
upload_azure_file(share, con, "iris.json")

rds <- serialize(iris, NULL)
con <- rawConnection(rds)
upload_azure_file(share, con, "iris.rds")

# downloading files into memory: as a raw vector, and via a connection
rawvec <- download_azure_file(share, "iris.json", NULL)
rawToChar(rawvec)

con <- rawConnection(raw(0), "r+")
download_azure_file(share, "iris.rds", con)
unserialize(con)

## End(Not run)