Description

Upload up to 5TB to Google Cloud Storage.
Usage

gcs_upload(
  file,
  bucket = gcs_get_global_bucket(),
  type = NULL,
  name = deparse(substitute(file)),
  object_function = NULL,
  object_metadata = NULL,
  predefinedAcl = c("private", "bucketLevel", "authenticatedRead",
    "bucketOwnerFullControl", "bucketOwnerRead", "projectPrivate", "publicRead",
    "default"),
  upload_type = c("simple", "resumable")
)
gcs_upload_set_limit(upload_limit = 5000000L)

Arguments

| file            | data.frame, list, R object or filepath (character) of the file to upload |
| bucket          | Name of the bucket you are uploading to |
| type            | MIME type; guessed from the file extension if NULL |
| name            | What to call the file once uploaded. Default is the filepath |
| object_function | If not NULL, a function(input, output) applied to the R object supplied in file before upload |
| object_metadata | Optional metadata for the object, created via gcs_metadata_object |
| predefinedAcl   | Specify user access to the object. Default is 'private'. Set to 'bucketLevel' for buckets with bucket-level access enabled |
| upload_type     | Override the automatic decision on upload type |
| upload_limit    | Upload limit in bytes |

Details

When using object_function, it expects a function with two arguments:

  input:  the object you supply in file to write from
  output: the filename you write to
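
For example, a minimal sketch of such a function, here using saveRDS (the function name f and the object name are illustrative):

f <- function(input, output) {
  ## write the R object passed as 'input' to the temporary file 'output'
  saveRDS(input, file = output)
}
gcs_upload(mtcars, name = "mtcars.rds", object_function = f)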
By default the upload_type will be 'simple' if the object is under 5MB and 'resumable' if over 5MB. Use gcs_upload_set_limit to modify this boundary - you may want it lower on slow connections and higher on fast ones.
'Multipart' upload is used if you provide object_metadata.
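
As a sketch, attaching metadata created with gcs_metadata_object (the metadata keys here are illustrative):

## custom metadata triggers a multipart upload
meta <- gcs_metadata_object("mtcars.csv",
                            metadata = list(source = "example-upload"))
gcs_upload(mtcars, name = "mtcars.csv", object_metadata = meta)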
If object_function is NULL and file is not a character filepath,
the defaults are:

  file's class is data.frame: write.csv
  file's class is list: toJSON
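
For instance, under these defaults (the object names are illustrative):

## a data.frame is written out with write.csv before upload
gcs_upload(mtcars, name = "mtcars.csv")
## a list is serialised to JSON before upload
gcs_upload(list(a = 1, b = letters[1:3]), name = "my_list.json")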
If object_function is not NULL and file is not a character filepath,
then object_function will be applied to the R object specified
in file before upload. You may also want to use name to ensure the correct
file extension is used, e.g. name = 'myobject.feather'.
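
A sketch of this, assuming the feather package is installed:

f <- function(input, output) feather::write_feather(input, output)
gcs_upload(mtcars, name = "myobject.feather", object_function = f)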
If the file or name argument contains folders, e.g. /data/file.csv, then
the file will be uploaded with the same folder structure, e.g. in a /data/ folder.
Use name to override this.
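
To illustrate (the filepath is hypothetical):

## uploads to a data/ folder within the bucket
gcs_upload("data/file.csv")
## use name to upload to the bucket root instead
gcs_upload("data/file.csv", name = "file.csv")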

Value

If successful, a metadata object.

Scopes

Requires one of the following scopes:

  https://www.googleapis.com/auth/devstorage.read_write
  https://www.googleapis.com/auth/devstorage.full_control
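
One way to request a scope before authenticating is via the googleAuthR option, sketched here:

## sketch: select the read/write scope, then authenticate
options(googleAuthR.scopes.selected =
          "https://www.googleapis.com/auth/devstorage.read_write")
gcs_auth()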

Examples

## Not run: 
## set the global bucket so you don't need to keep supplying it in future calls
gcs_global_bucket("my-bucket")
## by default will convert dataframes to csv
gcs_upload(mtcars)
## mtcars has been renamed to mtcars.csv
gcs_list_objects()
## to specify the name, use the name argument
gcs_upload(mtcars, name = "my_mtcars.csv")
## when looping, it's best to specify the name, else it will take
## the deparsed function call, e.g. X[[i]]
my_files <- list.files("my_uploads", full.names = TRUE)
lapply(my_files, function(x) gcs_upload(x, name = basename(x)))
## you can supply your own function to transform R objects before upload
f <- function(input, output){
  write.csv2(input, file = output)
}
gcs_upload(mtcars, name = "mtcars_csv2.csv", object_function = f)
# upload to a bucket with bucket level ACL set
gcs_upload(mtcars, predefinedAcl = "bucketLevel")
# modify boundary between simple and resumable uploads
# default 5000000L is 5MB
gcs_upload_set_limit(1000000L)
## End(Not run)