query_exec | R Documentation
query_exec() is deprecated: please use bq_project_query() instead.
query_exec(
query,
project,
destination_table = NULL,
default_dataset = NULL,
page_size = 10000,
max_pages = 10,
warn = TRUE,
create_disposition = "CREATE_IF_NEEDED",
write_disposition = "WRITE_EMPTY",
use_legacy_sql = TRUE,
quiet = getOption("bigrquery.quiet"),
...
)
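As a sketch of the replacement workflow (assuming the current bigrquery API, where bq_project_query() runs the query and bq_table_download() retrieves the results; the project ID is a placeholder):

```r
library(bigrquery)

# Standard SQL reference to the same public natality sample table.
sql <- "SELECT year, month, day, weight_pounds
        FROM `bigquery-public-data.samples.natality` LIMIT 5"

# bq_project_query() runs the query, billed to the given project, and
# returns a bq_table object pointing at the (temporary) destination table.
tb <- bq_project_query("my-project-id", sql)

# bq_table_download() fetches the results into a data frame.
df <- bq_table_download(tb)
```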
query |
SQL query string.
project |
Project ID to use for the query; this is the project that will be billed.
destination_table |
(optional) destination table for large queries, either as a string in the format used by BigQuery ("project:dataset.table"), or as a list with project_id, dataset_id, and table_id components.
default_dataset |
(optional) default dataset for any table references in query, either as a string in the format used by BigQuery ("project:dataset"), or as a list with project_id and dataset_id components.
page_size |
Number of items per page.
max_pages |
Maximum number of pages to retrieve. Use Inf to retrieve all pages.
warn |
If TRUE, warn when there are unretrieved pages.
create_disposition |
Behavior for table creation. Defaults to "CREATE_IF_NEEDED"; the only other supported value is "CREATE_NEVER". See the API documentation for more information.
write_disposition |
Behavior for writing data. Defaults to "WRITE_EMPTY", which fails if the destination table is not empty; other possible values are "WRITE_TRUNCATE" and "WRITE_APPEND". See the API documentation for more information.
use_legacy_sql |
(optional) Set to FALSE to enable BigQuery's standard SQL.
quiet |
If FALSE, prints informative status messages.
... |
Additional arguments passed on to the underlying API call. snake_case names are automatically converted to camelCase.
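For example, the two disposition arguments together control what happens when the destination table does not yet exist or already holds data; a sketch (project and table names are placeholders):

```r
# Append query results to an existing table instead of failing when it
# is non-empty.
query_exec(
  sql,
  project = "my-project-id",
  destination_table = "my_dataset.results",
  create_disposition = "CREATE_IF_NEEDED", # create the table if absent (default)
  write_disposition = "WRITE_APPEND"       # add rows to any existing data
)
```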
Google documentation describing asynchronous queries: https://cloud.google.com/bigquery/docs/running-queries
Google documentation for handling large results: https://cloud.google.com/bigquery/docs/writing-results
## Not run:
project <- bq_test_project() # put your project ID here
sql <- "SELECT year, month, day, weight_pounds FROM [publicdata:samples.natality] LIMIT 5"
query_exec(sql, project = project)
# Put the results in a table you own (which uses project by default)
query_exec(sql, project = project, destination_table = "my_dataset.results")
# Use a default dataset for the query
sql <- "SELECT year, month, day, weight_pounds FROM natality LIMIT 5"
query_exec(sql, project = project, default_dataset = "publicdata:samples")
## End(Not run)