The goal of futureScenario is to ...
You can install the development version from GitHub with:
# install.packages("devtools") devtools::install_github("Shaunson26/futureScenario")
```r
library(futureScenario)
```
A flattened and minified GNAF address dataset with 2016 meshblock boundaries is included; see `?gnaf` for details.
```r
head(gnaf)
```
`return_address_coords()` is used to query `gnaf`, currently with exact matches only. If any query fields are missing, a wildcard is used for those fields.
```r
# An exact match
return_address_coords(street_number = 2,
                      street_name = 'IVY STREET',
                      locality = 'DARLINGTON',
                      postcode = '2008')

# A missing street_name returns all street names matching the rest of the query
return_address_coords(street_number = 2,
                      locality = 'DARLINGTON',
                      postcode = '2008')
```
https://www.climatechangeinaustralia.gov.au/en/obtain-data/download-datasets/
Huge raster data are available using the NetcdfSubset REST API. Datasets are split by:

- variable
- year range
- model
- RCP

Within each dataset, data selection parameters include:

- the coordinates: either a `lat`/`lon` point or a bounding box `bbox`
- the dates: `date_start`, `date_end` and `date_step`
The object `csiro_catalog` (`R/csiro_catalog.R`) is a named list that helps select the dataset components for URL building:
- `csiro_catalog$variable`
- `csiro_catalog$year_range`
- `csiro_catalog$model`
- `csiro_catalog$rcp`
- `csiro_catalog$filename`
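Since `csiro_catalog` is a plain named list, the options available under each entry could be inspected with base R, for example (a minimal sketch; the entry names shown are those used elsewhere in this README):

```r
# List the top-level catalog entries
names(csiro_catalog)

# List the options available under a given entry, e.g. the climate models
names(csiro_catalog$model)
```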
We can build the query URLs using `create_dataset_url()`, which uses `csiro_catalog`, some hard-coded URLs (within the function) and the `httr2` package.
```r
create_dataset_url(variable = csiro_catalog$variable$`Rainfall_(Precipitation)`,
                   model = csiro_catalog$model$`NorESM1-M`,
                   rcp = csiro_catalog$rcp$rcp45,
                   year_range = csiro_catalog$year_range$`2016-2045`)
```
This function is used within `download_netcdf_subset()`, along with the API query parameters, to download a netCDF4 dataset file to a temporary location (the path, of the form `{randomChars}_{variable}_{model}_{rcp}_{date_range}.nc`, is returned by the function). Of note are the coordinates requested: either `lat`/`lon` or a bounding box `bbox` can be used, and a NSW bounding box `nsw_bbox` is shipped with the package. Two download methods also exist: `download.file()` or `writeBin(body)` from `httr2`. The former is quicker but seems to fail often; perhaps the `httr2` method is more polite, or the CSIRO server is simply under load when these failures occur.
```r
# Get lat/lon
addr <- return_address_coords(street_number = 2,
                              street_name = 'IVY STREET',
                              locality = 'DARLINGTON',
                              postcode = '2008')

# Download file
downloaded_file_path <-
  download_netcdf_subset(variable = csiro_catalog$variable$`Rainfall_(Precipitation)`,
                         model = csiro_catalog$model$`NorESM1-M`,
                         rcp = csiro_catalog$rcp$rcp85,
                         year_range = csiro_catalog$year_range$`2016-2045`,
                         lat = addr$LATITUDE,
                         lon = addr$LONGITUDE,
                         #bbox = nsw_bbox,
                         date_start = '2016-01-01',
                         date_end = '2016-01-03',
                         date_step = 2,
                         method = 'httr2')

raster_data <- stars::read_ncdf(downloaded_file_path,
                                var = get_var_from_path(downloaded_file_path))

raster_data

raster_data %>%
  tibble::as_tibble()
```
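The example above requests a single point via `lat`/`lon` (the `bbox` argument is commented out). As a sketch, the shipped `nsw_bbox` could be passed instead to request the whole NSW extent; all other parameters are as in the example above:

```r
# Download the same variable over the NSW bounding box instead of a single point
nsw_file_path <-
  download_netcdf_subset(variable = csiro_catalog$variable$`Rainfall_(Precipitation)`,
                         model = csiro_catalog$model$`NorESM1-M`,
                         rcp = csiro_catalog$rcp$rcp85,
                         year_range = csiro_catalog$year_range$`2016-2045`,
                         bbox = nsw_bbox,
                         date_start = '2016-01-01',
                         date_end = '2016-01-03',
                         date_step = 2,
                         method = 'httr2')

nsw_raster <- stars::read_ncdf(nsw_file_path, var = get_var_from_path(nsw_file_path))
```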
Create a function with fixed parameters for model, RCP, years and dates, and vary the variable parameter.
```r
bulk_download <- function(x, address_df){
  download_netcdf_subset(variable = x,
                         model = csiro_catalog$model$`NorESM1-M`,
                         rcp = csiro_catalog$rcp$rcp85,
                         year_range = csiro_catalog$year_range$`2016-2045`,
                         lat = address_df$LATITUDE,
                         lon = address_df$LONGITUDE,
                         date_start = '2016-01-01',
                         date_end = '2045-12-31',
                         date_step = 365,
                         method = 'httr2')
}

address <-
  return_address_coords(locality = 'PENRITH') %>%
  dplyr::slice(sample(dplyr::n(), 1))

# Download separately in case of connection errors
rsds   <- bulk_download(csiro_catalog$variable$Solar_Radiation, address_df = address)
hurs   <- bulk_download(csiro_catalog$variable$Relative_Humidity, address_df = address)
pr     <- bulk_download(csiro_catalog$variable$`Rainfall_(Precipitation)`, address_df = address)
tasmin <- bulk_download(csiro_catalog$variable$Minimum_Temperature, address_df = address)
tas    <- bulk_download(csiro_catalog$variable$Mean_Temperature, address_df = address)
tasmax <- bulk_download(csiro_catalog$variable$Maximum_Temperature, address_df = address)
wvap   <- bulk_download(csiro_catalog$variable$Evaporation, address_df = address)

library(stars)
library(dplyr)
library(ggplot2)

stars_list <-
  list(hurs, pr, tasmin, tas, tasmax, wvap) %>%
  lapply(function(x){
    # Extract the variable name from the downloaded filename
    var <- strsplit(basename(x), split = '_')[[1]][2]
    # Import the netCDF file
    stars::read_ncdf(x, var = var)
  })

# One dataset has a non-standard time class (PCICt)
class_PCICt <- function(x){
  class(x)[1] == 'PCICt'
}

purrr::map_df(stars_list, function(x){
  x %>%
    as_tibble() %>%
    select(everything(), value = 4) %>%
    mutate(across(where(class_PCICt), as.character),
           time = as.Date(time),
           value = as.numeric(value),
           var = names(x)) %>%
    # Multiple grid cells may be returned for a given lat/lon, so average to one value
    group_by(var, time) %>%
    summarise(value = mean(value)) %>%
    ungroup()
})
```
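`ggplot2` is loaded above but not used. A simple way to view the combined series might be to assign the `purrr::map_df()` output (the name `climate_df` below is just for illustration) and plot each variable over time:

```r
# A sketch: assumes the combined tibble from purrr::map_df() above was assigned to `climate_df`
climate_df %>%
  ggplot(aes(x = time, y = value)) +
  geom_line() +
  facet_wrap(~ var, scales = 'free_y') +
  labs(x = NULL, y = 'Value')
```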
A call is made to https://datasets.seed.nsw.gov.au/dataset/...
```r
hvi <-
  return_address_coords(street_number = 2,
                        street_name = 'IVY STREET',
                        locality = 'DARLINGTON',
                        postcode = '2008') %>%
  join_sa1() %>%
  dplyr::pull(SA1_MAINCODE_2016) %>%
  get_heat_vulnerability_index(sa1 = .) %>%
  # JSON as a list
  map_heat_vulnerability_index()

hvi %>%
  do.call(rbind.data.frame, .)
```
A call is made to https://datasets.seed.nsw.gov.au/dataset/...
```r
uvca <-
  return_address_coords(street_number = 2,
                        street_name = 'IVY STREET',
                        locality = 'DARLINGTON',
                        postcode = '2008') %>%
  dplyr::pull(MB_2016_CODE) %>%
  get_urban_vegetation_cover_all(mb = .) %>%
  # JSON as a list
  map_urban_vegetation_cover_all()

uvca
```
Shiny apps are included for exploration and other purposes:
```r
run_gnaf_leaflet_shiny()
```