remotes::install_github('MacroSHEDS/macrosheds')

library(macrosheds)
library(dplyr) # for filter, select, distinct, pull, bind_rows, and %>% used below
help(package = 'macrosheds')
Let's say we're looking for watersheds with pre-1970 data records that are still in operation.
?ms_load_sites

ms_sites <- ms_load_sites()
colnames(ms_sites) # sites are grouped into domains, which are grouped into networks

ms_sites %>%
    filter(site_type == 'stream_gauge',
           first_record < as.Date('1970-01-01'),
           last_record > as.Date('2020-01-01')) %>%
    select(domain, site_code, first_record, last_record) %>%
    print(n = 18)
And maybe we're interested in nitrate.
ms_load_variables(var_set = 'timeseries') %>%
    filter(grepl('nitrate', variable_name, ignore.case = TRUE)) %>%
    distinct(variable_code, variable_name)

ms_load_variables(var_set = 'timeseries_by_site') %>%
    filter(variable_code == 'NO3_N', # MacroSheds doesn't store NO3 data, but you can convert NO3-N using ms_conversions()
           observations > 1000) %>%
    distinct(domain) %>%
    pull()
HBEF and H. J. Andrews look like good candidates. Next, retrieve discharge and nitrate data for one site from each domain.
You can have as many macrosheds_root directories as you like, but they should only be used for MacroSheds data.
my_domains <- c('hbef', 'hjandrews')
my_sites <- c('w1', 'GSWS10')
my_ms_dir <- './example/data'

options(timeout = 300) # default 60 seconds might not be enough if your connection is slow

ms_download_core_data(
    macrosheds_root = my_ms_dir,
    domains = my_domains # use "all" to retrieve all networks, domains, and sites
)

q <- ms_load_product(
    my_ms_dir,
    prodname = 'discharge',
    site_codes = my_sites
)

no3 <- ms_load_product(
    my_ms_dir,
    prodname = 'stream_chemistry',
    filter_vars = 'NO3_N',
    site_codes = my_sites
)
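The loaded tables arrive in the long MacroSheds format, which you can inspect with head(). And as noted earlier, nitrate is stored as NO3-N; if you prefer it expressed as NO3, ms_conversions() can convert between molecular forms. The sketch below is hedged: the convert_molecules argument name is an assumption, so check ?ms_conversions for the actual interface.

head(q)   # one row per site, date, and variable
head(no3)

# hypothetical arguments; see ?ms_conversions for the real ones
no3_as_no3 <- ms_conversions(
    no3,
    convert_molecules = 'NO3_N' # express nitrate as NO3 rather than NO3-N
)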
Next, compute daily nitrate-N fluxes and annual nitrate-N loads (i.e. cumulative fluxes) for stream water.
Note that you can also retrieve these quantities directly using ms_load_product.
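For example, something along these lines should pull a precomputed stream flux product straight from disk. The prodname value shown is an assumption (product names can vary by data release), so check ?ms_load_product and the contents of your macrosheds_root before relying on it.

# prodname is an assumption; confirm the exact product name for your release
no3_flux_precomputed <- ms_load_product(
    my_ms_dir,
    prodname = 'stream_flux_inst_scaled',
    filter_vars = 'NO3_N',
    site_codes = my_sites
)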
no3_flux <- ms_calc_flux(no3, q)

# you can convert flux from kg/ha/d to kg/d (and back)
ms_undo_scale_flux_by_area(no3_flux)
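Going back the other way (watershed-total kg/d to area-normalized kg/ha/d) should work too; a minimal sketch, assuming the complementary helper is ms_scale_flux_by_area() and that it accepts the same kind of tibble:

no3_flux_kgd <- ms_undo_scale_flux_by_area(no3_flux) # kg/ha/d -> kg/d

# assumed helper name; see the package index if it differs in your version
ms_scale_flux_by_area(no3_flux_kgd)                   # kg/d -> kg/ha/d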
no3_load_out <- ms_calc_load(filter(no3, ms_interp == 0),
                             filter(q, ms_interp == 0))

no3_load <- filter(no3_load_out$load, ms_recommended)

plot(no3_load$water_year[no3_load$site_code == 'w1'],
     no3_load$load[no3_load$site_code == 'w1'],
     type = 'l', xlab = '', ylab = 'NO3-N load (kg/ha/yr)', lwd = 2)
lines(no3_load$water_year[no3_load$site_code == 'GSWS10'],
      no3_load$load[no3_load$site_code == 'GSWS10'],
      col = 'blue', lwd = 2)
legend('topright', legend = c('w1', 'GSWS10'),
       col = c('black', 'blue'), lwd = 2, bty = 'n')
As you can see from the warning, not all load estimation methods are appropriate for every use case. Consult the ms_recommended column and see the documentation (?ms_calc_load) for more information.
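If you want to see which estimation method sits behind each recommended annual value, you can inspect the full output table. This is only a sketch: the method column name is an assumption, so print no3_load_out$load first to see what your version returns.

# 'method' is an assumed column name; print no3_load_out$load to check
no3_load_out$load %>%
    filter(ms_recommended) %>%
    distinct(site_code, method)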
Note that you can use ms_calc_flux to compute daily precipitation solute fluxes as well. Many domains provide their own gauged precipitation data, but you will often get better coverage and across-site comparability by using a gridded product like PRISM. Here's how to compute daily SO4-S influxes using PRISM precip.
ms_load_variables(var_set = 'ws_attr') %>%
    filter(data_class == 'climate', grepl('precip', variable_name))

ms_download_ws_attr(
    macrosheds_root = my_ms_dir,
    dataset = 'time series' # watershed attribute (spatially explicit) time series. This file is over 1.5 GiB
)

precip <- ms_load_product(
    my_ms_dir,
    prodname = 'ws_attr_timeseries:climate',
    filter_vars = 'precip_median',
    site_codes = my_sites,
    warn = FALSE
)

so4 <- ms_load_product(
    my_ms_dir,
    prodname = 'stream_chemistry',
    filter_vars = 'SO4_S',
    site_codes = my_sites
) %>%
    select(-ms_status, -ms_interp)

so4_precip_flux <- ms_calc_flux(so4, precip)
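A quick base-R plot is an easy sanity check on the result. This assumes the flux table uses the same long format as the chemistry tables, with datetime, site_code, and val columns; adjust if your version returns something different.

# assumed columns: datetime, site_code, val
w1_flux <- so4_precip_flux[so4_precip_flux$site_code == 'w1', ]
plot(w1_flux$datetime, w1_flux$val, type = 'l',
     xlab = '', ylab = 'SO4-S influx (kg/ha/d)')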
attrib <- ms_generate_attribution(bind_rows(so4, no3, precip, q))
cat(head(attrib$bibliography, n = 2), '...\n')
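To keep the citations with your project, you can write the bibliography out to a file. The attribution object may also contain other elements (acknowledgements, licensing notes, and so on), so it's worth checking names(attrib).

names(attrib) # see what else the attribution object includes
writeLines(attrib$bibliography, 'macrosheds_citations.bib')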