Description
CA starts by spatially aggregating high-resolution gridded observations up to the scale of a GCM. It then bias-corrects the GCM based on those observations. Finally, it searches for temporal analogues, which is the most expensive part of the operation: for each timestep in the GCM, it searches the gridded observations for the 30 closest timesteps (under some measure of closeness). For each of these 30 "analogue" timesteps, CA records the integer index of the timestep and a weight. These are returned as the function's value.
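The search step can be pictured with a short sketch. This is not ClimDown's implementation; the function name, the use of plain Euclidean distance, and the inverse-distance weighting are illustrative assumptions, and ClimDown's actual distance measure and weight calculation may differ.

find.analogues <- function(gcm.step, obs.agg, n.analogues=30) {
    ## obs.agg: matrix of aggregated observations, one row per timestep
    ## gcm.step: numeric vector for one GCM timestep on the same grid
    ## Euclidean distance from the GCM timestep to every observed timestep
    dists <- sqrt(rowSums(sweep(obs.agg, 2, gcm.step)^2))
    ## Keep the n.analogues closest observed timesteps
    indices <- order(dists)[seq_len(n.analogues)]
    ## Illustrative weights: inverse distance, normalized to sum to one
    w <- 1 / pmax(dists[indices], .Machine$double.eps)
    list(indices=indices, weights=w / sum(w))
}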
Usage
ca.netcdf.wrapper(gcm.file, obs.file, varname = "tasmax")
Arguments
gcm.file	Filename of GCM simulations
obs.file	Filename of high-res gridded historical observations
varname	Name of the NetCDF variable to downscale (e.g. 'tasmax')
Value
A list object with two values: 'indices' and 'weights', each of which is a vector with 30 items (a sketch of one way these might be applied follows the examples below)
References
Maurer, E. P., Hidalgo, H. G., Das, T., Dettinger, M. D., & Cayan, D. R. (2010). The utility of daily large-scale climate data in the assessment of climate change impacts on daily streamflow in California. Hydrology and Earth System Sciences, 14(6), 1125-1138.
Examples
## Not run:
options(
    calibration.end=as.POSIXct('1972-12-31', tz='GMT')
)
analogues <- ClimDown::ca.netcdf.wrapper('./tiny_gcm.nc', './tiny_obs.nc')
## End(Not run)
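For context on how the returned 'indices' and 'weights' might be used downstream, here is a hedged sketch of the constructed-analogue idea: a downscaled field is built as the weighted combination of high-resolution observed fields at the analogue timesteps. 'construct.analogue' and 'obs.fields' are assumed names for illustration, not part of the ClimDown API.

## Weighted combination of high-res observed fields at the analogue
## timesteps; 'obs.fields' is an assumed timestep-by-gridcell matrix.
construct.analogue <- function(analogue, obs.fields) {
    ## The weights vector recycles down the rows of the 30-row submatrix,
    ## scaling each analogue field before the column-wise sum
    colSums(analogue$weights * obs.fields[analogue$indices, , drop=FALSE])
}
## e.g. downscaled <- construct.analogue(analogues, obs.fields)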