downscaleCV.keras: Downscale climate data and reconstruct the temporal series by splitting the data following a user-defined scheme

View source: R/downscaleCV.keras.R

downscaleCV.keras R Documentation

Downscale climate data and reconstruct the temporal series by splitting the data following a user-defined scheme

Description

Downscale climate data and reconstruct the temporal series by splitting the data following a user-defined scheme. The statistical downscaling methods currently implemented are: analogs, generalized linear models (GLM) and Neural Networks (NN).

Usage

downscaleCV.keras(
  x,
  y,
  model,
  MC = NULL,
  sampling.strategy = "kfold.chronological",
  folds = 4,
  scaleGrid.args = NULL,
  prepareData.keras.args = NULL,
  compile.args = NULL,
  fit.args = NULL,
  loss = NULL,
  binarySerie = FALSE,
  condition = NULL,
  threshold = NULL
)

Arguments

x

The input grid (admits both single and multigrid, see makeMultiGrid). It should be an object as returned by loadeR.

y

The observations dataset. It should be an object as returned by loadeR.

model

A keras sequential or functional model.

MC

Currently under development. A numeric value, defaulting to NULL: the number of Monte Carlo samples to draw in case the model is a Bayesian neural network (note that any network containing dropout is mathematically equivalent to a Bayesian neural network).

sampling.strategy

Specifies a sampling strategy to define the training and test subsets. Possible values are "kfold.chronological" (the default), "kfold.random", "leave-one-year-out" and NULL. The sampling.strategy choices are described next:

  • "kfold.random" creates the number of folds indicated in the folds argument by randomly sampling the entries along the time dimension.

  • "kfold.chronological" is similar to "kfold.random", but the sampling is performed in ascending order along the time dimension.

  • "leave-one-year-out" performs a leave-one-year-out cross-validation. It is equivalent to passing the folds argument a list of all the years, one year per fold.

  • NULL. The folds are specified by the user through the folds parameter.

The first two choices are controlled by the folds argument (see below).

folds

This argument controls the number of folds, or how the folds are created (ignored if sampling.strategy = "leave-one-year-out"). If given as a fraction in the range (0,1), the data are split into two subsets, one for training and one for testing, the given value being the fraction of the data used for training (e.g., 0.75 uses 75% of the instances for training and the remaining 25% for testing). If given as an integer (the default is 4), it sets the number of folds into which the data are split (e.g., folds = 10 for the classical 10-fold cross-validation). Alternatively, this argument can be passed as a list, each element being a vector of the years to be included in each fold (see the sketch below and the Examples).
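A minimal sketch of the three accepted forms (the years are illustrative):

folds <- 0.75                                   # fraction: 75% training / 25% test
folds <- 4                                      # integer: 4-fold cross-validation
folds <- list(1983:1986, 1987:1990, 1991:1994)  # list: user-defined year folds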

scaleGrid.args

A list of arguments for scaling the grids. This parameter is passed to the function scaleGrid; see its definition for details on the accepted parameters.

prepareData.keras.args

A list with the arguments of the prepareData.keras function. Please refer to the prepareData.keras help for more details about this parameter.

compile.args

A list of the arguments passed to the keras compile function, equivalent to the compile.args parameter of the downscaleTrain.keras function.

fit.args

A list of the arguments passed to the keras fit function, equivalent to the fit.args parameter of the downscaleTrain.keras function.

loss

Default is NULL. Otherwise, a string indicating the loss function used to train the model. This is only relevant when one of the two custom loss functions of this library has been used: "gaussianLoss" or "bernouilliGammaLoss".

binarySerie

A logical value, default FALSE. Indicates whether to convert the predicted probabilities of rain to binary values by adjusting the frequency of rainy days to that observed in the train period. Note that this is only valid when the aim is to downscale precipitation and the loss parameter is set to the custom function "bernouilliGammaLoss". What counts as a rainy day must be defined; see the condition and threshold parameters.

condition

Inequality operator to be applied to the given threshold. "GT" = greater than the threshold value, "GE" = greater than or equal, "LT" = lower than, "LE" = lower than or equal. Values that fulfill the condition are set to 1, and the rest to 0.

threshold

An integer used as the reference for condition. Default is NULL.
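As a minimal sketch (illustrative R code, not the package internals), condition = "GE" with threshold = 1 binarizes a series as follows:

pr <- c(0, 0.5, 1, 2.3)
as.integer(pr >= 1)  # 0 0 1 1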

Details

The function relies on prepareData.keras, prepareNewData.keras, downscaleTrain.keras, and downscalePredict.keras; see the documentation of these functions for more information. The function is designed to allow flexible fine-tuning of the cross-validation scheme. It uses internally the transformeR helper dataSplit for flexible data folding. Note that the indices for data splitting are obtained using getYearsAsINDEX when needed (e.g., in leave-one-year-out cross-validation), thus adequately handling potential inconsistencies in year selection when dealing with year-crossing seasons (e.g., DJF), as sketched below.
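A minimal sketch (assuming the example predictor grid x from the Examples below): for a year-crossing season such as DJF, getYearsAsINDEX groups each December with the following January and February, so every season gets a single year index:

djf <- subsetGrid(x, season = c(12, 1, 2))  # year-crossing DJF season
table(getYearsAsINDEX(djf))                 # one index per DJF season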

If the variable to downscale is precipitation and it is treated as a binary variable, then two temporal series will be returned:

  1. The temporal series of binary values, filtered by a threshold adjusted on the train dataset; see binaryGrid for more details.

  2. The temporal series with the raw downscaled results, without the binary conversion process.

Please note that Keras does not handle missing data; these are removed prior to fitting the model.

According to the concept of cross-validation, a particular year should not appear in more than one fold (i.e., folds should constitute disjoint sets). For example, the choice folds = list(c(1988, 1989), c(1989, 1990)) will raise an error, as 1989 appears in more than one fold.
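For instance, a valid, disjoint specification of the same years would be:

folds <- list(c(1988, 1989), c(1990, 1991))  # disjoint folds: no year repeated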

Value

The reconstructed downscaled temporal series.

Author(s)

J. Bano-Medina

Examples


# Loading data
require(climate4R.datasets)
require(transformeR)
require(downscaleR)
require(downscaleR.keras)
require(keras)
data("VALUE_Iberia_tas")
y <- VALUE_Iberia_tas
data("NCEP_Iberia_hus850", "NCEP_Iberia_psl", "NCEP_Iberia_ta850")
x <- makeMultiGrid(NCEP_Iberia_hus850, NCEP_Iberia_psl, NCEP_Iberia_ta850)
# Example 1: convolutional network trained with the standard "mse" loss
inputs <- layer_input(shape = c(getShape(x,"lat"),getShape(x,"lon"),getShape(x,"var")))
hidden <- inputs %>% 
  layer_conv_2d(filters = 25, kernel_size = c(3,3), activation = 'relu') %>%  
  layer_conv_2d(filters = 10, kernel_size = c(3,3), activation = 'relu') %>% 
  layer_flatten() %>% 
  layer_dense(units = 10, activation = "relu")
outputs <- layer_dense(hidden,units = getShape(y,"loc"))
model <- keras_model(inputs = inputs, outputs = outputs)
pred <- downscaleCV.keras(x, y, model,
           sampling.strategy = "kfold.chronological", folds = 4, 
           scaleGrid.args = list(type = "standardize"),
           prepareData.keras.args = list(first.connection = "conv",
                                         last.connection = "dense",
                                         channels = "last"),
           compile.args = list(loss = "mse", 
                          optimizer = optimizer_adam()),
           fit.args = list(batch_size = 100, epochs = 100, validation_split = 0.1))
# Example 2: gaussianLoss, with one linear output head per Gaussian parameter
inputs <- layer_input(shape = c(getShape(x,"lat"),getShape(x,"lon"),getShape(x,"var")))
hidden <- inputs %>% 
  layer_conv_2d(filters = 25, kernel_size = c(3,3), activation = 'relu') %>%  
  layer_conv_2d(filters = 10, kernel_size = c(3,3), activation = 'relu') %>% 
  layer_flatten() %>% 
  layer_dense(units = 10, activation = "relu")
outputs1 <- layer_dense(hidden,units = getShape(y,"loc"))
outputs2 <- layer_dense(hidden,units = getShape(y,"loc"))
outputs <- layer_concatenate(list(outputs1,outputs2))
model <- keras_model(inputs = inputs, outputs = outputs)
pred <- downscaleCV.keras(x, y, model,
         sampling.strategy = "kfold.chronological", folds = 2, 
         scaleGrid.args = list(type = "standardize"),
         prepareData.keras.args = list(first.connection = "conv",
                                       last.connection = "dense",
                                       channels = "last"),
         compile.args = list(loss = gaussianLoss(last.connection = "dense"), 
                             optimizer = optimizer_adam()),
         fit.args = list(batch_size = 100, epochs = 100, validation_split = 0.1),
         loss = "gaussianLoss")

# Example 3: bernouilliGammaLoss for precipitation (three output heads)
data("VALUE_Iberia_pr")
y <- VALUE_Iberia_pr
inputs <- layer_input(shape = c(getShape(x,"lat"),getShape(x,"lon"),getShape(x,"var")))
hidden <- inputs %>% 
  layer_conv_2d(filters = 25, kernel_size = c(3,3), activation = 'relu') %>%  
  layer_conv_2d(filters = 10, kernel_size = c(3,3), activation = 'relu') %>% 
  layer_flatten() %>% 
  layer_dense(units = 10, activation = "relu")
outputs1 <- layer_dense(hidden, units = getShape(y,"loc"), activation = "sigmoid")  # rain probability (p)
outputs2 <- layer_dense(hidden, units = getShape(y,"loc"))                          # log_alpha
outputs3 <- layer_dense(hidden, units = getShape(y,"loc"))                          # log_beta
outputs <- layer_concatenate(list(outputs1, outputs2, outputs3))
model <- keras_model(inputs = inputs, outputs = outputs)
# Subtract 0.99 so that days below 1 mm become non-rainy, then binarize
# (partial = TRUE keeps the amounts on rainy days)
y <- gridArithmetics(y, 0.99, operator = "-") %>% binaryGrid("GT", 0, partial = TRUE) 
pred <- downscaleCV.keras(x, y, model,
           sampling.strategy = "kfold.chronological", folds = 4, 
           scaleGrid.args = list(type = "standardize"),
           prepareData.keras.args = list(first.connection = "conv",
                                         last.connection = "dense",
                                         channels = "last"),
           compile.args = list(loss = bernouilliGammaLoss(last.connection = "dense"), 
                               optimizer = optimizer_adam()),
           fit.args = list(batch_size = 100, epochs = 100, validation_split = 0.1),
           loss = "bernouilliGammaLoss",
           binarySerie = TRUE, condition = "GE", threshold = 1)
           
# Example 4: PCs (spatial predictors) and local predictors
data("VALUE_Iberia_pr")
y <- VALUE_Iberia_pr

# 27 input features: 15 PCs (n.eofs = 15) + 4 local predictors x 3 variables
inputs <- layer_input(shape = 27)
hidden <- inputs %>% 
  layer_dense(units = 25, activation = 'relu') %>%  
  layer_dense(units = 10, activation = 'relu') 
outputs1 <- layer_dense(hidden, units = 1, activation = "sigmoid")
outputs2 <- layer_dense(hidden, units = 1)
outputs3 <- layer_dense(hidden, units = 1)
outputs <- layer_concatenate(list(outputs1, outputs2, outputs3))
model <- keras_model(inputs = inputs, outputs = outputs)
 
# Downscale each station separately (hence units = 1 above) and bind the results
pred <- lapply(1:getShape(y, "loc"), FUN = function(z) {
  y <- subsetGrid(y, station.id = y$Metadata$station_id[z]) %>% 
    gridArithmetics(0.99, operator = "-") %>% 
    binaryGrid("GT", 0, partial = TRUE) 
  pred <- downscaleCV.keras(x, y, model,
                            sampling.strategy = "kfold.chronological", folds = 4, 
                            scaleGrid.args = list(type = "standardize"),
                            prepareData.keras.args = list(first.connection = "dense",
                                                          last.connection = "dense",
                                                          spatial.predictors = list(n.eofs = 15, which.combine = getVarNames(x)),
                                                          local.predictors = list(n = 4, vars = getVarNames(x))),
                            compile.args = list(loss = bernouilliGammaLoss(last.connection = "dense"), 
                                                optimizer = optimizer_adam()),
                            fit.args = list(batch_size = 100, epochs = 100, validation_split = 0.1),
                            loss = "bernouilliGammaLoss") 
  # Simulate rain occurrence from the predicted probability ("p")...
  pred_bin <- binaryGrid(subsetGrid(pred, var = "p"), simulate = TRUE)
  # ...and rainfall amounts from the gamma parameters (bias restores the 0.99 offset)
  pred_amo <- computeRainfall(log_alpha = subsetGrid(pred, var = "log_alpha"),
                              log_beta = subsetGrid(pred, var = "log_beta"),
                              simulate = TRUE, 
                              bias = 0.99) %>% redim(member = FALSE, loc = TRUE)
  gridArithmetics(pred_amo, pred_bin)  # occurrence x amount
}) %>% bindGrid(dimension = "loc") %>% redim(drop = TRUE)
            
