ReduceLROnPlateau: Reduce learning rate when a metric has stopped improving.


View source: R/callbacks.R

Description

Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity and, if no improvement is seen for a 'patience' number of epochs, reduces the learning rate.
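
For instance, a callback that halves the learning rate after three stagnant validation epochs might be constructed as sketched below; the factor, patience, and min_lr values here are illustrative choices, not the defaults shown under Usage.

library(kerasR)

# Halve the learning rate once val_loss has not improved for 3 epochs,
# never letting it fall below 1e-5 (illustrative values, not defaults)
reduce_lr <- ReduceLROnPlateau(monitor = "val_loss", factor = 0.5,
                               patience = 3, min_lr = 1e-5, verbose = 1)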

Usage

ReduceLROnPlateau(monitor = "val_loss", factor = 0.1, patience = 10,
  verbose = 0, mode = "auto", epsilon = 1e-04, cooldown = 0,
  min_lr = 0)

Arguments

monitor

quantity to be monitored.

factor

factor by which the learning rate will be reduced: new_lr = lr * factor (see the sketch after this argument list).

patience

number of epochs with no improvement after which learning rate will be reduced.

verbose

int. 0: quiet, 1: update messages.

mode

one of auto, min, max. In min mode, lr will be reduced when the quantity monitored has stopped decreasing; in max mode it will be reduced when the quantity monitored has stopped increasing; in auto mode, the direction is automatically inferred from the name of the monitored quantity.

epsilon

threshold for measuring the new optimum, to only focus on significant changes.

cooldown

number of epochs to wait before resuming normal operation after lr has been reduced.

min_lr

lower bound on the learning rate.
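
Taken together: each time the monitored quantity has failed to improve for 'patience' epochs (and any cooldown has expired), the callback sets new_lr = max(lr * factor, min_lr). The following base-R sketch only illustrates that arithmetic; the callback itself performs the reduction inside Keras during training.

# Illustration only: learning rates after successive plateau-triggered
# reductions, floored at min_lr
lr <- 0.001; factor <- 0.1; min_lr <- 1e-06
for (reduction in 1:4) {
  lr <- max(lr * factor, min_lr)  # new_lr = lr * factor, bounded below by min_lr
  cat(sprintf("after reduction %d: lr = %g\n", reduction, lr))
}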

Author(s)

Taylor B. Arnold, taylor.arnold@acm.org

References

Chollet, Francois. 2015. Keras: Deep Learning library for Theano and TensorFlow.

See Also

Other callbacks: CSVLogger, EarlyStopping, ModelCheckpoint, TensorBoard

Examples

library(kerasR)

if (keras_available()) {
  # Simulated data: 100 samples, 10 features, 3 one-hot-encoded classes
  X_train <- matrix(rnorm(100 * 10), nrow = 100)
  Y_train <- to_categorical(matrix(sample(0:2, 100, TRUE), ncol = 1), 3)

  # A small feed-forward classifier with one hidden layer
  mod <- Sequential()
  mod$add(Dense(units = 50, input_shape = dim(X_train)[2]))
  mod$add(Activation("relu"))
  mod$add(Dense(units = 3))
  mod$add(Activation("softmax"))
  keras_compile(mod,  loss = 'categorical_crossentropy', optimizer = RMSprop())

  # Log metrics to CSV, stop early, reduce the learning rate on plateau,
  # and write TensorBoard logs
  callbacks <- list(CSVLogger(tempfile()),
                    EarlyStopping(),
                    ReduceLROnPlateau(),
                    TensorBoard(tempfile()))

  keras_fit(mod, X_train, Y_train, batch_size = 32, epochs = 5,
            verbose = 0, callbacks = callbacks, validation_split = 0.2)
}
