train: Continue training of a Neural Network

Description Usage Arguments Details Value References

View source: R/interface.R

Description

Continue training of a neural network object returned by neuralnetwork() or autoencoder()

Usage

train(object, X, Y = NULL, n.epochs = 20, batch.size = 32,
  drop.last = TRUE, val.prop = 0.1, verbose = TRUE)

Arguments

object

object of class ANN produced by neuralnetwork() or autoencoder()

X

matrix with explanatory variables

Y

matrix with dependent variables. Not required if object is an autoencoder

n.epochs

the number of epochs to train. This parameter largely determines the training time (one epoch is a single iteration through the training data).

batch.size

the number of observations to use in each batch. Batch learning is computationally faster than stochastic gradient descent. However, large batches might not result in optimal learning; see Efficient Backprop by LeCun et al. for details.

drop.last

logical. Only applicable if the size of the training set is not perfectly divisible by the batch size. Determines whether the leftover observations should be discarded (in the current epoch) or should constitute a smaller batch. Note that a smaller batch leads to a noisier approximation of the gradient.
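The arithmetic behind drop.last can be sketched as follows (an illustration of the assumed batching behaviour, not ANN2 internals):

```r
# Illustration: how drop.last affects batching within one epoch
n.obs      <- 1000
batch.size <- 32

n.full    <- n.obs %/% batch.size  # 31 full batches of 32 observations
remainder <- n.obs %%  batch.size  # 8 leftover observations

# drop.last = TRUE : the 8 leftover observations are skipped this epoch
# drop.last = FALSE: they form one extra, smaller batch of size 8,
#                    whose gradient estimate is noisier than a full batch
```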

val.prop

proportion of training data to use for tracking the loss on a validation set during training. Useful for assessing the training process and identifying possible overfitting. Set to zero for only tracking the loss on the training data.

verbose

logical indicating if additional information should be printed

Details

Each call to train() randomly samples a new validation set. This can result in irregular jumps in the loss plot produced by plot.ANN().

Value

An ANN object. Use function plot(&lt;object&gt;) to assess loss on training and optionally validation data during the training process. Use function predict(&lt;object&gt;, &lt;newdata&gt;) for prediction.
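A typical workflow might look as follows (a sketch assuming the ANN2 package is installed; the iris data and hyperparameter values are illustrative):

```r
library(ANN2)

# Explanatory variables as a matrix, dependent variable as a vector
X <- as.matrix(iris[, 1:4])
y <- iris$Species

# Fit an initial classifier, then continue training with train()
NN <- neuralnetwork(X, y, hidden.layers = c(5, 5), n.epochs = 10)
NN <- train(NN, X, y, n.epochs = 20, batch.size = 32, val.prop = 0.1)

plot(NN)                  # loss on training and validation data per epoch
predict(NN, newdata = X)  # predictions on (new) data
```

For an autoencoder() object, Y is omitted: train(AE, X) continues training against X itself.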

References

LeCun, Yann A., et al. "Efficient backprop." Neural networks: Tricks of the trade. Springer Berlin Heidelberg, 2012. 9-48.


bflammers/ANN2 documentation built on Oct. 27, 2018, 12:17 a.m.