mlp_teach_bp: Backpropagation (batch) teaching


View source: R/mlp_teach.R

Description

Backpropagation (a teaching algorithm) is a simple steepest descent method for MSE minimisation, in which the weights are updated in each epoch according to the (scaled) gradient of the MSE.
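
Schematically, each epoch performs an update of the following form (a sketch, not FCNN4R internals; the exact scaling of the L2 penalty term is an assumption here):

# w        - network weight vector
# grad_mse - gradient of MSE w.r.t. w, computed by backpropagation
w <- w - learn_rate * (grad_mse + l2reg * w)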

Usage

mlp_teach_bp(net, input, output, tol_level, max_epochs, learn_rate = 0.7,
  l2reg = 0, report_freq = 0)

Arguments

net

an object of mlp_net class

input

numeric matrix, each row corresponding to one input vector; the number of columns must be equal to the number of neurons in the network input layer

output

numeric matrix with rows corresponding to expected outputs; the number of columns must be equal to the number of neurons in the network output layer, and the number of rows must be equal to the number of input rows

tol_level

numeric value, error (MSE) tolerance level

max_epochs

integer value, maximal number of epochs (iterations)

learn_rate

numeric value, learning rate in the backpropagation algorithm (default 0.7)

l2reg

numeric value, L2 regularization parameter (default 0)

report_freq

integer value, progress report frequency; if set to 0, no information is printed on the console (this is the default)

Value

Two-element list: the first field (net) contains the trained network, and the second (mse) contains the learning history, i.e. the MSE in consecutive epochs.
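
A minimal usage sketch, training a 2-6-1 network on the XOR problem (assuming the usual FCNN4R workflow with mlp_net and mlp_rnd_weights from the same package):

library(FCNN4R)
# build a 2-6-1 network and randomise its weights
net <- mlp_net(c(2, 6, 1))
net <- mlp_rnd_weights(net)
# XOR training data: rows are (0,0), (0,1), (1,0), (1,1)
inp  <- matrix(c(0, 0, 1, 1,
                 0, 1, 0, 1), nrow = 4, ncol = 2)
outp <- matrix(c(0, 1, 1, 0), nrow = 4, ncol = 1)
# train with backpropagation, reporting progress every 100 epochs
res <- mlp_teach_bp(net, inp, outp, tol_level = 5e-4,
                    max_epochs = 5000, learn_rate = 0.7,
                    report_freq = 100)
net <- res$net                    # the trained network
plot(res$mse, type = "l",         # the learning history
     xlab = "epoch", ylab = "MSE")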

Note

The name ‘backpropagation’ is commonly used in two contexts, which sometimes causes confusion. First, backpropagation can be understood as an efficient algorithm for computing the gradient of the MSE, first described by Bryson and Ho in the 1960s and reinvented in the 1980s. Second, the name backpropagation is (more often) used to refer to the steepest descent method that uses the MSE gradient computed by the aforementioned algorithm. This ambiguity probably arises because, in practically all neural network implementations, the MSE derivatives and the weight updates are computed simultaneously in one backward pass (from the output layer to the input layer).
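
To make the ‘one backward pass’ point concrete, the following base R sketch (a generic illustration, not FCNN4R's implementation) computes the MSE derivatives of a one-hidden-layer sigmoid network layer by layer, starting from the output layer:

sigmoid <- function(x) 1 / (1 + exp(-x))

backprop_grad <- function(W1, b1, W2, b2, X, Y) {
  # forward pass: store the activations the backward pass needs
  A1 <- sigmoid(X %*% W1 + matrix(b1, nrow(X), length(b1), byrow = TRUE))
  A2 <- sigmoid(A1 %*% W2 + matrix(b2, nrow(X), length(b2), byrow = TRUE))
  # backward pass: error signal at the output layer first...
  D2 <- (A2 - Y) * A2 * (1 - A2)
  # ...then propagated back through the output-layer weights
  D1 <- (D2 %*% t(W2)) * A1 * (1 - A1)
  list(dW2 = t(A1) %*% D2, db2 = colSums(D2),
       dW1 = t(X) %*% D1,  db1 = colSums(D1))
}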

References

A.E. Bryson and Y.C. Ho. Applied optimal control: optimization, estimation, and control. Blaisdell book in the pure and applied sciences. Blaisdell Pub. Co., 1969.

David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533-536, October 1986.

