Description
Backpropagation (a teaching algorithm) is a simple steepest descent algorithm for MSE minimisation, in which weights are updated according to the (scaled) gradient of the MSE.
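In each epoch this amounts to the following step, sketched below in plain R under the assumption that the regularised objective is MSE + l2reg * ||w||^2 / 2 (the exact scaling of the penalty is an assumption; bp_step and grad_mse are hypothetical names, not part of the package):

# One steepest descent step on the network's weight vector w.
# grad_mse stands in for the MSE gradient that the backpropagation
# algorithm computes in a single backward pass.
bp_step <- function(w, grad_mse, learn_rate = 0.7, l2reg = 0) {
  w - learn_rate * (grad_mse(w) + l2reg * w)
}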
Usage

mlp_teach_bp(net, input, output, tol_level, max_epochs, learn_rate = 0.7,
             l2reg = 0, report_freq = 0)
Arguments

net: an object of mlp_net class
input: numeric matrix; each row corresponds to one input vector, and the number of columns must be equal to the number of neurons in the network input layer

output: numeric matrix with rows corresponding to expected outputs; the number of columns must be equal to the number of neurons in the network output layer, and the number of rows must be equal to the number of input rows

tol_level: numeric value, error (MSE) tolerance level

max_epochs: integer value, maximal number of epochs (iterations)

learn_rate: numeric value, learning rate in the backpropagation algorithm (default 0.7)

l2reg: numeric value, L2 regularization parameter (default 0)

report_freq: integer value, progress report frequency; if set to 0 (the default), no information is printed on the console
Value

Two-element list: the first field (net) contains the trained network, the second (mse) the learning history (MSE in consecutive epochs).
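A minimal end-to-end sketch on the XOR problem is given below; it assumes the FCNN4R constructors mlp_net and mlp_rnd_weights (not documented on this page) and uses illustrative parameter values:

library(FCNN4R)

# XOR training data: 2 inputs, 1 output
inp <- matrix(c(0, 0,
                0, 1,
                1, 0,
                1, 1), nrow = 4, byrow = TRUE)
out <- matrix(c(0, 1, 1, 0), nrow = 4)

# 2-6-1 network with random initial weights
net <- mlp_net(c(2, 6, 1))
net <- mlp_rnd_weights(net)

# train with plain backpropagation
res <- mlp_teach_bp(net, inp, out, tol_level = 0.02,
                    max_epochs = 5000, learn_rate = 0.7,
                    report_freq = 500)

trained <- res$net   # trained network
mse_hist <- res$mse  # MSE in consecutive epochs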
Note

The name ‘backpropagation’ is commonly used in two contexts, which sometimes causes confusion. Firstly, backpropagation can be understood as an efficient algorithm for computing the MSE gradient that was first described by Bryson and Ho in the 1960s and reinvented in the 1980s. Secondly, the name backpropagation is (more often) used to refer to the steepest descent method that uses the MSE gradient computed efficiently by means of the aforementioned algorithm. This ambiguity probably stems from the fact that in practically all neural network implementations, the MSE derivatives and weight updates are computed simultaneously in one backward pass (from the output layer to the input layer).
References

A.E. Bryson and Y.C. Ho. Applied Optimal Control: Optimization, Estimation, and Control. Blaisdell book in the pure and applied sciences. Blaisdell Pub. Co., 1969.

David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533-536, October 1986.