rprop: Resilient backpropagation (Rprop) optimization algorithm

Description

From Riedmiller (1994): Rprop stands for 'resilient backpropagation' and is a local adaptive learning scheme. The basic principle of Rprop is to eliminate the harmful influence of the size of the partial derivative on the weight step. As a consequence, only the sign of the derivative is considered to indicate the direction of the weight update. The size of the weight change is determined exclusively by a weight-specific, so-called 'update-value'.

This function implements the iRprop+ algorithm from Igel and Huesken (2003).
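For orientation, the following is a minimal sketch of one vectorized iRprop+ iteration, not the CaDENCE implementation itself. All names (irprop.step, grad.old, dw.old, eta.plus, eta.minus) are illustrative; the increase and decrease factors 1.2 and 0.5 are the standard values from Igel and Huesken (2003).

## One iRprop+ iteration, vectorized over the parameters.
irprop.step <- function(w, grad, grad.old, delta, dw.old, f.val, f.old,
                        eta.plus = 1.2, eta.minus = 0.5,
                        delta.min = 1e-06, delta.max = 50) {
  s <- grad * grad.old
  dw <- numeric(length(w))
  ## Gradient kept its sign: the direction is stable, so grow the
  ## update-value and step in the downhill direction.
  up <- s > 0
  delta[up] <- pmin(delta[up] * eta.plus, delta.max)
  dw[up] <- -sign(grad[up]) * delta[up]
  ## Gradient changed sign: the last step jumped over a minimum, so
  ## shrink the update-value; if f also increased, retract the previous
  ## step (the weight-backtracking '+' in iRprop+).
  down <- s < 0
  delta[down] <- pmax(delta[down] * eta.minus, delta.min)
  if (f.val > f.old) dw[down] <- -dw.old[down]
  grad[down] <- 0  # suppress adaptation for these weights next iteration
  ## Zero product (first iteration or just after backtracking):
  ## take a plain sign step with the current update-value.
  zero <- s == 0
  dw[zero] <- -sign(grad[zero]) * delta[zero]
  list(w = w + dw, grad.old = grad, delta = delta, dw.old = dw)
}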

Usage

rprop(w, f, iterlim = 100, print.level = 1, delta.0 = 0.1,
      delta.min = 1e-06, delta.max = 50, epsilon = 1e-08,
      step.tol = 1e-06, f.target = -Inf, ...)

Arguments

w

the starting parameters for the minimization.

f

the function to be minimized. If the function value has an attribute called gradient, this will be used in the calculation of updated parameter values; otherwise, numerical derivatives will be used. (A sketch of a function supplying an analytic gradient follows the argument list.)

iterlim

the maximum number of iterations before the optimization is stopped.

print.level

the level of printing done during optimization. A value of 0 suppresses any progress reporting, whereas positive values report the value of f and the mean change in f over the previous three iterations.

delta.0

size of the initial Rprop update-value.

delta.min

minimum value for the adaptive Rprop update-value.

delta.max

maximum value for the adaptive Rprop update-value.

epsilon

step size used in the finite-difference calculation of the gradient.

step.tol

convergence criterion. Optimization will stop if the change in f over the previous three iterations falls below this value.

f.target

target value of f. Optimization will stop if f falls below this value.

...

further arguments to be passed to f.
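As an illustration of the gradient attribute mentioned under f, a hypothetical objective function supplying its own analytic gradient might look like this (quad is an illustrative name, not part of CaDENCE):

## Quadratic objective with an analytic gradient attached,
## so rprop can skip the finite-difference calculation.
quad <- function(w) {
  val <- sum((w - 1)^2)
  attr(val, "gradient") <- 2 * (w - 1)
  val
}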

Value

A list with elements:

par

The best set of parameters found.

value

The value of f corresponding to par.

gradient

An estimate of the gradient at the solution found.
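A minimal end-to-end sketch, assuming CaDENCE is installed: minimize a simple quadratic from a random start and inspect the returned list.

library(CaDENCE)
quad <- function(w) sum((w - 1)^2)  # minimum of zero at w = c(1, 1, 1)
fit <- rprop(w = rnorm(3), f = quad, iterlim = 200, print.level = 0)
fit$par       # best parameters found; should be near c(1, 1, 1)
fit$value     # f evaluated at fit$par; near zero
fit$gradient  # finite-difference gradient estimate at the solution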

References

Igel, C. and M. Huesken, 2003. Empirical evaluation of the improved Rprop learning algorithms. Neurocomputing 50: 105-123.

Riedmiller, M., 1994. Advanced supervised learning in multilayer perceptrons - from backpropagation to adaptive learning techniques. Computer Standards and Interfaces 16(3): 265-278.

