Description

From Riedmiller (1994): Rprop stands for 'Resilient backpropagation' and is a local adaptive learning scheme. The basic principle of Rprop is to eliminate the harmful influence of the size of the partial derivative on the weight step. As a consequence, only the sign of the derivative is considered to indicate the direction of the weight update. The size of the weight change is exclusively determined by a weight-specific, so-called 'update-value'.
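This sign-based scheme can be sketched as follows. This is a minimal illustration of the update for a single weight in the basic variant without weight-backtracking; `rprop_step` and its argument names are illustrative, with the step-size factors 1.2 and 0.5 taken from Riedmiller (1994):

```r
## Illustrative single-weight Rprop step: the gradient's magnitude is
## ignored; only its sign sets the direction, and the per-weight
## 'update-value' delta sets the step size.
rprop_step <- function(w, grad, grad_old, delta,
                       eta.plus = 1.2, eta.minus = 0.5,
                       delta.min = 1e-6, delta.max = 50) {
  s <- grad * grad_old
  if (s > 0) {                      # same sign: accelerate
    delta <- min(delta * eta.plus, delta.max)
  } else if (s < 0) {               # sign change: overshot, slow down
    delta <- max(delta * eta.minus, delta.min)
  }
  w <- w - sign(grad) * delta       # step depends only on sign(grad)
  list(w = w, delta = delta)
}
```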

This function implements the iRprop+ algorithm from Igel and Huesken (2003).
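A compact sketch of iRprop+ itself: unlike Rprop+, the previous step for a weight whose partial derivative changed sign is undone only if the error also increased. This is an illustrative re-implementation from the reference, not the package's code; `irprop_plus` and its argument names are assumptions that mirror the arguments documented below:

```r
## Illustrative iRprop+ (Igel and Huesken, 2003). `grad` is a function
## returning the gradient of `f`; all tuning constants are the usual
## Rprop defaults.
irprop_plus <- function(w, f, grad, iterlim = 100,
                        delta.0 = 0.1, delta.min = 1e-6, delta.max = 50,
                        eta.plus = 1.2, eta.minus = 0.5) {
  n     <- length(w)
  delta <- rep(delta.0, n)   # per-weight update-values
  g.old <- numeric(n)        # previous gradient
  dw    <- numeric(n)        # previous weight step
  E.old <- Inf               # previous function value
  for (iter in seq_len(iterlim)) {
    g <- grad(w)
    E <- f(w)
    s <- g * g.old
    pos <- s > 0
    neg <- s < 0
    delta[pos] <- pmin(delta[pos] * eta.plus,  delta.max)
    delta[neg] <- pmax(delta[neg] * eta.minus, delta.min)
    dw.new <- -sign(g) * delta
    if (E > E.old) dw.new[neg] <- -dw[neg]  # backtrack only if error rose
    g[neg] <- 0                             # skip adaptation next iteration
    w     <- w + dw.new
    dw    <- dw.new
    g.old <- g
    E.old <- E
  }
  w
}
```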

Arguments

`w`
the starting parameters for the minimization.

`f`
the function to be minimized. If the function value has an attribute called `gradient`, this is used as the gradient of `f`; otherwise the gradient is estimated by finite differences.

`iterlim`
the maximum number of iterations before the optimization is stopped.

`print.level`
the level of printing which is done during optimization. A value of `0` suppresses all printing.

`delta.0`
size of the initial Rprop update-value.

`delta.min`
minimum value for the adaptive Rprop update-value.

`delta.max`
maximum value for the adaptive Rprop update-value.

`epsilon`
step-size used in the finite-difference calculation of the gradient.

`step.tol`
convergence criterion. Optimization will stop if the change in the function value falls below this value.

`f.target`
target value of `f`. Optimization will stop if the function value falls below this target.

`...`
further arguments to be passed to `f`.
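As a sketch of how `epsilon` enters the gradient estimate, a forward-difference approximation might look like the following. `fd_gradient` is an illustrative helper, not the package's internal implementation:

```r
## Forward-difference gradient approximation controlled by `epsilon`:
## perturb each parameter in turn and difference the function values.
fd_gradient <- function(f, w, epsilon = 1e-8) {
  f0 <- f(w)
  vapply(seq_along(w), function(i) {
    wi <- w
    wi[i] <- wi[i] + epsilon
    (f(wi) - f0) / epsilon
  }, numeric(1))
}
```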

Value

A list with elements:

`par`
The best set of parameters found.

`value`
The value of `f` corresponding to `par`.

`gradient`
An estimate of the gradient at the solution found.

References

Igel, C. and M. Huesken, 2003. Empirical evaluation of the improved Rprop learning algorithms. Neurocomputing 50: 105-123.

Riedmiller, M., 1994. Advanced supervised learning in multilayer perceptrons - from backpropagation to adaptive learning techniques. Computer Standards and Interfaces 16(3): 265-278.
