This function uses the Adam algorithm to find the minimum of a (multi-)dimensional mathematical function. Adam combines both the average of the previous gradients (the momentum term) and the average of the squared gradients (the RMSProp term), each under exponential decay.
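A minimal R sketch of the update loop just described may help; the function name adam, the default argument values, and the finite-difference gradient helper numeric_grad are illustrative assumptions, not this package's actual implementation.

# Central-difference approximation of the gradient of f at x (assumed helper).
numeric_grad <- function(f, x, h = 1e-6) {
  sapply(seq_along(x), function(i) {
    e <- rep(0, length(x))
    e[i] <- h
    (f(x + e) - f(x - e)) / (2 * h)
  })
}

# Sketch of the Adam loop; argument names follow the list below, with
# phi1 as the squared-gradient (RMSProp) decay and phi2 as the momentum decay.
adam <- function(f, x0, max.iter = 1000, step.size = 0.01,
                 phi1 = 0.999, phi2 = 0.9, stop.grad = 1e-8) {
  x <- x0
  m <- rep(0, length(x0))  # decayed average of gradients (momentum term)
  v <- rep(0, length(x0))  # decayed average of squared gradients (RMSProp term)
  for (t in seq_len(max.iter)) {
    g <- numeric_grad(f, x)
    # One plausible reading of the stopping criterion: gradient norm below stop.grad.
    if (sqrt(sum(g^2)) < stop.grad) break
    m <- phi2 * m + (1 - phi2) * g    # update momentum term
    v <- phi1 * v + (1 - phi1) * g^2  # update RMSProp term
    m.hat <- m / (1 - phi2^t)         # bias correction, as in standard Adam
    v.hat <- v / (1 - phi1^t)
    x <- x - step.size * m.hat / (sqrt(v.hat) + 1e-8)
  }
  x
}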
f          a (multi-)dimensional function to be optimized.
x0         the starting point of the optimization.
max.iter   the maximum number of iterations performed in the optimization.
step.size  the step size (sometimes referred to as the 'learning rate') of the optimization.
phi1       decay rate for the RMSProp term, i.e. the squared gradients.
phi2       decay rate for the momentum term, i.e. the previous gradients.
stop.grad  the stopping criterion for the gradient change.
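As a hypothetical usage example (the call mirrors the argument list above and reuses the assumed function name adam from the sketch), minimizing a two-dimensional quadratic bowl whose minimum sits at (3, -2):

f <- function(x) (x[1] - 3)^2 + (x[2] + 2)^2
adam(f, x0 = c(0, 0), max.iter = 5000, step.size = 0.1)
# Expected to return a point close to c(3, -2).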