dnnet.backprop.r | R Documentation
Back Propagation
dnnet.backprop.r(
  n.hidden, w.ini, load.param, initial.param,
  x, y, w, valid, x.valid, y.valid, w.valid,
  activate, activate_, n.epoch, n.batch, model.type,
  learning.rate, l1.reg, l2.reg, early.stop, early.stop.det,
  learning.rate.adaptive, rho, epsilon, beta1, beta2, loss.f
)
n.hidden |
A numeric vector giving the number of nodes in each hidden layer. |
w.ini |
Initial weight parameter. |
load.param |
Whether initial parameters are loaded into the model. |
initial.param |
The initial parameters to be loaded. |
x |
The input (predictor) matrix used for training. |
y |
The response (outcome) used for training. |
w |
Sample weights for the training observations. |
valid |
Whether a validation set is supplied. |
x.valid |
The input (predictor) matrix of the validation set. |
y.valid |
The response (outcome) of the validation set. |
w.valid |
Sample weights for the validation observations. |
activate |
The activation function. |
activate_ |
The first derivative of the activation function (see the sketch after this argument list). |
n.epoch |
Maximum number of epochs. |
n.batch |
Batch size for batch gradient descent. |
model.type |
Type of model. |
learning.rate |
Initial learning rate, 0.001 by default; if "adam" is chosen as the adaptive learning rate adjustment method, the default is 0.1. |
l1.reg |
Weight for L1 regularization (optional). |
l2.reg |
Weight for L2 regularization (optional). |
early.stop |
Indicates whether early stopping is used (only when a validation set exists). |
early.stop.det |
Number of epochs with increasing loss used to determine early stopping. |
learning.rate.adaptive |
Adaptive learning rate adjustment method; one of "constant", "adadelta", "adagrad", "momentum", or "adam". |
rho |
A parameter used in momentum. |
epsilon |
A parameter used in Adagrad and Adam. |
beta1 |
A parameter used in Adam. |
beta2 |
A parameter used in Adam. |
loss.f |
Loss function of choice. |
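The activate and activate_ arguments form a pair: an activation function and its first derivative. A minimal sketch, assuming both are supplied as plain R functions (the exact form expected by the package may differ), using the sigmoid activation:

sigmoid  <- function(z) 1 / (1 + exp(-z))   # activation function
sigmoid_ <- function(z) {                    # its first derivative
  s <- sigmoid(z)
  s * (1 - s)                                # d/dz sigmoid(z) = sigmoid(z) * (1 - sigmoid(z))
}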
Returns a list of results to dnnet.
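Illustrative example. This function is typically called internally by dnnet (see the returned value above), so a direct call is shown here only as a hypothetical sketch on simulated regression data; values such as model.type = "regression" and loss.f = "rmse" are assumptions and may not match what the package actually accepts.

set.seed(1)
x <- matrix(rnorm(200 * 5), 200, 5)            # simulated predictors
y <- as.numeric(x %*% rnorm(5) + rnorm(200))   # simulated continuous response
w <- rep(1, 200)                               # equal sample weights

idx     <- sample(200, 50)                     # hold out a validation set
x.valid <- x[idx, ]; y.valid <- y[idx]; w.valid <- w[idx]
x.train <- x[-idx, ]; y.train <- y[-idx]; w.train <- w[-idx]

fit <- dnnet.backprop.r(
  n.hidden = c(10, 5), w.ini = 0.1,
  load.param = FALSE, initial.param = NULL,
  x = x.train, y = y.train, w = w.train,
  valid = TRUE, x.valid = x.valid, y.valid = y.valid, w.valid = w.valid,
  activate = sigmoid, activate_ = sigmoid_,    # sketch above; assumes functions are accepted
  n.epoch = 100, n.batch = 32,
  model.type = "regression",                   # assumed value
  learning.rate = 0.1,                         # documented default when "adam" is chosen
  l1.reg = 0, l2.reg = 0,
  early.stop = TRUE, early.stop.det = 10,
  learning.rate.adaptive = "adam",
  rho = 0.9, epsilon = 1e-8, beta1 = 0.9, beta2 = 0.999,
  loss.f = "rmse"                              # assumed value
)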