View source: R/train_network.R
Description

This function updates the network's weights and biases using stochastic gradient descent (SGD), a faster alternative to plain (batch) gradient descent. On each iteration, the training data is shuffled and split into batches according to batch_size; the cost function's gradient is then evaluated on each batch in turn, updating the weights and biases. This process is repeated for the requested number of iterations.
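Conceptually, the training loop looks like the sketch below. This is an illustration of the idea only, not the package's source; update_batch is a hypothetical helper standing in for kappnet's internal gradient step.

    # Illustrative sketch of the mini-batch SGD loop described above.
    # `update_batch()` is a hypothetical stand-in for the package's
    # internal gradient step; it is not part of kappnet's API.
    sgd_sketch <- function(network, train_x, train_y, batch_size, iters, eta) {
      n_obs <- ncol(train_x)
      for (i in seq_len(iters)) {
        shuffled <- sample(n_obs)  # shuffle observation indices
        # split indices into consecutive chunks of size batch_size
        batches <- split(shuffled, ceiling(seq_along(shuffled) / batch_size))
        for (idx in batches) {
          # one weight/bias update using only this batch's columns
          network <- update_batch(network,
                                  train_x[, idx, drop = FALSE],
                                  train_y[, idx, drop = FALSE],
                                  eta)
        }
      }
      network
    }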
Usage

train_network(
  network,
  train_x,
  train_y,
  batch_size,
  iters = 1000,
  eta = 0.1,
  verbose = TRUE
)
Arguments

network     A kappnet network object. (list)

train_x     A matrix of inputs for the training examples. Its dimensions are expected to be (input_size, number_of_observations). (matrix)

train_y     A matrix of outputs for the training examples. Its dimensions are expected to be (output_size, number_of_observations). (matrix)

batch_size  The batch size. Note that this is not the input size, but the number of examples (observations) used to compute one round of weight and bias adjustments. (integer)

iters       Number of iterations. Default: 1000. (integer)

eta         The learning rate for gradient descent. Default: 0.1. (numeric)

verbose     Whether the algorithm should print progress information while running. Default: TRUE. (logical)
Details

Note that train_x and train_y must be "aligned": since columns represent observations, the first column of train_x must correspond to the first column of train_y, and so on.
Value

A trained network.
Author(s)

Eduardo Kapp
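Examples

A minimal usage sketch. The constructor name create_network and the layer sizes are assumptions for illustration; check the package for the actual constructor.

    ## Not run:
    # `create_network` is a hypothetical constructor, assumed here
    # to build a 2-4-1 network; substitute the package's real one.
    net <- create_network(c(2, 4, 1))
    train_x <- matrix(runif(2 * 100), nrow = 2)  # (input_size, n_obs)
    train_y <- matrix(runif(1 * 100), nrow = 1)  # (output_size, n_obs), columns aligned with train_x
    trained <- train_network(net, train_x, train_y,
                             batch_size = 10, iters = 500, eta = 0.1)
    ## End(Not run)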