train_network: Train a Network using Stochastic Gradient Descent

Description Usage Arguments Details Value Author(s)

View source: R/train_network.R

Description

This function updates the network's weights and biases using Stochastic Gradient Descent (SGD), a faster alternative to regular (full-batch) Gradient Descent. At each iteration, the training data is shuffled and split into a number of batches according to batch_size; the cost function's gradient is then evaluated for each batch, and the weights and biases are updated. This process is repeated for the given number of iterations.
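The per-iteration loop described above (shuffle, split into batches, update per batch) can be sketched in plain R. This is a minimal illustration with a single weight and a squared-error cost, not the kappnet implementation:

```r
# Minimal SGD sketch: fit y = w * x by minimizing mean squared error.
# Illustration only; not the package's actual implementation.
sgd_sketch <- function(x, y, batch_size, iters = 1000, eta = 0.1) {
  w <- 0
  n <- length(x)
  for (iter in seq_len(iters)) {
    idx <- sample(n)  # shuffle training data every iteration
    batches <- split(idx, ceiling(seq_along(idx) / batch_size))
    for (batch in batches) {
      xb <- x[batch]
      yb <- y[batch]
      grad <- mean(2 * (w * xb - yb) * xb)  # gradient of the batch cost
      w <- w - eta * grad                   # gradient descent step
    }
  }
  w
}

set.seed(42)
x <- runif(100)
y <- 3 * x                       # noise-free data with true slope 3
w_hat <- sgd_sketch(x, y, batch_size = 10)
# w_hat converges toward the true slope, 3
```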

Usage

train_network(
  network,
  train_x,
  train_y,
  batch_size,
  iters = 1000,
  eta = 0.1,
  verbose = TRUE
)

Arguments

network

A kappnet network object. (list)

train_x

A matrix with inputs for the training examples. Its dimensions are expected to be (input_size, number_of_observations). (matrix)

train_y

A matrix with outputs for the training examples. Its dimensions are expected to be (output_size, number_of_observations). (matrix)

batch_size

The batch size. Note that this is not the input size, but the number of examples (observations) used to compute one round of weight and bias adjustments. (integer)

iters

Number of iterations. Default: 1000 (integer)

eta

The learning rate for the gradient descent. Default: 0.1 (numeric)

verbose

Whether the algorithm should print progress information while running. Default: TRUE (logical)

Details

Note that train_x and train_y must be "aligned": since columns represent observations, the i-th column of train_x must correspond to the i-th column of train_y.
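The alignment requirement and the batching behavior can be illustrated with plain R (matrix names here are examples, not package code). Because observations are columns, subsetting both matrices by the same column indices keeps each input paired with its output:

```r
# Illustration only: observations are columns; shuffled indices are
# split into groups of batch_size, and the last batch may be smaller.
set.seed(1)
n_obs <- 7
batch_size <- 3
train_x <- matrix(runif(2 * n_obs), nrow = 2)  # (input_size, n_obs)
train_y <- matrix(runif(1 * n_obs), nrow = 1)  # (output_size, n_obs)

idx <- sample(n_obs)  # shuffled observation indices
batches <- split(idx, ceiling(seq_along(idx) / batch_size))

# Subsetting both matrices by the same columns preserves alignment:
xb <- train_x[, batches[[1]], drop = FALSE]
yb <- train_y[, batches[[1]], drop = FALSE]
```

With 7 observations and batch_size = 3, this yields batches of sizes 3, 3, and 1.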

Value

A trained network.

Author(s)

Eduardo Kapp


eduardokapp/r_neural_network documentation built on Dec. 20, 2021, 3:21 a.m.