Updates a deep neural network's parameters using the stochastic gradient descent method and batch normalization

Description

This function fine-tunes a DArch network using the stochastic gradient descent (SGD) approach.

Usage

finetune_SGD_bn(darch, trainData, targetData, learn_rate_weight = exp(-10),
  learn_rate_bias = exp(-10), learn_rate_gamma = exp(-10),
  errorFunc = meanSquareErr, with_BN = T)

Arguments

darch

a darch instance

trainData

training input

targetData

training target

learn_rate_weight

learning rate for the weight matrices

learn_rate_bias

learning rate for the biases

learn_rate_gamma

learning rate for the gamma (batch normalization scale) parameters

errorFunc

the error function to minimize during training

with_BN

logical value; TRUE to train the neural network with batch normalization, FALSE otherwise

Value

a darch instance with its parameters updated by stochastic gradient descent
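
Examples

A minimal sketch of calling this function, not taken from the package's own documentation. The package name (deeplearning) and the network constructor new_dnn() are assumptions here; substitute whatever constructor your installation provides. The default learning rates exp(-10) (about 4.5e-5) are kept.

library(deeplearning)  # assumed package providing finetune_SGD_bn()

# Toy regression data: 100 samples, 2 input features, 1 target
input  <- matrix(runif(200), nrow = 100, ncol = 2)
target <- matrix(input[, 1] + input[, 2], ncol = 1)

# Build a small network (assumed constructor): 2 inputs, 10 hidden units, 1 output
dnn <- new_dnn(c(2, 10, 1))

# Fine-tune with SGD, minimizing mean squared error, with batch normalization on
dnn <- finetune_SGD_bn(dnn, input, target,
                       learn_rate_weight = exp(-10),
                       learn_rate_bias   = exp(-10),
                       learn_rate_gamma  = exp(-10),
                       errorFunc         = meanSquareErr,
                       with_BN           = TRUE)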
