finetune_SGD_bn: Updates a deep neural network's parameters using stochastic gradient descent


Description

This function fine-tunes a DArch network using the stochastic gradient descent (SGD) approach.

Usage

finetune_SGD_bn(darch, trainData, targetData, learn_rate_weight = exp(-10),
  learn_rate_bias = exp(-10), learn_rate_gamma = exp(-10),
  errorFunc = meanSquareErr, with_BN = T)

Arguments

darch

a darch instance

trainData

training input

targetData

training target

learn_rate_weight

learning rate for the weight matrices

learn_rate_bias

learning rate for the biases

learn_rate_gamma

learning rate for the batch-normalization gamma (scale) parameters

errorFunc

the error function to minimize during training

with_BN

logical value; TRUE to train the neural network with batch normalization

Value

a darch instance with parameters updated with stochastic gradient descent
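A minimal usage sketch is shown below. The call to newDArch() (from the darch package), the XOR-style toy data, and the learning-rate values are illustrative assumptions, not part of this documentation; adapt them to your own network and data.

```r
## Sketch: fine-tuning a small network with batch normalization.
## Assumes the darch and deeplearning packages are installed;
## newDArch() and the data below are illustrative only.
library(darch)
library(deeplearning)

## Toy XOR-style training set: 4 examples, 2 inputs, 1 target.
trainData  <- matrix(c(0, 0,
                       0, 1,
                       1, 0,
                       1, 1), nrow = 4, byrow = TRUE)
targetData <- matrix(c(0, 1, 1, 0), nrow = 4)

## Hypothetical 2-4-1 network (constructor signature may differ
## across darch versions).
darch <- newDArch(c(2, 4, 1), batchSize = 4)

## One fine-tuning pass with batch normalization enabled.
darch <- finetune_SGD_bn(darch, trainData, targetData,
                         learn_rate_weight = exp(-8),
                         learn_rate_bias   = exp(-8),
                         learn_rate_gamma  = exp(-8),
                         errorFunc = meanSquareErr,
                         with_BN   = TRUE)
```

In practice this function is called repeatedly (once per mini-batch or epoch) inside a training loop, with the returned darch instance fed back in on each iteration.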
