finetune_SGD_bn: Updates a deep neural network's parameters using stochastic gradient descent


Description

This function fine-tunes a DArch network using the stochastic gradient descent (SGD) approach.

Usage

finetune_SGD_bn(darch, trainData, targetData, learn_rate_weight = exp(-10),
  learn_rate_bias = exp(-10), learn_rate_gamma = exp(-10),
  errorFunc = meanSquareErr, with_BN = T)

Arguments

darch

a darch instance

trainData

training input

targetData

training target

learn_rate_weight

learning rate for the weight matrices

learn_rate_bias

learning rate for the biases

learn_rate_gamma

learning rate for the batch normalization gamma (scale) parameters

errorFunc

the error function to minimize during training

with_BN

logical value; TRUE to train the neural network with batch normalization

Value

a darch instance whose parameters have been updated by stochastic gradient descent
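
Examples

The following is a minimal sketch of a call to finetune_SGD_bn, not a definitive example. The new_dnn() constructor and the toy regression data are illustrative assumptions; only finetune_SGD_bn() and meanSquareErr are documented on this page.

library(deeplearning)

set.seed(1)
x <- matrix(rnorm(200), nrow = 100, ncol = 2)   # 100 samples, 2 inputs
y <- matrix(rowSums(x^2), ncol = 1)             # 1 regression target

dnn <- new_dnn(c(2, 10, 1))   # assumed constructor for a 2-10-1 darch network

# One fine-tuning pass with batch normalization enabled
dnn <- finetune_SGD_bn(dnn, x, y,
                       learn_rate_weight = exp(-10),
                       learn_rate_bias   = exp(-10),
                       learn_rate_gamma  = exp(-10),
                       errorFunc         = meanSquareErr,
                       with_BN           = TRUE)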

