This function fine-tunes a DArch network using stochastic gradient descent (SGD), optionally with batch normalization.

```
finetune_SGD_bn(darch, trainData, targetData, learn_rate_weight = exp(-10),
  learn_rate_bias = exp(-10), learn_rate_gamma = exp(-10),
  errorFunc = meanSquareErr, with_BN = T)
```

`darch`
a DArch instance

`trainData`
training input

`targetData`
training target

`learn_rate_weight`
learning rate for the weight matrices

`learn_rate_bias`
learning rate for the biases

`learn_rate_gamma`
learning rate for the batch-normalization gamma (scale) parameters

`errorFunc`
the error function to minimize during training

`with_BN`
logical; `TRUE` to train the neural net with batch normalization

Returns a darch instance whose parameters have been updated by stochastic gradient descent.
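A minimal usage sketch. It assumes the package defining `finetune_SGD_bn` is installed alongside the darch package, and that `newDArch` is available to construct the network; the layer sizes, XOR data, and learning rates here are illustrative assumptions, not values from this documentation.

```
# Sketch only: requires the darch package and the package providing
# finetune_SGD_bn to be installed and loaded.
library(darch)

# Toy XOR data (illustrative)
trainData  <- matrix(c(0,0, 0,1, 1,0, 1,1), nrow = 4, byrow = TRUE)
targetData <- matrix(c(0, 1, 1, 0), nrow = 4)

# Small network: 2 inputs, 4 hidden units, 1 output (hypothetical sizes)
darch <- newDArch(c(2, 4, 1), batchSize = 4)

# Fine-tune with SGD and batch normalization; learning rates raised
# above the exp(-10) defaults for this toy problem
darch <- finetune_SGD_bn(darch, trainData, targetData,
                         learn_rate_weight = exp(-5),
                         learn_rate_bias   = exp(-5),
                         learn_rate_gamma  = exp(-5),
                         errorFunc = meanSquareErr,
                         with_BN   = TRUE)
```

Note that `with_BN = TRUE` spells out the logical value that the usage line abbreviates as `T`; in R, `T` is merely a reassignable alias for `TRUE`, so the full form is safer in scripts.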
