View source: R/weightUpdateFunctions.R
Description

On maxout layers, only the weights of active units are altered; additionally, all weights within a pool must be the same.
Usage

maxoutWeightUpdate(darch, layerIndex, weightsInc, biasesInc, ...,
  weightDecay = getParameter(".darch.weightDecay", 0, darch),
  poolSize = getParameter(".darch.maxout.poolSize", 2, darch))
Arguments

darch: DArch instance.

layerIndex: Layer index within the network.

weightsInc: Matrix containing scheduled weight updates from the fine-tuning algorithm.

biasesInc: Bias weight updates.

...: Additional parameters, not used.

weightDecay: Weights are multiplied by (1 - weightDecay) before each update.

poolSize: Size of the maxout pools; see the .darch.maxout.poolSize parameter.
Value

The updated weights.
References

Goodfellow, Ian J., David Warde-Farley, Mehdi Mirza, Aaron C. Courville, and Yoshua Bengio (2013). "Maxout Networks". In: Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, pp. 1319-1327. URL: http://jmlr.org/proceedings/papers/v28/goodfellow13.html
See Also

Other weight update functions: weightDecayWeightUpdate
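Examples

The package's original example code was lost in extraction. The following is a minimal sketch, not the darch implementation, of the update rule described above: it assumes pools are formed from consecutive weight-matrix columns and that the index of each pool's active unit is supplied by the caller (the hypothetical `activeUnits` argument).

```r
## Hypothetical sketch, not the darch implementation: applies weight
## decay and the scheduled increment only to the active unit of each
## maxout pool, then ties all weights in the pool to that unit's
## weights so the pool stays identical.
maxoutUpdateSketch <- function(weights, weightsInc, activeUnits,
                               weightDecay = 0, poolSize = 2)
{
  numPools <- ncol(weights) / poolSize

  for (p in seq_len(numPools))
  {
    pool <- ((p - 1) * poolSize + 1):(p * poolSize)
    active <- pool[activeUnits[p]]

    ## only the active unit's weights are altered
    weights[, active] <- (1 - weightDecay) * weights[, active] +
      weightsInc[, active]

    ## tie the pool: every unit shares the active unit's weights
    weights[, pool] <- weights[, active]
  }

  weights
}

weights <- matrix(0, nrow = 3, ncol = 4)  # 2 pools of size 2
weightsInc <- matrix(1:12, nrow = 3)
maxoutUpdateSketch(weights, weightsInc, activeUnits = c(2, 1))
```

After the call, both columns of each pool hold the same weight vector, reflecting the tying constraint stated in the description.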