View source: R/backpropagation.R
Description

This function provides the backpropagation algorithm for deep architectures.
Usage

backpropagation(darch, trainData, targetData,
  bp.learnRate = getParameter(".bp.learnRate", rep(1, times =
    length(darch@layers))),
  bp.learnRateScale = getParameter(".bp.learnRateScale"),
  nesterovMomentum = getParameter(".darch.nesterovMomentum"),
  dropout = getParameter(".darch.dropout", rep(0, times = length(darch@layers)
    + 1), darch), dropConnect = getParameter(".darch.dropout.dropConnect"),
  matMult = getParameter(".matMult"), debugMode = getParameter(".debug", F),
  ...)
Arguments

darch
  An instance of the class DArch.

trainData
  The training data (inputs).

targetData
  The target data (outputs).

bp.learnRate
  Learning rates for backpropagation; the length is either one or the same as the number of weight matrices, when using different learning rates for each layer.

bp.learnRateScale
  The learning rate is multiplied by this value after each epoch.

nesterovMomentum
  See the darch.nesterovMomentum parameter of darch.

dropout
  See the darch.dropout parameter of darch.

dropConnect
  See the darch.dropout.dropConnect parameter of darch.

matMult
  Matrix multiplication function, internal parameter.

debugMode
  Whether debug mode is enabled, internal parameter.

...
  Further parameters.
Details

The only backpropagation-specific, user-relevant parameters are bp.learnRate and bp.learnRateScale; they can be passed to the darch function when enabling backpropagation as the fine-tuning function. bp.learnRate defines the backpropagation learning rate and can either be specified as a single scalar or as a vector with one entry for each weight matrix, allowing for per-layer learning rates. bp.learnRateScale is a single scalar which contains a scaling factor for the learning rate(s) which will be applied after each epoch. Backpropagation supports dropout and uses the weight update function as defined via the darch.weightUpdateFunction parameter of darch.
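To illustrate these parameters, here is a minimal, hypothetical sketch of enabling backpropagation as the fine-tuning function through darch. Only bp.learnRate and bp.learnRateScale are described on this page; the remaining argument names (layers, darch.fineTuneFunction, darch.numEpochs) and the predict() call are assumptions about the darch() interface and should be checked against the installed package version.

library(darch)  # assumes the darch package is installed

data(iris)

# Three layers (4 input, 20 hidden, 3 output units) give two weight
# matrices, so bp.learnRate may be a vector of length two.
model <- darch(Species ~ ., iris,
  layers = c(4, 20, 3),
  darch.fineTuneFunction = "backpropagation",
  bp.learnRate = c(0.1, 0.05),   # per-layer learning rates (illustrative values)
  bp.learnRateScale = 0.99,      # shrink the learning rates by 1% after each epoch
  darch.numEpochs = 30)

predictions <- predict(model, newdata = iris, type = "class")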
Value

The trained deep architecture.
References

Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323, 533-536. DOI: 10.1038/323533a0.
See Also

Other fine-tuning functions: minimizeAutoencoder, minimizeClassifier, rpropagation