Description

Performs fine-tuning on the DBN network with backpropagation.

Usage
train(x, data, miniters = 100, maxiters = 1000, batchsize = 100,
optim.control = list(),
continue.function = continue.function.exponential,
continue.function.frequency = 100, continue.stop.limit = 3,
diag = list(rate = diag.rate, data = diag.data, f = diag.function),
diag.rate = c("none", "each", "accelerate"), diag.data = NULL,
diag.function = NULL, n.proc = detectCores() - 1, ...)
train.progress

Arguments
x: the DBN to fine-tune.

data: the training data.

miniters, maxiters: minimum and maximum number of iterations to perform.

batchsize: the size of the batches on which error and gradients are averaged.

optim.control: control arguments for the optim function that are not typically changed in normal operation: maxit, type, trace, steplength, stepredn, acctol, reltest, abstol, intol, setstep. Their default values are defined in TrainParameters.h.

continue.function: a function that can stop the training between miniters and maxiters if it returns FALSE.

continue.function.frequency: the frequency (in iterations) at which continue.function will be assessed.

continue.stop.limit: the number of consecutive times continue.function must return FALSE before the training is stopped.

diag, diag.rate, diag.data, diag.function: diagnostic specifications. See the Diagnostic specifications section.

n.proc: the number of cores to be used for Eigen computations.

...: ignored.
Format

train.progress is an object of class list of length 3.

Value

The fine-tuned DBN.

Diagnostic specifications
The specifications can be passed directly in a list with elements rate, data and f, or separately with the parameters diag.rate, diag.data and diag.function. The function must be of the following form:

function(rbm, batch, data, iter, batchsize, maxiters)

rbm: the RBM object after the training iteration.

batch: the batch that was used at that iteration.

data: the data provided in diag.data or diag$data.

iter: the training iteration number, starting from 0 (before the first iteration).

batchsize: the size of the batch.

maxiters: the target number of iterations.

Note the absence of the layer argument, which is available only in pretrain.
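As a sketch, a diagnostic function matching the signature above can simply report the training iteration; the name log.iterations and the printed format are illustrative, not part of the package:

```r
# Minimal diagnostic function following the documented signature.
# `rbm`, `batch` and `data` are ignored here; only the iteration is reported.
log.iterations <- function(rbm, batch, data, iter, batchsize, maxiters) {
  cat(sprintf("iteration %d of %d (batch size %d)\n", iter, maxiters, batchsize))
}

# It can be passed either as diag.function = log.iterations, or bundled in a
# diag list together with a rate and optional data:
diag.spec <- list(rate = "each", data = NULL, f = log.iterations)
```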
The following values of diag.rate or diag$rate are supported:

"none": the diag function is never called.

"each": the diag function is called before the first iteration, and at the end of each iteration.

"accelerate": the diag function is called before the first iteration, at each of the first 200 iterations, and then at a rate that slows down proportionally with the iteration number. It is always called at the last iteration.
Note that diag functions incur a slight overhead as they involve a callback to R and multiple object conversions. Setting diag.rate = "none"
removes any overhead.
Progress

train.progress is a convenient pre-built diagnostic specification that displays a progress bar.
Examples

data(pretrained.mnist)
## Not run:
# Fine-tune the DBN with backpropagation
trained.mnist <- train(unroll(pretrained.mnist), mnist$train$x, maxiters = 2000, batchsize = 1000,
optim.control = list(maxit = 10))
## End(Not run)
## Not run:
# Train with a progress bar
# In this case the overhead is nearly 0
diag <- list(rate = "each", data = NULL, f = function(rbm, batch, data, iter, batchsize, maxiters) {
if (iter == 0) {
DBNprogressBar <<- txtProgressBar(min = 0, max = maxiters, initial = 0,
width = NA, style = 3)
}
else if (iter == maxiters) {
setTxtProgressBar(DBNprogressBar, iter)
close(DBNprogressBar)
}
else {
setTxtProgressBar(DBNprogressBar, iter)
}
})
trained.mnist <- train(unroll(pretrained.mnist), mnist$train$x, maxiters = 1000, batchsize = 100,
continue.function = continue.function.always, diag = diag)
# Equivalent to using train.progress
trained.mnist <- train(unroll(pretrained.mnist), mnist$train$x, maxiters = 1000, batchsize = 100,
continue.function = continue.function.always, diag = train.progress)
## End(Not run)