train: Fine-tunes the DeepBeliefNet


View source: R/train.R

Description

Performs fine-tuning on the DBN network with backpropagation.

Usage

train(x, data, miniters = 100, maxiters = 1000, batchsize = 100,
  optim.control = list(),
  continue.function = continue.function.exponential,
  continue.function.frequency = 100, continue.stop.limit = 3,
  diag = list(rate = diag.rate, data = diag.data, f = diag.function),
  diag.rate = c("none", "each", "accelerate"), diag.data = NULL,
  diag.function = NULL, n.proc = detectCores() - 1, ...)

train.progress

Arguments

x

the DBN

data

the training data

miniters, maxiters

minimum and maximum number of iterations to perform

batchsize

the size of the batches on which error & gradients are averaged

optim.control

control arguments for the optim function that are not typically changed for normal operation. The parameters are: maxit, type, trace, steplength, stepredn, acctol, reltest, abstol, intol, setstep. Their default values are defined in TrainParameters.h.

continue.function

a function that can stop the training between miniters and maxiters if it returns FALSE. By default, continue.function.exponential is used. An alternative is continue.function.always, which always returns TRUE and thus carries on with the training until maxiters is reached. A user-supplied function must accept (error, iter, batchsize) as input and return a logical of length 1; the training stops when it returns FALSE.
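As a sketch, a custom stopping rule with this signature could look as follows (the function name and the error threshold are illustrative, not part of the package):

```r
# Hypothetical stopping rule: keep training while the mean error of the
# current batch stays above a fixed threshold. train() evaluates it every
# continue.function.frequency iterations with (error, iter, batchsize).
continue.function.threshold <- function(error, iter, batchsize) {
	mean(error) > 0.05
}

# It would then be passed as:
# train(net, data, continue.function = continue.function.threshold)
```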

continue.function.frequency

the frequency at which continue.function will be assessed.

continue.stop.limit

the number of consecutive times continue.function must return FALSE before the training is stopped. For example, 1 stops the training as soon as continue.function returns FALSE, whereas Inf ensures the result of continue.function is never enforced (though the function is still executed). The default is 3, so the training continues until 3 consecutive calls of continue.function have returned FALSE, making the stopping decision more robust.

diag, diag.rate, diag.data, diag.function

diagnostic specifications. See the Diagnostic specifications section below.

n.proc

number of cores to be used for Eigen computations

...

ignored

Format

train.progress is an object of class list of length 3.

Value

the fine-tuned DBN

Diagnostic specifications

The specifications can be passed directly in a list with elements rate, data and f, or separately with parameters diag.rate, diag.data and diag.function. The function must be of the following form: function(rbm, batch, data, iter, batchsize, maxiters)

Note the absence of the layer argument that is available only in pretrain.
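For illustration, a minimal diagnostic function with this signature could simply report the progress (this sketch is hypothetical and uses only the iter and maxiters arguments):

```r
# Minimal diagnostic callback: print the current iteration.
# rbm is the network under training, batch the current mini-batch,
# and data is whatever was supplied as diag.data (all unused here).
diag.print <- function(rbm, batch, data, iter, batchsize, maxiters) {
	cat(sprintf("iteration %d of %d\n", iter, maxiters))
}

# Passed either bundled in a list:
#   train(net, data, diag = list(rate = "each", data = NULL, f = diag.print))
# or through the separate arguments:
#   train(net, data, diag.rate = "each", diag.function = diag.print)
```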

The following diag.rate or diag$rate values are supported (they match the choices shown in the Usage section): "none", where no diagnostic function is called; "each", where the function is called at every iteration; and "accelerate", where the function is called frequently early in the training and then at increasingly spaced iterations.

Note that diag functions incur a slight overhead, as they involve a callback to R and multiple object conversions. Setting diag.rate = "none" removes this overhead entirely.

Progress

train.progress is a convenient pre-built diagnostic specification that displays a progress bar.

Examples

data(pretrained.mnist)

## Not run: 
# Fine-tune the DBN with backpropagation
trained.mnist <- train(unroll(pretrained.mnist), mnist$train$x, maxiters = 2000, batchsize = 1000,
                       optim.control = list(maxit = 10))

## End(Not run)
## Not run: 
# Train with a progress bar
# In this case the overhead is nearly 0
diag <- list(rate = "each", data = NULL, f = function(rbm, batch, data, iter, batchsize, maxiters) {
	if (iter == 0) {
		# First call: create the progress bar in the global environment
		DBNprogressBar <<- txtProgressBar(min = 0, max = maxiters, initial = 0,
		                                  width = NA, style = 3)
	} else if (iter == maxiters) {
		# Last call: update one final time, then close the bar
		setTxtProgressBar(DBNprogressBar, iter)
		close(DBNprogressBar)
	} else {
		setTxtProgressBar(DBNprogressBar, iter)
	}
})
trained.mnist <- train(unroll(pretrained.mnist), mnist$train$x, maxiters = 1000, batchsize = 100,
                       continue.function = continue.function.always, diag = diag)
# Equivalent to using train.progress
trained.mnist <- train(unroll(pretrained.mnist), mnist$train$x, maxiters = 1000, batchsize = 100,
                       continue.function = continue.function.always, diag = train.progress)

## End(Not run)

xrobin/DeepLearning documentation built on Sept. 18, 2020, 5:23 a.m.