minimizeClassifier: Conjugate gradient for a classification network


View source: R/minimizeClassifier.R

Description

This function trains a DArch classifier network with the conjugate gradient method.

Usage

minimizeClassifier(darch, trainData, targetData,
  cg.length = getParameter(".cg.length"),
  cg.switchLayers = getParameter(".cg.switchLayers"),
  dropout = getParameter(".darch.dropout"),
  dropConnect = getParameter(".darch.dropout.dropConnect"),
  matMult = getParameter(".matMult"), debugMode = getParameter(".debug"),
  ...)

Arguments

darch

An instance of the class DArch.

trainData

The training data matrix.

targetData

The labels for the training data.

cg.length

Number of line searches to perform.

cg.switchLayers

Indicates after how many epochs to train the full network instead of only the upper two layers.

dropout

See darch.dropout parameter of darch.

dropConnect

See darch.dropout.dropConnect parameter of darch.

matMult

Matrix multiplication function, internal parameter.

debugMode

Whether debug mode is enabled, internal parameter.

...

Further parameters.

Details

This function is built on the code from G. Hinton et al. (http://www.cs.toronto.edu/~hinton/MatlabForSciencePaper.html - last visited 2016-04-30) for the fine-tuning of deep belief nets. The original code is located in the files 'backpropclassify.m', 'CG_MNIST.m', and 'CG_CLASSIFY_INIT.m'. It implements fine-tuning for a classification network with backpropagation, using a direct translation to R of the minimize function by C. Rasmussen (available at http://www.gatsby.ucl.ac.uk/~edward/code/minimize/ - last visited 2016-04-30). The parameter cg.switchLayers switches between two training modes: as in the original code, only the top two layers are trained until the current epoch equals cg.switchLayers; afterwards the entire network is trained.

minimizeClassifier supports dropout but does not use the weight update function defined via the darch.weightUpdateFunction parameter of darch; weight decay, momentum, etc. are therefore not supported.
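Since momentum and weight decay are unavailable with this fine-tuning function, dropout is the main regularization option it honors. A minimal sketch of enabling it (assuming the darch package is installed; the dropout rate of 0.2 is an illustrative choice, not a recommendation):

```r
library(darch)

data(iris)
# Sketch: combine minimizeClassifier with dropout, which it supports,
# rather than momentum/weight decay, which it ignores.
model <- darch(Species ~ ., iris,
  darch.unitFunction = c("sigmoidUnit", "softmaxUnit"),
  darch.fineTuneFunction = "minimizeClassifier",
  darch.dropout = 0.2,      # passed through to the dropout argument
  cg.length = 3, cg.switchLayers = 5)
```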

Value

The trained DArch object.

See Also

darch, fineTuneDArch

Other fine-tuning functions: backpropagation, minimizeAutoencoder, rpropagation

Examples

## Not run: 
data(iris)
model <- darch(Species ~ ., iris,
 preProc.params = list(method = c("center", "scale")),
 darch.unitFunction = c("sigmoidUnit", "softmaxUnit"),
 darch.fineTuneFunction = "minimizeClassifier",
 cg.length = 3, cg.switchLayers = 5)

## End(Not run)
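A trained DArch object can then be used for prediction; a hedged sketch continuing the example above (assumes darch's predict method with type = "class", which returns class labels):

```r
## Not run:
# Predict class labels on the training data and compute accuracy.
preds <- predict(model, newdata = iris, type = "class")
mean(preds == iris$Species)

## End(Not run)
```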

darch documentation built on May 29, 2017, 8:14 p.m.
