Trains a deep belief network (DBN). A stack of RBMs is first greedily pretrained (unsupervised) with the StackRBM function; DBN then adds a supervised output layer, and the stacked RBM is fine-tuned on the supervised criterion using backpropagation.
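The unsupervised pretraining step can be illustrated with a minimal contrastive-divergence (CD-1) update, the rule each RBM layer is trained with before fine-tuning. This is a conceptual sketch on toy data, not the package's implementation (bias terms are omitted for brevity):

```r
set.seed(1)
sigmoid <- function(z) 1 / (1 + exp(-z))

n.vis <- 6; n.hid <- 4
x <- matrix(rbinom(10 * n.vis, 1, 0.5), nrow = 10)      # toy binary data
W <- matrix(rnorm(n.vis * n.hid, sd = 0.1), n.vis, n.hid)

# One CD-1 weight update: positive phase on the data, negative phase
# on a one-step reconstruction.
cd1 <- function(W, v0, lr = 0.1) {
  h0 <- sigmoid(v0 %*% W)        # hidden probabilities given the data
  v1 <- sigmoid(h0 %*% t(W))     # reconstruction of the visible layer
  h1 <- sigmoid(v1 %*% W)        # hidden probabilities given the reconstruction
  W + lr * (t(v0) %*% h0 - t(v1) %*% h1) / nrow(v0)
}

for (i in 1:100) W <- cd1(W, x)
```

After each layer is pretrained this way on the previous layer's hidden activations, the supervised output layer is added and all weights are adjusted with backpropagation.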
x
A matrix with binary features of shape samples * features.

y
A matrix with labels for the data (labels always need to be provided for training the DBN).

n.iter
The number of epochs to run backpropagation.

nodes
A vector with the number of hidden nodes at each layer.

learning.rate
The learning rate for supervised fine-tuning of the stacked RBM.

size.minibatch
The size of the minibatches used for training.

n.iter.pre
Passed on to the StackRBM function; defines how many epochs are used to pretrain each RBM layer.

learning.rate.pre
The pretraining learning rate, passed on to the StackRBM function.

verbose
Whether to print the training error at each epoch; printing will slow down the fitting.
Returns the fine-tuned DBN model, which can be used in the PredictDBN function.
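Prediction with the fine-tuned model amounts to a forward pass: sigmoid activations through the hidden layers, then the supervised output layer. A self-contained sketch with toy weights (the weight matrices and the `predict_dbn` helper here are illustrative, not the package's internal model format; see the PredictDBN help page for the actual interface):

```r
sigmoid <- function(z) 1 / (1 + exp(-z))
set.seed(2)

# Toy fine-tuned model: 6 inputs -> 4 hidden units -> 3 output classes
W1 <- matrix(rnorm(6 * 4, sd = 0.1), 6, 4)
W2 <- matrix(rnorm(4 * 3, sd = 0.1), 4, 3)

predict_dbn <- function(x, W1, W2) {
  h <- sigmoid(x %*% W1)   # hidden-layer activations
  scores <- h %*% W2       # output-layer scores (pre-softmax)
  max.col(scores)          # predicted class index per row
}

x <- matrix(rbinom(5 * 6, 1, 0.5), 5, 6)   # 5 toy binary samples
pred <- predict_dbn(x, W1, W2)
```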
# Load the MNIST data
data(MNIST)
# Train the DBN model
modDBN <- DBN(MNIST$trainX, MNIST$trainY, n.iter = 500, nodes = c(500, 300, 150),
              learning.rate = 0.5, size.minibatch = 10, n.iter.pre = 300,
              learning.rate.pre = 0.1, verbose = FALSE)
# Turn verbose on to check the learning progress
modDBN <- DBN(MNIST$trainX, MNIST$trainY, n.iter = 500, nodes = c(500, 300, 150),
              learning.rate = 0.5, size.minibatch = 10, n.iter.pre = 300,
              learning.rate.pre = 0.1, verbose = TRUE)