Description
The procedure for DBM fitting consists of two parts: first, a stack of RBMs is pretrained in a greedy layerwise manner (see ds.monitored_stackrbms); then the weights of all layers are jointly trained using the general Boltzmann machine learning procedure. During pretraining and fine-tuning, monitoring data is collected by default and returned to the user. The trained model itself is stored on the server side (see parameter newobj).
Usage

ds.monitored_fitdbm(
  datasources,
  newobj = "dbm",
  data = "D",
  monitoring = "logproblowerbound",
  monitoringdata = data,
  monitoringpretraining = "reconstructionerror",
  monitoringdatapretraining = monitoringdata,
  nhiddens = NULL,
  epochs = NULL,
  nparticles = NULL,
  learningrate = NULL,
  learningrates = NULL,
  learningratepretraining = NULL,
  epochspretraining = NULL,
  batchsizepretraining = NULL,
  pretraining = NULL
)
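For illustration, a minimal call might look like the following sketch. Here, connections stands for a list of DataSHIELD connection objects from a previous login, and the training data is assumed to have been loaded into the server-side variable "D"; both names are placeholders.

# Sketch: fit a DBM with two hidden layers of 50 nodes each and
# monitor the log-probability lower bound during fine-tuning.
# "connections" is assumed to come from a previous DataSHIELD login.
monitoringResult <- ds.monitored_fitdbm(connections,
    newobj = "dbm1",       # store the trained model as "dbm1" on the server
    data = "D",            # server-side variable holding the training data
    nhiddens = c(50, 50),  # two hidden layers with 50 nodes each
    epochs = 20)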
Arguments

datasources
    A list of Opal object(s) as a handle to the server-side session.

newobj
    The name of the variable in which the trained DBM will be stored. Defaults to "dbm".

data
    The name of the variable that holds the data on the server side. Defaults to "D".

monitoring
    Name(s) of the monitoring options used for monitoring the fine-tuning. Defaults to "logproblowerbound".

monitoringdata
    A vector of names of server-side data sets that are to be used for monitoring.
monitoringpretraining
    Name of the monitoring option used for monitoring the pretraining. The options are the same as for training an RBM (see ds.monitored_fitrbm). Defaults to "reconstructionerror".
monitoringdatapretraining
    A vector of names of data sets that are to be used for monitoring the pretraining. By default, this is the same as monitoringdata.

nhiddens
    A vector that defines the number of nodes in the hidden layers of the DBM. The default value specifies two hidden layers with the same size as the visible layer.

epochs
    Number of training epochs for fine-tuning, defaults to 10.

nparticles
    Number of particles used for sampling during fine-tuning of the DBM, defaults to 100.

learningrate
    Learning rate for the joint training of the layers (= fine-tuning), using the learning algorithm for a general Boltzmann machine with mean-field approximation. By default, the learning rate for fine-tuning decays with the number of epochs, starting from the value given here.

learningrates
    A vector of learning rates, one for each epoch of fine-tuning (see the sketch following this list).
learningratepretraining
    Learning rate for pretraining, defaults to the value of learningrate.

epochspretraining
    Number of training epochs for pretraining, defaults to the value of epochs.
batchsizepretraining
    Batch size for pretraining, defaults to 1.
pretraining
    The arguments for layerwise pretraining can be specified for each layer individually. This is done via a vector of names of objects that have previously been defined by ds.bm.defineLayer (see the sketch following this list).
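The following sketch illustrates per-layer pretraining together with explicitly decaying fine-tuning learning rates. The exact signature of ds.bm.defineLayer is an assumption here; it is expected to store a layer definition under the given name on the server side, and connections is again a placeholder for the connection objects.

# Sketch: define the pretraining arguments for each layer individually.
# The argument names nhidden/epochs are assumptions for illustration.
ds.bm.defineLayer(connections, "layer1", nhidden = 100, epochs = 20)
ds.bm.defineLayer(connections, "layer2", nhidden = 50, epochs = 10)

result <- ds.monitored_fitdbm(connections,
    pretraining = c("layer1", "layer2"),  # previously defined layer objects
    epochs = 10,
    learningrates = 0.05 * 0.9^(0:9))     # one decaying rate per fine-tuning epoch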
Details

If the option dsBoltzmannMachines.shareModels has been set to TRUE by an administrator on the server side, the trained models themselves are returned in addition to the monitoring data.
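The returned monitoring data can be inspected on the client side. Since its exact structure is not documented here, a first step might be to examine it with str(); connections is again a placeholder for a list of DataSHIELD connections.

# Sketch: fit with default settings and inspect what was returned.
result <- ds.monitored_fitdbm(connections, data = "D")
str(result)  # show the structure of the collected monitoring data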