Man pages for darch
Package for Deep Architectures and Restricted Boltzmann Machines

addLayer: Adds a layer to the 'DArch' object
addLayer-DArch-method: Adds a layer to the 'DArch' object
addLayerField: Adds a field to a layer
addLayerField-DArch-method: Adds a field to a layer
backpropagation: Backpropagation learning function
contr.ltfr: Wrapper for 'contr.ltfr'
createDataSet: Create data set using data, targets, a formula, and possibly...
createDataSet-ANY-ANY-missing-DataSet-method: Create new 'DataSet' by filling an existing one with new...
createDataSet-ANY-ANY-missing-missing-method: Create 'DataSet' using data and targets.
createDataSet-ANY-missing-formula-missing-method: Constructor function for 'DataSet' objects.
crossEntropyError: Cross entropy error function
darch: Fit a deep neural network
darchBench: Benchmarking wrapper for 'darch'
DArch-class: Class for deep architectures
darchModelInfo: Creates a custom caret model for 'darch'.
darchTest: Test classification network.
DataSet: Class for specifying datasets.
exponentialLinearUnit: Exponential linear unit (ELU) function with unit derivatives.
fineTuneDArch: Fine tuning function for the deep architecture
fineTuneDArch-DArch-method: Fine tuning function for the deep architecture
generateDropoutMask: Dropout mask generator function.
generateRBMs-DArch-method: Generates the RBMs for the pre-training.
generateWeightsGlorotNormal: Glorot normal weight initialization
generateWeightsGlorotUniform: Glorot uniform weight initialization
generateWeightsHeNormal: He normal weight initialization
generateWeightsHeUniform: He uniform weight initialization
generateWeightsNormal: Generates a weight matrix using rnorm.
generateWeightsUniform: Generates a weight matrix using runif
getDropoutMask: Returns the dropout mask for the given layer
getMomentum: Returns the current momentum of the 'Net'.
linearUnit: Linear unit function with unit derivatives.
linearUnitRbm: Calculates the linear neuron output with no transfer function
loadDArch: Loads a DArch network
makeStartEndPoints: Makes start- and end-points for the batches.
maxoutUnit: Maxout / LWTA unit function
maxoutWeightUpdate: Updates the weight on maxout layers
minimize: Minimize a differentiable multivariate function.
minimizeAutoencoder: Conjugate gradient for an autoencoder network
minimizeClassifier: Conjugate gradient for a classification network
mseError: Mean squared error function
Net: Abstract class for neural networks.
newDArch: Constructor function for 'DArch' objects.
plot.DArch: Plot 'DArch' statistics or structure.
predict.DArch: Forward-propagate data.
preTrainDArch: Pre-trains a 'DArch' network
preTrainDArch-DArch-method: Pre-trains a 'DArch' network
print.DArch: Print 'DArch' details.
provideMNIST: Provides MNIST data set in the given folder.
RBM: Class for restricted Boltzmann machines
rbmUpdate: Function for updating the weights and biases of an 'RBM'
readMNIST: Function for generating .RData files of the MNIST Database
rectifiedLinearUnit: Rectified linear unit function with unit derivatives.
resetRBM: Resets the weights and biases of the 'RBM' object
rmseError: Root-mean-square error function
rpropagation: Resilient backpropagation training for deep architectures.
runDArch: Forward-propagates data through the network
runDArchDropout: Forward-propagate data through the network with dropout...
saveDArch: Saves a DArch network
setDarchParams: Set 'DArch' parameters
setDropoutMask-set: Set the dropout mask for the given layer.
setLogLevel: Set the log level.
show-DArch-method: Print 'DArch' details.
sigmoidUnit: Sigmoid unit function with unit derivatives.
sigmoidUnitRbm: Calculates the RBM neuron output with the sigmoid function
softmaxUnit: Softmax unit function with unit derivatives.
softplusUnit: Softplus unit function with unit derivatives.
tanhUnit: Continuous Tan-Sigmoid unit function.
tanhUnitRbm: Calculates the neuron output with the hyperbolic tangent...
trainRBM: Trains an 'RBM' with contrastive divergence
validateDataSet: Validate 'DataSet'
validateDataSet-DataSet-method: Validate 'DataSet'
weightDecayWeightUpdate: Updates the weight using weight decay.
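
For orientation, a minimal usage sketch tying the main entry points listed above together (darch and predict.DArch). It assumes the formula interface and the parameter names layers, rbm.numEpochs, and darch.numEpochs as documented for the package; exact argument names and defaults may differ between versions.

    library(darch)

    # Fit a small network on iris: 4 inputs, one hidden layer of 20 units, 3 outputs.
    # A non-zero rbm.numEpochs requests RBM pre-training before fine-tuning;
    # darch.numEpochs controls the number of fine-tuning (backpropagation) epochs.
    model <- darch(Species ~ ., iris,
                   layers = c(4, 20, 3),
                   rbm.numEpochs = 5,
                   darch.numEpochs = 30)

    # Forward-propagate data through the trained network to obtain class labels
    # (type = "class" is assumed here; see predict.DArch for the supported types).
    predictions <- predict(model, newdata = iris, type = "class")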