addLayer | Adds a layer to the 'DArch' object |
addLayer-DArch-method | Adds a layer to the 'DArch' object |
addLayerField | Adds a field to a layer |
addLayerField-DArch-method | Adds a field to a layer |
backpropagation | Backpropagation learning function |
contr.ltfr | Wrapper for 'contr.ltfr' |
createDataSet | Create data set using data, targets, a formula, and possibly... |
createDataSet-ANY-ANY-missing-DataSet-method | Create new 'DataSet' by filling an existing one with new... |
createDataSet-ANY-ANY-missing-missing-method | Create 'DataSet' using data and targets. |
createDataSet-ANY-missing-formula-missing-method | Constructor function for 'DataSet' objects. |
crossEntropyError | Cross entropy error function |
darch | Fit a deep neural network |
darchBench | Benchmarking wrapper for 'darch' |
DArch-class | Class for deep architectures |
darchModelInfo | Creates a custom caret model for 'darch'. |
darchTest | Test classification network. |
DataSet | Class for specifying datasets. |
exponentialLinearUnit | Exponential linear unit (ELU) function with unit derivatives. |
fineTuneDArch | Fine tuning function for the deep architecture |
fineTuneDArch-DArch-method | Fine tuning function for the deep architecture |
generateDropoutMask | Dropout mask generator function. |
generateRBMs-DArch-method | Generates the RBMs for pre-training. |
generateWeightsGlorotNormal | Glorot normal weight initialization |
generateWeightsGlorotUniform | Glorot uniform weight initialization |
generateWeightsHeNormal | He normal weight initialization |
generateWeightsHeUniform | He uniform weight initialization |
generateWeightsNormal | Generates a weight matrix using rnorm. |
generateWeightsUniform | Generates a weight matrix using runif. |
getDropoutMask | Returns the dropout mask for the given layer |
getMomentum | Returns the current momentum of the 'Net'. |
linearUnit | Linear unit function with unit derivatives. |
linearUnitRbm | Calculates the linear neuron output with no transfer function |
loadDArch | Loads a 'DArch' network |
makeStartEndPoints | Makes start- and end-points for the batches. |
maxoutUnit | Maxout / LWTA unit function |
maxoutWeightUpdate | Updates the weights on maxout layers |
minimize | Minimize a differentiable multivariate function. |
minimizeAutoencoder | Conjugate gradient for an autoencoder network |
minimizeClassifier | Conjugate gradient for a classification network |
mseError | Mean squared error function |
Net | Abstract class for neural networks. |
newDArch | Constructor function for 'DArch' objects. |
plot.DArch | Plot 'DArch' statistics or structure. |
predict.DArch | Forward-propagate data. |
preTrainDArch | Pre-trains a 'DArch' network |
preTrainDArch-DArch-method | Pre-trains a 'DArch' network |
print.DArch | Print 'DArch' details. |
provideMNIST | Provides the MNIST data set in the given folder. |
RBM | Class for restricted Boltzmann machines |
rbmUpdate | Function for updating the weights and biases of an 'RBM' |
readMNIST | Function for generating .RData files of the MNIST Database |
rectifiedLinearUnit | Rectified linear unit function with unit derivatives. |
resetRBM | Resets the weights and biases of the 'RBM' object |
rmseError | Root-mean-square error function |
rpropagation | Resilient backpropagation training for deep architectures. |
runDArch | Forward-propagates data through the network |
runDArchDropout | Forward-propagate data through the network with dropout... |
saveDArch | Saves a 'DArch' network |
setDarchParams | Set 'DArch' parameters |
setDropoutMask-set | Set the dropout mask for the given layer. |
setLogLevel | Set the log level. |
show-DArch-method | Print 'DArch' details. |
sigmoidUnit | Sigmoid unit function with unit derivatives. |
sigmoidUnitRbm | Calculates the RBM neuron output with the sigmoid function |
softmaxUnit | Softmax unit function with unit derivatives. |
softplusUnit | Softplus unit function with unit derivatives. |
tanhUnit | Continuous Tan-Sigmoid unit function. |
tanhUnitRbm | Calculates the neuron output with the hyperbolic tangent function |
trainRBM | Trains an 'RBM' with contrastive divergence |
validateDataSet | Validate 'DataSet' |
validateDataSet-DataSet-method | Validate 'DataSet' |
weightDecayWeightUpdate | Updates the weights using weight decay. |
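The topics above form the help index of the darch R package. For orientation, the sketch below shows how the main entry points fit together: fitting a classifier with darch() and forward-propagating new data with predict(). It is a minimal sketch, not a verbatim package example; the argument names layers, rbm.numEpochs, darch.numEpochs, and type = "class" are assumptions that should be checked against the 'darch' and 'predict.DArch' topics listed above.

```r
# Minimal sketch of the typical darch workflow; argument names
# ('layers', 'rbm.numEpochs', 'darch.numEpochs', 'type') are
# assumptions to verify against the package documentation.
library(darch)

data(iris)

# Fit a small network on iris: 4 input units, one hidden layer of
# 20 units, 3 output classes. A non-zero rbm.numEpochs requests RBM
# pre-training before the backpropagation fine-tuning phase.
model <- darch(Species ~ ., data = iris,
               layers = c(4, 20, 3),
               rbm.numEpochs = 5,
               darch.numEpochs = 30)

# Forward-propagate the data through the trained network, return
# class labels, and compute a simple training-set accuracy.
predictions <- predict(model, newdata = iris, type = "class")
mean(predictions == iris$Species)
```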