Description:

Build a stacked autoencoder.
Usage:

SAENET.train(X.train, n.nodes, unit.type, lambda, beta, rho, epsilon,
             optim.method, rel.tol, max.iterations, rescale.flag,
             rescaling.offset)
Arguments:

X.train: A matrix of training data.

n.nodes: A vector of numbers giving the number of units in each hidden
         layer.

unit.type: Hidden-unit activation type, as per autoencode.

lambda: Vector of scalars indicating the weight decay for each layer,
        as per autoencode.

beta: Vector of scalars indicating the sparsity penalty for each layer,
      as per autoencode.

rho: Vector of scalars indicating the sparsity parameter for each
     layer, as per autoencode.

epsilon: Vector of scalars indicating the initialisation parameter for
         the weights in each layer, as per autoencode.

optim.method: Optimization method, as per autoencode.

rel.tol: Relative convergence tolerance, as per autoencode.

max.iterations: Maximum number of iterations for the optimizer, as per
                autoencode.

rescale.flag: A logical flag indicating whether the input data should
              be rescaled.

rescaling.offset: A small non-negative value used for rescaling.
                  Further description is available in the documentation
                  of autoencode.
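Because lambda, beta, rho and epsilon are documented above as per-layer vectors, a network with two hidden layers can be given a different value at each depth. A minimal sketch with hypothetical values (assuming SAENET and autoencoder are loaded and the iris data prepared as in the Examples below):

## Hypothetical per-layer hyperparameters: one entry per hidden layer.
fit <- SAENET.train(as.matrix(iris[, 1:4]), n.nodes = c(5, 3),
                    lambda = c(1e-5, 1e-6), beta = c(1e-5, 1e-4),
                    rho = c(0.01, 0.05), epsilon = c(0.01, 0.01))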
Value:

An object of class SAENET containing the following elements for each
layer of the stacked autoencoder:

ae.out: An object of class autoencoder, as returned by autoencode for
        that layer.

X.output: In layers subsequent to the first, a matrix containing the
          activations of the hidden neurons.
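For illustration, a minimal sketch of retrieving the learned features from the returned object, assuming (per the description above) it is a list with one element per layer, each carrying ae.out and X.output:

## 'output' as produced in the Examples below. The last element
## corresponds to the deepest layer of the stack.
final.layer <- output[[length(output)]]
features <- final.layer$X.output   # hidden-neuron activations
dim(features)                      # one row per training observation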
Examples:

library(SAENET)
library(autoencoder)
data(iris)
#### Train a stacked sparse autoencoder with a (5,3) architecture and
#### a relatively minor sparsity penalty. Try experimenting with the
#### lambda and beta parameters if you haven't worked with sparse
#### autoencoders before - it's worth inspecting the final layer
#### to ensure that output activations haven't simply converged to the
#### value of rho that you gave (which is the desired average
#### activation level). If the lambda/beta parameters are set too high,
#### this is likely to happen.
output <- SAENET.train(as.matrix(iris[, 1:4]), n.nodes = c(5, 3),
                       lambda = 1e-5, beta = 1e-5, rho = 0.01, epsilon = 0.01)
autoencoding...
Optimizer counts:
function gradient
18 16
Optimizer: successful convergence.
Optimizer: convergence = 0, message =
J.init = 25.36306, J.final = 19.94066, mean(rho.hat.final) = 0.999901
autoencoding...
Optimizer counts:
function gradient
175 172
Optimizer: successful convergence.
Optimizer: convergence = 0, message =
J.init = 0.6248543, J.final = 1.872471e-07, mean(rho.hat.final) = 0.01004466
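Following the advice in the example's comments, it is worth verifying that the final layer has not simply collapsed to the target activation rho. A minimal sketch of that check, assuming output[[2]]$X.output holds the second (final) layer's hidden activations:

acts <- output[[2]]$X.output
colMeans(acts)       # per-unit mean activation; compare against rho = 0.01
apply(acts, 2, sd)   # near-zero spread in every unit suggests collapse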