Description
Obtain the compressed representation of new data for specified layers from a stacked autoencoder.
Usage

SAENET.predict(h, new.data, layers = c(1), all.layers = FALSE)
Arguments

h
The object returned from SAENET.train().

new.data
A matrix of new data to be passed through the encoder.

layers
A numeric vector indicating which layers of the stacked autoencoder to return output for.

all.layers
A boolean value indicating whether to override layers and return the encoded output for all layers.
Value

A list in which each element corresponds to the output of predict.autoencoder() (from the autoencoder package) for the specified layers of the stacked autoencoder.
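As a sketch of how this list might be inspected (assuming an object returned by SAENET.train() as in the example below; the X.output element name mirrors the list returned by predict.autoencoder() and is an assumption here):

```r
# Hypothetical sketch: 'output' is the result of SAENET.train() below.
codes <- SAENET.predict(output, as.matrix(iris[101:150, 1:4]), layers = c(2))
str(codes)                # one list element per requested layer
codes[[1]]$X.output       # assumed to hold the encoded representation
```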
Examples

library(SAENET)
library(autoencoder)
data(iris)
#### Train a stacked sparse autoencoder with a (5,3) architecture and
#### a relatively minor sparsity penalty. Try experimenting with the
#### lambda and beta parameters if you haven't worked with sparse
#### autoencoders before - it's worth inspecting the final layer
#### to ensure that output activations haven't simply converged to the value of
#### rho that you gave (which is the desired activation level on average).
#### If the lambda/beta parameters are set high, this is likely to happen.
output <- SAENET.train(as.matrix(iris[1:100,1:4]), n.nodes = c(5,3),
lambda = 1e-5, beta = 1e-5, rho = 0.01, epsilon = 0.01)
predict.out <- SAENET.predict(output, as.matrix(iris[101:150,1:4]), layers = c(2))
autoencoding...
Optimizer counts:
function gradient
19 17
Optimizer: successful convergence.
Optimizer: convergence = 0, message =
J.init = 20.07615, J.final = 15.44573, mean(rho.hat.final) = 0.9998588
autoencoding...
Optimizer counts:
function gradient
191 187
Optimizer: successful convergence.
Optimizer: convergence = 0, message =
J.init = 0.6235624, J.final = 7.775034e-08, mean(rho.hat.final) = 0.01018348
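Following the advice in the comments above, it is worth checking that the encoded activations have not simply collapsed to rho. A minimal sketch of that inspection, again assuming the X.output element name from predict.autoencoder():

```r
# If lambda/beta were set too high, the mean activation will sit at
# roughly rho (0.01 in this example) across all inputs.
mean(predict.out[[1]]$X.output)
hist(predict.out[[1]]$X.output, main = "Layer 2 activations")
```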