SAENET.train: Build a stacked Autoencoder.

View source: R/SAENET.R

Description

Build a stacked Autoencoder.

Usage

SAENET.train(X.train, n.nodes = c(4, 3, 2), unit.type = c("logistic",
  "tanh"), lambda, beta, rho, epsilon, optim.method = c("BFGS", "L-BFGS-B",
  "CG"), rel.tol = sqrt(.Machine$double.eps), max.iterations = 2000,
  rescale.flag = F, rescaling.offset = 0.001)

Arguments

X.train

A matrix of training data.

n.nodes

A numeric vector giving the number of units in each hidden layer.

unit.type

Hidden unit activation type, as per autoencode().

lambda

Vector of scalars indicating weight decay per layer as per autoencode().

beta

Vector of scalars indicating sparsity penalty per layer as per autoencode().

rho

Vector of scalars indicating sparsity parameter per layer as per autoencode().

epsilon

Vector of scalars indicating initialisation parameter for weights per layer as per autoencode().

optim.method

Optimization method as per optim().

rel.tol

Relative convergence tolerance, as per optim().

max.iterations

Maximum iterations for optim().

rescale.flag

A logical flag indicating whether input data should be rescaled.

rescaling.offset

A small non-negative value used as an offset when rescaling. Further details are given in the documentation of the autoencoder package.
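
The rescaling arguments can be combined with the other parameters in a single call. The following is a minimal, illustrative sketch (the hyperparameter values are arbitrary, and the iris data is borrowed from the Examples section below); with rescale.flag = TRUE the input matrix is rescaled before training, using rescaling.offset as described above:

library(SAENET)
library(autoencoder)
data(iris)

## Illustrative only: logistic units, input rescaled before training.
## The lambda, beta, rho and epsilon values here are arbitrary choices.
fit <- SAENET.train(as.matrix(iris[, 1:4]), n.nodes = c(5, 3),
                    unit.type = "logistic",
                    lambda = 1e-5, beta = 1e-5, rho = 0.01, epsilon = 0.01,
                    optim.method = "BFGS",
                    rescale.flag = TRUE, rescaling.offset = 0.001)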

Value

An object of class SAENET containing the following elements for each layer of the stacked autoencoder:

ae.out

An object of class autoencoder containing the autoencoder created in that layer of the stacked autoencoder.

X.output

In layers subsequent to the first, a matrix containing the activations of the hidden neurons.
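
Continuing the hypothetical fit object from the sketch under Arguments (trained on iris with n.nodes = c(5, 3)), the per-layer elements can be inspected roughly as follows:

## 'fit' as produced in the sketch under Arguments.
fit[[1]]$ae.out          # 'autoencoder' object fitted for the first layer
fit[[2]]$ae.out          # 'autoencoder' object fitted for the second layer
fit[[2]]$X.output        # activations of the hidden neurons in the second layer
dim(fit[[2]]$X.output)   # one row per training example, one column per hidden unit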

Examples

library(autoencoder)
data(iris)
#### Train a stacked sparse autoencoder with a (5,3) architecture and
#### a relatively minor sparsity penalty. Try experimenting with the
#### lambda and beta parameters if you haven't worked with sparse
#### autoencoders before - it's worth inspecting the final layer
#### to ensure that output activations haven't simply converged to the value of
#### rho that you gave (which is the desired activation level on average).
#### If the lambda/beta parameters are set high, this is likely to happen.


output <- SAENET.train(as.matrix(iris[,1:4]), n.nodes = c(5,3),
                       lambda = 1e-5, beta = 1e-5, rho = 0.01, epsilon = 0.01)

Example output

autoencoding...
Optimizer counts:
function gradient 
      18       16 
Optimizer: successful convergence.
Optimizer: convergence = 0, message = 
J.init = 25.36306, J.final = 19.94066, mean(rho.hat.final) = 0.999901
autoencoding...
Optimizer counts:
function gradient 
     175      172 
Optimizer: successful convergence.
Optimizer: convergence = 0, message = 
J.init = 0.6248543, J.final = 1.872471e-07, mean(rho.hat.final) = 0.01004466
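
As suggested in the comments of the example, the final-layer activations can be inspected directly to check that they have not simply collapsed to the target value of rho. A brief sketch, assuming X.output holds the hidden-unit activations as described under Value:

## Mean activation of each hidden unit in the final layer: expected to sit
## near rho (0.01 here) because of the sparsity penalty.
colMeans(output[[2]]$X.output)

## Spread of the activations across examples: values that are essentially
## zero suggest the units have flattened to rho and carry little information.
apply(output[[2]]$X.output, 2, sd)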
