sae.train: Training a Stacked Autoencoder

View source: R/sae_train.R

Description

Training a Stacked Autoencoder

Usage

sae.train(x, hidden = c(10), activationfun = "sigm",
  learningrate = 0.8, momentum = 0.5, learningrate_scale = 1,
  output = "sigm", numepochs = 3, batchsize = 100,
  hidden_dropout = 0, visible_dropout = 0.2)

Arguments

x

matrix of x values for examples

hidden

vector giving the number of units in each hidden layer. Default is c(10).

activationfun

activation function of the hidden units. Can be "sigm", "linear" or "tanh". Default is "sigm" (the logistic function).

learningrate

learning rate for gradient descent. Default is 0.8.

momentum

momentum for gradient descent. Default is 0.5.

learningrate_scale

the learning rate is multiplied by this scale after every iteration. Default is 1.

output

activation function of the output unit. Can be "sigm", "linear" or "softmax". Default is "sigm".

numepochs

number of training epochs (full passes over the samples). Default is 3.

batchsize

size of mini-batch. Default is 100.

hidden_dropout

dropout fraction for the hidden layers. Default is 0.

visible_dropout

dropout fraction for the input layer. Default is 0.2, matching the Usage above.
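Examples

A minimal usage sketch based on the signature above. The random training data, layer sizes, and seed are illustrative assumptions, not taken from the package:

```r
library(deepnet)  # assumed package providing sae.train; adjust if DLC re-exports it

## Illustrative data: 1000 examples with 10 features each, scaled to [0, 1]
set.seed(1)
x <- matrix(runif(1000 * 10), nrow = 1000, ncol = 10)

## Train a stacked autoencoder with two hidden layers (5 and 3 units),
## otherwise keeping the defaults documented above
sae <- sae.train(x,
                 hidden          = c(5, 3),
                 activationfun   = "sigm",
                 learningrate    = 0.8,
                 momentum        = 0.5,
                 numepochs       = 3,
                 batchsize       = 100,
                 hidden_dropout  = 0,
                 visible_dropout = 0.2)
```

The returned object holds the greedily pre-trained autoencoder stack; in deepnet-style workflows it is typically passed on to a fine-tuning step for supervised training.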

Author(s)

Xiao Rong


DimitriF/DLC documentation built on Oct. 14, 2020, 4:33 p.m.