predict.autoencoder: Predict outputs of a sparse autoencoder

Description

Predict outputs for a given set of inputs, and estimate the mean squared error between the inputs and the outputs of a sparse autoencoder network trained with the autoencode function. Optionally, return the outputs of units in the hidden layer of the autoencoder instead of the outputs of units in the output layer.

Usage

## S3 method for class 'autoencoder'
predict(object, X.input=NULL, hidden.output = c(F, T), ...)

Arguments

object

an object of class autoencoder produced by the autoencode function.

X.input

a matrix of inputs whose columns correspond to the columns of the training matrix used to train the autoencoder object, and whose rows correspond to individual input examples; any number of rows is allowed.

hidden.output

a logical switch selecting whether to return the outputs of units in the output layer (for hidden.output=FALSE) or of units in the hidden layer (for hidden.output=TRUE). The latter is useful for building stacked autoencoders; see the sketch at the end of the Examples.

...

not used.

Details

All the information about the autoencoder (weights and biases, unit type, rescaling data) is contained in the object of class autoencoder produced by the autoencode function. See autoencode for details about objects of the autoencoder class.
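For instance, the components stored in a trained object can be inspected with base R (a minimal sketch, assuming autoencoder.object was produced by autoencode as in the Examples below):

names(autoencoder.object)               ## names of the stored components (weights, biases, etc.)
str(autoencoder.object, max.level=1)    ## their types and dimensions, one level deep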

Value

A list with elements:

X.output

the output matrix, with rows corresponding to the examples (rows) in X.input. Depending on the value of hidden.output, these are the outputs of units in the hidden layer (for hidden.output=TRUE) or in the output layer (for hidden.output=FALSE).

hidden.output

(same as the hidden.output argument) a logical flag indicating whether the outputs were generated by units in the hidden layer (for hidden.output=TRUE) or in the output layer (for hidden.output=FALSE) of the autoencoder.

mean.error

the average, over rows (examples) of X.input, of the sum of squared differences (X.output - X.input)^2.
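In R terms, this is equivalent to the following one-liner (a sketch, assuming X.output and X.input are matrices of the same dimensions, i.e., hidden.output=FALSE):

mean(rowSums((X.output - X.input)^2))  ## per-example squared reconstruction error, averaged over examples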

Author(s)

Eugene Dubossarsky (project leader, chief designer), Yuriy Tyshetskiy (design, implementation, testing)

Examples

## Train the autoencoder on unlabeled set of 5000 image patches of 
## size Nx.patch by Ny.patch, randomly cropped from 10 nature photos:

## Load a training matrix with rows corresponding to training examples, 
## and columns corresponding to input channels (e.g., pixels in images):
data('training_matrix_N=5e3_Ninput=100')  ## the matrix contains 5e3 image 
                                          ## patches of 10 by 10 pixels

## Set up the autoencoder architecture:
nl=3                          ## number of layers (default is 3: input, hidden, output)
unit.type = "logistic"        ## specify the network unit type, i.e., the unit's 
                              ## activation function ("logistic" or "tanh")
Nx.patch=10                   ## width of training image patches, in pixels
Ny.patch=10                   ## height of training image patches, in pixels
N.input = Nx.patch*Ny.patch   ## number of units (neurons) in the input layer (one unit per pixel)
N.hidden = 10*10              ## number of units in the hidden layer
lambda = 0.0002               ## weight decay parameter     
beta = 6                      ## weight of sparsity penalty term 
rho = 0.01                    ## desired sparsity parameter
epsilon <- 0.001              ## a small parameter for initialization of weights 
                              ## as small gaussian random numbers sampled from N(0,epsilon^2)
max.iterations = 2000         ## number of iterations in optimizer

## Train the autoencoder on training.matrix using BFGS optimization method 
## (see help('optim') for details):
## Not run: 
autoencoder.object <- autoencode(X.train=training.matrix,nl=nl,N.hidden=N.hidden,
          unit.type=unit.type,lambda=lambda,beta=beta,rho=rho,epsilon=epsilon,
          optim.method="BFGS",max.iterations=max.iterations,
          rescale.flag=TRUE,rescaling.offset=0.001)
          
## End(Not run)
## N.B.: Training this autoencoder takes a long time, so in this example we do not run the above
## autoencode function, but instead load the corresponding pre-trained autoencoder.object
## shipped with the package (dataset name assumed here; adjust if your installed version differs):
data('autoencoder_Ninput=100_Nhidden=100_rho=1e-2')  ## loads the pre-trained autoencoder.object

## Report the mean squared error for the training set:
cat("autoencode(): mean squared error for training set: ",
round(autoencoder.object$mean.error.training.set,3),"\n")

## Visualize hidden units' learned features:
visualize.hidden.units(autoencoder.object,Nx.patch,Ny.patch)

## Compare the output and input images (the autoencoder learns to approximate 
## inputs in its outputs using features learned by the hidden layer):

## Predict the output matrix corresponding to the training matrix 
## (rows are examples, columns are input channels, i.e., pixels)
X.output <- predict(autoencoder.object, X.input=training.matrix, hidden.output=FALSE)$X.output 

## Compare outputs and inputs for 3 image patches (patches 7,26,16 from 
## the training set) - outputs should be similar to inputs:
op <- par(no.readonly = TRUE)   ## save the whole list of settable par's.
par(mfrow=c(3,2),mar=c(2,2,2,2))
for (n in c(7,26,16)){
## input image:
  image(matrix(training.matrix[n,],nrow=Ny.patch,ncol=Nx.patch),axes=FALSE,main="Input image",
  col=gray((0:32)/32))
## output image:
  image(matrix(X.output[n,],nrow=Ny.patch,ncol=Nx.patch),axes=FALSE,main="Output image",
  col=gray((0:32)/32))
}
par(op)  ## restore plotting par's
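
## Because hidden.output=TRUE returns hidden-layer activations, those activations
## can in turn serve as training inputs for a second autoencoder (stacking).
## A minimal sketch follows; the second-level N.hidden value is an illustrative
## choice, not a value from this help page:

## Extract hidden-layer activations for all training examples:
hidden.features <- predict(autoencoder.object, X.input=training.matrix,
                           hidden.output=TRUE)$X.output

## Not run: 
## Train a second sparse autoencoder on the hidden-layer features:
autoencoder.layer2 <- autoencode(X.train=hidden.features,nl=nl,N.hidden=50,
          unit.type=unit.type,lambda=lambda,beta=beta,rho=rho,epsilon=epsilon,
          optim.method="BFGS",max.iterations=max.iterations,
          rescale.flag=TRUE,rescaling.offset=0.001)

## End(Not run)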
