mlp: Multilayer perceptron


Description

Multilayer perceptron

Usage

mlp(output = 'sigm', hidden_sizes = 10, activation = 'sigm',
  learn_rate = 0.9, learn_rate_decay = 1, momentum = 0.5,
  num_epoch = 5, batch_size = 100,
  hidden_dropout = 0, visible_dropout = 0)

mlp_classifier(hidden_sizes = 10, activation = 'sigm',
  learn_rate = 0.9, learn_rate_decay = 1, momentum = 0.5,
  num_epoch = 5, batch_size = 100,
  hidden_dropout = 0, visible_dropout = 0)

mlp_regressor(hidden_sizes = 10, activation = 'sigm',
  learn_rate = 0.9, learn_rate_decay = 1, momentum = 0.5,
  num_epoch = 5, batch_size = 100,
  hidden_dropout = 0, visible_dropout = 0)

Arguments

output

type of output unit: 'sigm', 'linear' or 'softmax'

hidden_sizes

integer vector of hidden unit sizes

activation

activation function: 'sigm', 'tanh' or 'linear'

learn_rate

learning rate

learn_rate_decay

factor by which the learning rate is multiplied after each iteration

momentum

momentum for gradient descent

num_epoch

number of training iterations (epochs)

batch_size

mini-batch size

hidden_dropout

dropout fraction for the hidden layers

visible_dropout

dropout fraction for the input layer; see the sketch after this list
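
For instance, a constructor call combining the rate-decay and dropout arguments might look like the following (the values here are purely illustrative, not recommendations):

m <- mlp_classifier(hidden_sizes = c(20, 10), activation = 'tanh',
  learn_rate = 0.5, learn_rate_decay = 0.99,
  hidden_dropout = 0.3, visible_dropout = 0.1,
  num_epoch = 100, batch_size = 50)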

Value

MLP class object

Class Methods

fit(x, y)

train the neural network from scratch

predict(x, ...)

return predicted values

incr_fit(x, y)

train the neural network incrementally, continuing from the current weights

predict_proba(x, ...)

return predicted class probabilities (see the sketch after this list)

mse(x, y)

return the mean-squared error

cross_entropy(x, y)

return the cross-entropy loss, where applicable

accuracy(x, y)

return the classification accuracy, where applicable
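
A minimal sketch of predict_proba on a fitted classifier, assuming it returns one probability column per class (the data setup mirrors the Examples below):

data(iris)
m <- mlp_classifier(num_epoch = 300)
m$fit(iris[, -5], iris[, 5])
p <- m$predict_proba(iris[, -5])  # assumed: matrix of class probabilities
head(p)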

Details

Uses deepnet::nn.train as the backend. The fit method trains the network from scratch; use the incr_fit method for incremental learning.
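
A minimal sketch of incremental learning with incr_fit (the toy data here is hypothetical):

set.seed(1)
x1 <- matrix(rnorm(200), ncol = 2)
y1 <- as.integer(x1[, 1] + x1[, 2] > 0)
x2 <- matrix(rnorm(200), ncol = 2)
y2 <- as.integer(x2[, 1] + x2[, 2] > 0)
m <- mlp(hidden_sizes = 5, num_epoch = 10)
m$fit(x1, y1)       # initial training from scratch
m$incr_fit(x2, y2)  # continue training from the current weights
m$mse(x2, y2)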

Examples

set.seed(123)
# example from deepnet::nn.train
Var1 <- c(rnorm(50, 1, 0.5), rnorm(50, -0.6, 0.2))
Var2 <- c(rnorm(50, -0.8, 0.2), rnorm(50, 2, 1))
x <- matrix(c(Var1, Var2), nrow = 100, ncol = 2)
y <- c(rep(1, 50), rep(0, 50))
m <- mlp(hidden_sizes=5, learn_rate=0.8, num_epoch=3)
m$fit(x, y)
m$mse(x, y)

# classification example
data(iris)
x <- iris[,-5]
y <- iris[,5]
tr <- c(sample(1:50, 25), sample(51:100, 25), sample(101:150, 25))
m <- mlp_classifier(num_epoch=300)
m$fit(x[tr,], y[tr])
table(y[-tr], m$predict(x[-tr,]))
m$accuracy(x[-tr,], y[-tr])
m$cross_entropy(x[-tr,], y[-tr])

## Not run: 
# regression example (takes a few seconds)
n <- 1000
x <- runif(2*n)
dim(x) <- c(n, 2)
y <- pmin(x[,1], x[,2])
m <- mlp_regressor(hidden_sizes=c(10), num_epoch=500, batch_size=25)
m$fit(x, y)
newx <- expand.grid(x1=seq(0, 1, length=50), x2=seq(0, 1, length=50))
pred <- m$predict(newx)
true <- pmin(newx[,1], newx[,2])
cor(true, pred)
dim(pred) <- c(50, 50)
dim(true) <- c(50, 50)
par(mfrow=c(1, 2))
contour(true)
contour(pred)
m$mse(newx, as.numeric(true))

## End(Not run)
