Description
Creates a feedforward artificial neural network according to the structure established by the AMORE package standard.
Usage

newff(n.neurons, learning.rate.global, momentum.global, error.criterium, Stao,
      hidden.layer, output.layer, method)
Arguments

n.neurons
Numeric vector containing the number of neurons in each layer. The first element of the vector is the number of input neurons, the last is the number of output neurons, and the remaining elements give the number of neurons in each hidden layer.
learning.rate.global
Learning rate at which every neuron is trained. |
momentum.global
Momentum for every neuron. Needed by several training methods. |
error.criterium
Criterium used to measure the proximity of the neural network prediction to its target. The choices referenced on this page include "LMS" (least mean squares, used in the Examples below) and "TAO" (the TAO-robust criterium; see the Stao argument and References).
Stao
Stao parameter for the TAO error criterium. Unused by the other criteria.
hidden.layer
Activation function of the hidden layer neurons. Available functions include "tansig" and "purelin", both of which appear in the Examples below.
output.layer
Activation function of the output layer neurons, chosen from the same list of functions as hidden.layer.
method
Preferred training method, e.g. "ADAPTgdwm" (adaptive gradient descent with momentum), as used in the Examples below.
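As a sketch of the activation-function names used above, assuming they follow the conventional definitions (tansig is the hyperbolic tangent, purelin is the identity):

```r
# Hedged sketch, not AMORE's internal code: the activation functions
# named above under their conventional definitions.
tansig  <- function(x) tanh(x)   # squashes input into (-1, 1); common in hidden layers
purelin <- function(x) x         # linear pass-through; common in output layers

tansig(0)     # 0
purelin(2.5)  # 2.5
```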
Value

newff returns a multilayer feedforward neural network object.
Author(s)

Manuel Castejón Limas. manuel.castejon@gmail.com
Joaquin Ordieres Meré.
Ana González Marcos.
Alpha V. Pernía Espinoza.
Eliseo P. Vergara Gonzalez.
Francisco Javier Martinez de Pisón.
Fernando Alba Elías.
References

Pernía Espinoza, A.V., Ordieres Meré, J.B., Martínez de Pisón, F.J., González Marcos, A. TAO-robust backpropagation learning algorithm. Neural Networks. Vol. 18, Issue 2, pp. 191–204, 2005.
Simon Haykin. Neural Networks – a Comprehensive Foundation. Prentice Hall, New Jersey, 2nd edition, 1999. ISBN 0-13-273350-1.
See Also

init.MLPneuron, random.init.MLPnet, random.init.MLPneuron, select.activation.function
Examples

# Example 1
library(AMORE)
# P is the input vector
P <- matrix(sample(seq(-1,1,length=1000), 1000, replace=FALSE), ncol=1)
# The network will try to approximate the target P^2
target <- P^2
# We create a feedforward network, with two hidden layers.
# The first hidden layer has three neurons and the second has two neurons.
# The hidden layers have got Tansig activation functions and the output layer is Purelin.
net <- newff(n.neurons=c(1,3,2,1), learning.rate.global=1e-2, momentum.global=0.5,
error.criterium="LMS", Stao=NA, hidden.layer="tansig",
output.layer="purelin", method="ADAPTgdwm")
result <- train(net, P, target, error.criterium="LMS", report=TRUE, show.step=100, n.shows=5)
y <- sim(result$net, P)
plot(P,y, col="blue", pch="+")
points(P,target, col="red", pch="x")
Sample output:
index.show: 1 LMS 0.0898818369001767
index.show: 2 LMS 0.0898886526355751
index.show: 3 LMS 0.0899035077972986
index.show: 4 LMS 0.0899197712970041
index.show: 5 LMS 0.0899371520695861
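After running the example, a value comparable to the reported LMS figures can be recomputed by hand. This is a sketch, assuming LMS here denotes the mean of squared residuals:

```r
# Hedged sketch: recompute the LMS error of the trained network,
# assuming LMS is the mean of squared residuals.
# y and target come from the example above (y <- sim(result$net, P)).
mse <- mean((y - target)^2)
print(mse)  # should be close to the last LMS value reported by train()
```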