elman
Description

Elman networks are partially recurrent networks, similar to Jordan
networks (see function jordan). For details, see the explanations
there.

Usage
elman(x, ...)
## Default S3 method:
elman(
x,
y,
size = c(5),
maxit = 100,
initFunc = "JE_Weights",
initFuncParams = c(1, -1, 0.3, 1, 0.5),
learnFunc = "JE_BP",
learnFuncParams = c(0.2),
updateFunc = "JE_Order",
updateFuncParams = c(0),
shufflePatterns = FALSE,
linOut = TRUE,
outContext = FALSE,
inputsTest = NULL,
targetsTest = NULL,
...
)
Arguments

x: a matrix with training inputs for the network
...: additional function parameters (currently not used)
y: the corresponding target values
size: number of units in the hidden layer(s)
maxit: maximum number of iterations to learn
initFunc: the initialization function to use
initFuncParams: the parameters for the initialization function
learnFunc: the learning function to use
learnFuncParams: the parameters for the learning function
updateFunc: the update function to use
updateFuncParams: the parameters for the update function
shufflePatterns: should the patterns be shuffled?
linOut: sets the activation function of the output units to linear or logistic
outContext: if TRUE, the context units are also output units (untested)
inputsTest: a matrix with inputs to test the network (see the sketch after this list)
targetsTest: the corresponding targets for the test input
Details

Learning in Elman networks: same as in Jordan networks (see jordan).
Network architecture: the difference between Elman and Jordan networks is that in an Elman network the context units get their input not from the output units but from the hidden units, and there is no direct feedback within the context units. In an Elman net, the number of context units and hidden units has to be the same. The main advantage of Elman nets is that the number of context units is not dictated by the output dimension (as in Jordan nets) but by the number of hidden units, which is more flexible: hidden units are easy to add or remove, whereas output units are fixed by the task.
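To see the hidden/context pairing in practice, the following sketch (an illustrative addition; it assumes extractNetInfo from RSNNS, where context units usually appear as special hidden units) trains a tiny net and counts units by type:

library(RSNNS)
set.seed(1)
inputs <- matrix(runif(200), ncol = 2)   # 100 patterns, 2 input units
targets <- matrix(runif(100), ncol = 1)  # 1 output unit
model <- elman(inputs, targets, size = 4, maxit = 10)
# with size = 4 the built network should contain 4 hidden units and
# 4 matching context units
table(extractNetInfo(model)$unitDefinitions$type)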
A detailed description of the theory and the parameters is available, as always, from the SNNS documentation and the other referenced literature.
Value

an rsnns object.
References

Elman, J. L. (1990), 'Finding structure in time', Cognitive Science 14(2), 179–211.
Zell, A. et al. (1998), 'SNNS Stuttgart Neural Network Simulator User Manual, Version 4.2', IPVR, University of Stuttgart and WSI, University of Tübingen. https://www.ra.cs.uni-tuebingen.de/SNNS/welcome.html
Zell, A. (1994), Simulation Neuronaler Netze, Addison-Wesley. (in German)
See Also

jordan
Examples

## Not run: demo(iris)
## Not run: demo(laser)
## Not run: demo(eight_elman)
## Not run: demo(eight_elmanSnnsR)
data(snnsData)
# extract input and target columns from the eight_016 pattern set
inputs <- snnsData$eight_016.pat[,inputColumns(snnsData$eight_016.pat)]
outputs <- snnsData$eight_016.pat[,outputColumns(snnsData$eight_016.pat)]

par(mfrow=c(1,2))

# train an Elman and a Jordan network of the same size on the same data
modelElman <- elman(inputs, outputs, size=8, learnFuncParams=c(0.1), maxit=1000)
modelElman
modelJordan <- jordan(inputs, outputs, size=8, learnFuncParams=c(0.1), maxit=1000)
modelJordan

# compare the iterative training error of the two networks
plotIterativeError(modelElman)
plotIterativeError(modelJordan)

summary(modelElman)
summary(modelJordan)
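As a follow-up (an illustrative addition, not in the original demo code), predictions can be obtained from either fitted object with predict, and compared against the targets with plotRegressionError:

# run the training inputs back through the fitted Elman network
predictions <- predict(modelElman, inputs)
# targets first, then fitted values
plotRegressionError(outputs, predictions)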