Implementation of the ELM (Extreme Learning Machine) algorithm for neural networks


The ELM algorithm is an alternative training method for SLFNs (Single Hidden Layer Feedforward Networks) that requires neither iterative tuning nor the setting of parameters such as the learning rate or momentum, which are common issues with traditional gradient-based learning algorithms such as backpropagation.
Training an SLFN with ELM is a three-step learning model:
Given a training set P = {(x_i, t_i) | x_i ∈ R^n, t_i ∈ R^m, i = 1, ..., N}, a hidden node output function G(a, b, x), and the number of hidden nodes L:
1) Randomly assign the hidden node parameters (a_i, b_i), i = 1, ..., L. That is, the arc weights between the input layer and the hidden layer, as well as the hidden layer biases, are randomly generated.
2) Calculate the hidden layer output matrix H using one of the available activation functions.
3) Calculate the output weights B: B = ginv(H) %*% T (matrix multiplication), where T is the target output of the training set.
ginv(H) is the Moore-Penrose generalized inverse of the hidden layer output matrix H, computed by the ginv function of the MASS package.
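
For illustration, the three steps can be sketched from scratch in R as follows. This is only an illustrative sketch with made-up data, a sigmoid activation, and a single output; it is not the package's internal code.

library(MASS)                        # provides ginv()

N <- 100; d <- 3; L <- 10            # training samples, input dimension, hidden nodes
X <- matrix(runif(N * d), N, d)      # training inputs, one row per sample
T <- matrix(rowSums(X^2), N, 1)      # training targets (the matrix T of the formula above;
                                     # note this masks R's built-in shorthand T for TRUE)

# Step 1: randomly assign input weights a and hidden layer biases b
a <- matrix(runif(d * L, -1, 1), d, L)
b <- runif(L, -1, 1)

# Step 2: hidden layer output matrix H (N x L), here with a sigmoid activation
H <- 1 / (1 + exp(-(X %*% a + matrix(b, N, L, byrow = TRUE))))

# Step 3: output weights B via the Moore-Penrose generalized inverse
B <- ginv(H) %*% T
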
Once the SLFN has been trained, the output for a generic test set is simply Y = H %*% B (matrix multiplication), where H is now the hidden layer output matrix computed on the test inputs with the same randomly assigned hidden node parameters (the sketch continues after the following list).
Salient features:
- The learning speed of ELM is extremely fast.
- Unlike traditional gradient-based learning algorithms, which work only for differentiable activation functions, ELM works for all bounded nonconstant piecewise continuous activation functions.
- Unlike traditional gradient-based learning algorithms, which face issues such as local minima, an improper learning rate, and overfitting, ELM tends to reach the solution directly without such complications.
- The ELM learning algorithm is much simpler than those of other popular learning techniques such as gradient-trained neural networks and support vector machines.
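
Continuing the sketch above, prediction for new inputs only requires recomputing H on the test data with the same random parameters and multiplying by B (again purely illustrative, not the package code):

# New test inputs with the same input dimension; reuse d, L, a, b and B from the training sketch
X.new <- matrix(runif(20 * d), 20, d)
H.new <- 1 / (1 + exp(-(X.new %*% a + matrix(b, 20, L, byrow = TRUE))))
Y <- H.new %*% B                     # predicted outputs, one row per test sample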


Details

Package: elmNN
Type: Package
Version: 1.0
Date: 2012-07-17
License: GPL (>= 2)

To fit a neural network, use the elmtrain function (the default method is elmtrain.formula). To predict values, use the predict function. The other functions are used internally by the training and prediction functions.


Author(s)

Alberto Gosso

Maintainer: Alberto Gosso <>

References

G.-B. Huang, H. Zhou, X. Ding, and R. Zhang (2011). Extreme Learning Machine for Regression and Multiclass Classification. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513-529.
G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew (2006). Extreme learning machine: Theory and applications. Neurocomputing, vol. 70, pp. 489-501.

See Also

elmtrain.formula to train a neural network; predict.elmNN to predict values from a trained neural network.


Examples

library(elmNN)

# Learn the square-root function from 50 random points on [0, 100]
Var1 <- runif(50, 0, 100)
sqrt.data <- data.frame(Var1, Sqrt=sqrt(Var1))
model <- elmtrain(Sqrt~Var1, data=sqrt.data, nhid=10, actfun="sig")
new <- data.frame(Sqrt=0, Var1=runif(50, 0, 100))
p <- predict(model, newdata=new)

# Learn the quadratic function from 50 random points on [0, 10]
Var2 <- runif(50, 0, 10)
quad.data <- data.frame(Var2, Quad=(Var2)^2)
model <- elmtrain(Quad~Var2, data=quad.data, nhid=10, actfun="sig")
new <- data.frame(Quad=0, Var2=runif(50, 0, 10))
p <- predict(model, newdata=new)
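
As a quick, purely illustrative check (not part of the package's own example), the predictions of the quadratic example can be compared with the true target values:

# Mean absolute error against the true quadratic targets
mean(abs(as.vector(p) - new$Var2^2))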