Description

An S4 class to represent an Extreme Learning Machine (ELM) model.

Usage

## S4 method for signature 'elm'
add_neurons(object, act_fun, nn, w_in = NULL, b = NULL)
## S4 method for signature 'elm'
initialize(.Object = object, inputs = 0, outputs = 0)
## S4 method for signature 'elm'
show(object)
## S4 method for signature 'elm'
get_error(object, n_sel, h, y, h_val = NULL, y_val = NULL,
cv_rows = NULL)
## S4 method for signature 'elm'
mse(object, y, yp, x)
## S4 method for signature 'elm'
class_postprocess(object, yp, class_output, ml_threshold)
## S4 method for signature 'elm'
rank_neurons(object, nn_max, h = NULL, y = NULL)
## S4 method for signature 'elm'
train(object, x, y, x_val = NULL, y_val = NULL,
type = "reg", tune = "none", ranking = "random", validation = "none",
folds = 10, class_weights = NULL, ...)
## S4 method for signature 'elm'
project(object, x, rbf_dist = "euclidean")
## S4 method for signature 'elm'
solve_system(object, h, y, solve = TRUE)
## S4 method for signature 'elm'
train_pruning(object, h, y, h_val = NULL, y_val = NULL,
cv_rows = NULL)
## S4 method for signature 'elm'
prune(object, n_sel)
Methods (by generic)

add_neurons
: add neurons of the same type of activation function to the hidden layer
initialize
: initialize an object of class elm
show
: display an object of class elm
get_error
: implement a validation procedure
mse
: compute the mean squared error (MSE)
class_postprocess
: post-process the predicted outputs for classification problems
rank_neurons
: rank the neurons of an ELM
train
: train the elm
project
: project from the input space to the neuron space, computing the hidden-layer output matrix H
solve_system
: solve linear system H x Wout = Y
train_pruning
: optimization procedure for obtaining the optimal number of neurons when pruning
prune
: prune the hidden layer of an elm
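The core of ELM training is the pair of steps named project and solve_system above: the inputs are mapped through random hidden neurons to get H, and the output weights are obtained by solving the linear system H %*% w_out = y. The following is a minimal NumPy sketch of that math, not this package's code; the sigmoid activation, sizes, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 20 samples, 3 input features, 1 output.
X = rng.normal(size=(20, 3))
Y = X @ np.array([[1.0], [-2.0], [0.5]]) + 0.01 * rng.normal(size=(20, 1))

# "project": map the input space to the neuron space with random,
# never-trained input weights and biases, then apply the activation
# function to obtain the hidden-layer output matrix H.
n_neurons = 10
W_in = rng.normal(size=(3, n_neurons))
b = rng.normal(size=(1, n_neurons))
H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))  # sigmoid activation

# "solve_system": solve the linear system H @ W_out = Y in the
# least-squares sense; only the output weights are fitted.
W_out, *_ = np.linalg.lstsq(H, Y, rcond=None)

Y_pred = H @ W_out
mse = float(np.mean((Y - Y_pred) ** 2))
```

Because only the linear output layer is fitted, training reduces to one least-squares solve, which is what makes ELMs fast compared with backpropagation-trained networks.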
Slots

inputs
The number of input features.
outputs
The number of outputs.
h_neurons
An object of class hiddenlayer.
w_out
The output weight vector containing the computed weights between the hidden and the output layer.
type
The type of model implemented:
"reg": regression problem.
"class_mc": multi-class: the sample belongs to 1 class out of n.
"class_ml": multi-label: the sample can belong to m classes out of n (m<n).
"class_w": weighted classification.
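The difference between "class_mc" and "class_ml" shows up in how the raw ELM outputs are turned into labels: one class per sample via the maximum output, versus several classes per sample via a threshold. A small illustrative sketch (not this package's code; the scores and threshold are made up):

```python
import numpy as np

# Hypothetical raw ELM outputs for 4 samples and 3 classes.
yp = np.array([[ 0.9, 0.2, -0.1],
               [ 0.1, 0.8,  0.7],
               [-0.3, 0.1,  0.6],
               [ 0.5, 0.4,  0.3]])

# "class_mc": each sample belongs to exactly one class -> argmax.
mc_labels = yp.argmax(axis=1)

# "class_ml": each sample may belong to several classes -> threshold
# every output independently (cf. the ml_threshold argument of
# class_postprocess).
ml_threshold = 0.5
ml_labels = (yp >= ml_threshold).astype(int)
```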
tune
Parameter defining the model structure selection method used to tune the model hyper-parameters:
"none": no model selection
"pruning": pruning of the neurons of the hidden layer: P-ELM if ridge = 0 and ranking = "random"; OP-ELM if ridge = 0 and ranking = "lars"; TROP-ELM if ridge != 0 and ranking = "lars".
ranking
A character string selecting the type of ranking used when the pruning option is selected.
"random" - random ranking
"lars" - ranking based on lars - L1 penalty
results
The error used to evaluate model performance, stored as c(mse_train, mse_val).
ridge
The regularization parameter used to include the L2 penalty in the solution.
validation
The validation procedure used for developing the model:
"none" - no validation process
"v" - validation. Xv and Yv are required
"cv" - cross validation. The number of folds is required
"loo" - leave one out based on the PRESS statistic
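The "loo" option is cheap for an ELM because, for a linear least-squares fit, the PRESS statistic gives every leave-one-out residual from a single fit: each training residual is rescaled by 1 minus the corresponding diagonal entry of the hat matrix H (H'H)^-1 H'. A minimal NumPy sketch of that computation (not this package's code; data and sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden-layer output matrix H (30 samples x 5 neurons) and targets Y.
H = rng.normal(size=(30, 5))
Y = rng.normal(size=(30, 1))

# Ordinary least-squares solution and its training residuals.
W_out, *_ = np.linalg.lstsq(H, Y, rcond=None)
residuals = Y - H @ W_out

# Diagonal of the hat matrix H (H'H)^-1 H' = H @ pinv(H).
hat_diag = np.einsum("ij,ji->i", H, np.linalg.pinv(H))

# PRESS: each leave-one-out residual is the training residual
# rescaled by (1 - hat_ii); no model is ever refitted.
loo_residuals = residuals[:, 0] / (1.0 - hat_diag)
press_mse = float(np.mean(loo_residuals ** 2))
```

Since each hat diagonal lies in (0, 1), the PRESS error is never smaller than the training error, which makes it a conservative validation criterion at essentially no extra cost.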
folds
The number of folds for the cross-validation procedure.
class_weights
A numeric vector of length equal to the number of classes, with the weights for the weighted classification type.
batch
The size of the batch in an adaptive ELM.
time_exec
The time of calculation for training the model.
bigdata
A logical parameter selecting the kind of acceleration used when solving big-data problems.