This is the internal function that implements the model proposed by L. X. Wang and J. M.
Mendel. It is used to solve regression tasks. Users do not need to call it directly;
instead, use frbs.learn
and predict.
data.train 
a matrix (m \times n) of normalized data for the training process, where m is the number of instances and n is the number of variables; the last column is the output variable. Note that the data must be normalized to values between 0 and 1. 
num.labels 
a matrix (1 \times n), whose elements represent the number of labels (linguistic terms); n is the number of variables. 
type.mf 
the type of the membership function. 
type.tnorm 
a value which represents the type of t-norm. 
type.implication.func 
a value representing the type of implication function. For a rule a \to b, the implication function determines how the degree of the antecedent a propagates to the consequent b.

classification 
a boolean indicating whether the task is a classification problem or not. 
range.data 
a matrix representing the interval of the data. 
The fuzzy rule-based system for learning from L. X. Wang and J. M. Mendel's paper is implemented in this function. The learning process consists of the following four stages:
Step 1:
Divide the input and output spaces of the given numerical data equally into
fuzzy regions to form the database. Here, fuzzy regions refer to the intervals
assigned to each linguistic term, so the number of fuzzy regions equals the number of
linguistic terms. For example, suppose the linguistic term "hot" has the fuzzy region [1, 3].
We can construct a triangular membership function with the corner points
a = 1, b = 2, and c = 3, where b is the midpoint
at which the degree of the membership function equals one.
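Step 1 can be sketched in a few lines. This is a minimal Python illustration (the package itself is written in R); the names `triangular_mf` and `make_regions` are hypothetical helpers, not part of frbs:

```python
def triangular_mf(x, a, b, c):
    """Triangular membership function: degree 1 at b, 0 outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def make_regions(num_labels):
    """Divide the normalized domain [0, 1] equally into overlapping
    triangular fuzzy regions, one per linguistic term."""
    step = 1.0 / (num_labels - 1)
    return [(i * step - step, i * step, i * step + step)
            for i in range(num_labels)]

# The "hot" example: fuzzy region [1, 3] with corners a=1, b=2, c=3.
print(triangular_mf(2.0, 1, 2, 3))  # degree 1.0 at the midpoint b
```

With three labels, `make_regions(3)` gives the regions `(-0.5, 0.0, 0.5)`, `(0.0, 0.5, 1.0)`, and `(0.5, 1.0, 1.5)`, so adjacent terms overlap and every point of [0, 1] has full coverage.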
Step 2:
Generate fuzzy IF-THEN rules covering the training data,
using the database from Step 1. First, we calculate the degrees of the membership functions
for all values in the training data. For each instance in the training data,
we determine the linguistic term with the maximum degree in each variable.
Repeating this process for every instance in the training data yields a set of
fuzzy rules covering the training data.
Step 3:
Determine a degree for each rule.
The degree of each rule is determined by aggregating the degrees of the membership functions in
the antecedent and consequent parts. In this case, we use the product aggregation operator.
Step 4:
Obtain a final rule base after deleting redundant rules.
Considering the degrees of the rules, we delete redundant rules: when several rules share the same antecedent, only the rule with the highest degree is kept.
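Steps 2–4 can be sketched together as one rule-generation loop. This is an illustrative Python version, not the package's R implementation; all names here (`wm_learn` and its helpers) are hypothetical:

```python
def triangular_mf(x, a, b, c):
    """Triangular membership function: degree 1 at b, 0 outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def make_regions(num_labels):
    """Equal division of [0, 1] into overlapping triangular regions (Step 1)."""
    step = 1.0 / (num_labels - 1)
    return [(i * step - step, i * step, i * step + step)
            for i in range(num_labels)]

def wm_learn(data, num_labels):
    """Wang-Mendel rule generation on normalized data in [0, 1].

    data: list of instances; the last element of each is the output variable.
    Returns {antecedent label tuple: (consequent label, rule degree)}.
    """
    regions = make_regions(num_labels)
    rulebase = {}
    for instance in data:
        labels, degree = [], 1.0
        for x in instance:
            # Step 2: pick the linguistic term with the maximum degree
            mems = [triangular_mf(x, *r) for r in regions]
            best = max(range(num_labels), key=mems.__getitem__)
            labels.append(best)
            degree *= mems[best]          # Step 3: product aggregation
        antecedent, consequent = tuple(labels[:-1]), labels[-1]
        # Step 4: among rules with the same antecedent, keep the highest degree
        if antecedent not in rulebase or degree > rulebase[antecedent][1]:
            rulebase[antecedent] = (consequent, degree)
    return rulebase
```

For example, two training instances `[0.1, 0.9]` and `[0.9, 0.1]` with three labels produce the rules "low → high" and "high → low", each with degree 0.8 × 0.8 = 0.64.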
The outcome is a Mamdani model. In the prediction phase, there are four steps: fuzzification, checking the rules, inference, and defuzzification.
L. X. Wang and J. M. Mendel, "Generating fuzzy rules by learning from examples", IEEE Trans. Syst., Man, and Cybern., vol. 22, no. 6, pp. 1414-1427 (1992).
See also frbs.learn, predict, and frbs.eng.