This is the internal function that implements the simplified TSK fuzzy rule generation method using heuristics and the gradient descent method (FS.HGD). It is used to solve regression tasks. Users do not need to call it directly; they should use `frbs.learn` and `predict` instead.

```
FS.HGD(data.train, num.labels, max.iter = 100, step.size = 0.01,
  alpha.heuristic = 1, type.tnorm = "MIN", type.snorm = "MAX",
  type.implication.func = "ZADEH")
```

`data.train`
a matrix (m x n) of data for the training process, where m is the number of instances and n is the number of variables; the last column is the output variable.

`num.labels`
a matrix (1 x n) whose elements represent the number of labels (fuzzy terms) per variable.

`max.iter`
the maximal number of iterations.

`step.size`
the step size of the gradient descent method.

`alpha.heuristic`
a positive real number representing the heuristic parameter.

`type.tnorm`
the type of t-norm. For more detail, please have a look at `frbs.learn`.

`type.snorm`
the type of s-norm. For more detail, please have a look at `frbs.learn`.

`type.implication.func`
a value representing the type of implication function. For more detail, please have a look at `frbs.learn`.
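To make the operator choices concrete, here is a minimal Python sketch of how a t-norm (fuzzy AND) and an s-norm (fuzzy OR) combine membership degrees. `"MIN"` and `"MAX"` match the defaults above; the product and probabilistic-sum variants are included only for illustration and are not necessarily named this way in the package.

```python
def tnorm(a, b, kind="MIN"):
    """Combine two membership degrees conjunctively (fuzzy AND)."""
    if kind == "MIN":
        return min(a, b)
    if kind == "PRODUCT":  # illustrative alternative
        return a * b
    raise ValueError(f"unknown t-norm: {kind}")

def snorm(a, b, kind="MAX"):
    """Combine two membership degrees disjunctively (fuzzy OR)."""
    if kind == "MAX":
        return max(a, b)
    if kind == "SUM":  # probabilistic sum, illustrative alternative
        return a + b - a * b
    raise ValueError(f"unknown s-norm: {kind}")
```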

This method was proposed by K. Nozaki, H. Ishibuchi, and H. Tanaka. It uses fuzzy IF-THEN rules with non-fuzzy singletons (i.e., real numbers) in the consequent parts. Space-partition techniques are implemented to generate the antecedent part, while the initial consequent part of each rule is determined by the weighted mean value of the given training data. The gradient descent method then updates the value of the consequent part. Furthermore, the heuristic value given by the user affects the weight of each data point.
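The procedure described above can be sketched as follows. This is not the package's implementation, only a minimal Python illustration of the idea under simplifying assumptions: antecedents come from a uniform grid partition with triangular fuzzy sets, the heuristic weight is the rule firing strength raised to the power `alpha`, consequents are initialised by the weighted mean of the targets, and gradient descent then refines them. All function and variable names are assumptions.

```python
import numpy as np
from itertools import product

def triangular(x, a, b, c):
    """Triangular membership with peak at b on support [a, c]."""
    left = (x - a) / (b - a) if b > a else 1.0
    right = (c - x) / (c - b) if c > b else 1.0
    return max(0.0, min(left, right))

def fit_hgd(X, y, num_labels=3, alpha=1.0, step_size=0.01, max_iter=100):
    """Heuristics + gradient-descent sketch (requires num_labels >= 2)."""
    n_samples, n_vars = X.shape
    # uniform triangular partition of each variable's range
    centers = [np.linspace(X[:, v].min(), X[:, v].max(), num_labels)
               for v in range(n_vars)]
    widths = [c[1] - c[0] for c in centers]
    # one rule per combination of one label per variable (grid partition)
    rules = list(product(range(num_labels), repeat=n_vars))

    def firing(x):
        """Min t-norm of antecedent memberships, one value per rule."""
        mu = np.empty(len(rules))
        for j, labels in enumerate(rules):
            degs = [triangular(x[v], centers[v][l] - widths[v],
                               centers[v][l], centers[v][l] + widths[v])
                    for v, l in enumerate(labels)]
            mu[j] = min(degs)
        return mu

    M = np.array([firing(x) for x in X])   # (n_samples, n_rules)
    W = M ** alpha                         # heuristic weighting

    # initial consequents: heuristically weighted mean of the targets
    denom = np.maximum(W.sum(axis=0), 1e-12)
    b = (W * y[:, None]).sum(axis=0) / denom

    # gradient descent on the singleton consequents
    for _ in range(max_iter):
        for p in range(n_samples):
            w = W[p]
            s = w.sum()
            if s <= 0:
                continue
            y_hat = (w @ b) / s
            b += step_size * (y[p] - y_hat) * w / s
    return b, firing

def predict_hgd(b, firing, X, alpha=1.0):
    """Weighted average of rule consequents (zero-order TSK output)."""
    out = []
    for x in X:
        w = firing(x) ** alpha
        s = w.sum()
        out.append((w @ b) / s if s > 0 else b.mean())
    return np.array(out)
```

On a one-dimensional linear target, the triangular partition forms a partition of unity, so the exact consequents are recoverable and the descent step only has to correct the weighted-mean initialisation near the boundaries of the input range.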

H. Ishibuchi, K. Nozaki, H. Tanaka, Y. Hosaka, and M. Matsuda, "Empirical study on learning in fuzzy systems by rice taste analysis", Fuzzy Sets and Systems, vol. 64, no. 2, pp. 129-144 (1994).

`frbs.learn`, `predict`, and `HGD.update`
