Description
This function computes the penalized maximum likelihood estimate (MLE) for the regularized Mixture of Experts (MoE) model with Gaussian experts, corresponding to the penalty parameters Lambda and Gamma.
Usage

GaussRMoE(Xm, Ym, K, Lambda, Gamma, option, verbose)
Arguments

Xm: Matrix of explanatory variables. Each feature should be standardized to have mean 0 and variance 1. A column vector (1, 1, ..., 1) must be added for the intercept.

Ym: Vector of the response variable. In the Gaussian case, Ym should be standardized. In the multi-logistic model, Ym takes values in 1, ..., R, where R is the number of labels.

K: Number of experts (K > 1).

Lambda: Penalty value for the experts.

Gamma: Penalty value for the gating network.

option: Optional.

verbose: Optional. A logical value indicating whether values of the log-likelihood should be printed during EM iterations.
Value

GaussRMoE returns an object of class GRMoE.

See Also

GRMoE
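A minimal usage sketch, assuming the call signature implied by the arguments above. The simulated data and the chosen values of K, Lambda, and Gamma are illustrative assumptions, not taken from this page:

```r
# Illustrative only: the simulated data and the values K = 2,
# Lambda = 1, Gamma = 1 are assumptions for demonstration.
set.seed(1)
n <- 100
X  <- matrix(rnorm(n * 2), n, 2)
Xm <- cbind(1, scale(X))           # standardized features plus the intercept column
Ym <- as.vector(scale(rnorm(n)))   # standardized Gaussian response
fit <- GaussRMoE(Xm, Ym, K = 2, Lambda = 1, Gamma = 1)
```

Note that Xm must include the intercept column and have standardized features, and Ym must be standardized in the Gaussian case, as described under Arguments.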