Description
This function computes the penalized maximum likelihood estimate (MLE) for the Poisson regularized Mixture of Experts (RMoE) model with penalty parameters Lambda and Gamma.
Usage

PoissonRMoE(Xmat, Ymat, K, Lambda, Gamma, option = FALSE, verbose = TRUE)
Arguments

Xmat: Matrix of explanatory variables. Each feature should be standardized to have mean 0 and variance 1. A column vector (1, 1, ..., 1) must be added for the intercept variable.

Ymat: Vector of the response variable. For the Gaussian case, Y should be standardized; for the multi-logistic model, Y is numbered from 1 to R (R being the number of labels of Y).

K: Number of experts (K > 1).

Lambda: Penalty value for the experts.

Gamma: Penalty value for the gating network.

option: Optional.

verbose: Optional. A logical value indicating whether values of the log-likelihood should be printed during EM iterations.
Value

PoissonRMoE returns an object of class PRMoE.
See Also

PRMoE
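Examples

A minimal usage sketch. This assumes the package providing PoissonRMoE is installed and loaded; the simulated data and the penalty values (Lambda = 5, Gamma = 5) are illustrative choices, not values from the original documentation.

```r
# Simulate illustrative data: n observations, p standardized features
set.seed(1)
n <- 200; p <- 3
X <- scale(matrix(rnorm(n * p), n, p))   # each column has mean 0, variance 1
Xmat <- cbind(1, X)                      # prepend the intercept column (1, ..., 1)

# Poisson responses from a simple log-linear model (for illustration only)
Ymat <- rpois(n, lambda = exp(0.5 + X %*% c(0.3, -0.2, 0.1)))

# Fit a 2-expert Poisson regularized MoE; Lambda penalizes the experts,
# Gamma penalizes the gating network (both values are arbitrary examples)
fit <- PoissonRMoE(Xmat, Ymat, K = 2, Lambda = 5, Gamma = 5, verbose = TRUE)
```

The returned object is of class PRMoE, as described under Value.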