PoissonRMoE: Penalized MLE for the Poisson regularized Mixture of Experts.


View source: R/PoissonRMoE.R

Description

This function computes a penalized MLE for the Poisson regularized Mixture of Experts (MoE) model, given the penalty parameters Lambda (for the experts) and Gamma (for the gating network).
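As a rough sketch (this form is an assumption based on the general regularized-MoE framework, not stated on this page), the criterion being maximized is a Lasso-penalized log-likelihood:

$$PL(\theta) = L(\theta) \;-\; \Lambda \sum_{k=1}^{K} \sum_{j=1}^{p} |\beta_{kj}| \;-\; \Gamma \sum_{k=1}^{K-1} \sum_{j=1}^{p} |w_{kj}|$$

where $L(\theta)$ is the observed-data log-likelihood, $\beta_{kj}$ are the coefficients of the Poisson experts, and $w_{kj}$ are the coefficients of the gating network.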

Usage

PoissonRMoE(Xmat, Ymat, K, Lambda, Gamma, option = FALSE,
  verbose = TRUE)
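
A minimal, self-contained sketch of a call (the simulated data, dimensions, seed, and penalty values below are illustrative assumptions, not package defaults):

library(HDME)                            # the package this page documents

set.seed(1)
n <- 200; p <- 3
X <- scale(matrix(rnorm(n * p), n, p))   # standardize each feature: mean 0, variance 1
Xmat <- cbind(1, X)                      # prepend the intercept column (1,1,...,1)
Ymat <- rpois(n, lambda = 2)             # illustrative count response
fit <- PoissonRMoE(Xmat, Ymat, K = 2, Lambda = 0.1, Gamma = 0.1)

Here Ymat is drawn independently of Xmat purely to keep the sketch runnable; with real data it would be the observed counts.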

Arguments

Xmat

Matrix of explanatory variables. Each feature should be standardized to have mean 0 and variance 1, and a column of ones (1,1,...,1) must be included for the intercept.

Ymat

Vector of the response variable. For the Poisson model considered here, Y should be a vector of non-negative integer counts.

K

Number of experts (K > 1).

Lambda

Penalty value for the experts.

Gamma

Penalty value for the gating network.

option

Optional. If option = TRUE, a proximal Newton-type method is used; if option = FALSE (the default), a proximal Newton method is used.

verbose

Optional. A logical value indicating whether the log-likelihood values should be printed during the EM iterations.

Value

PoissonRMoE returns an object of class PRMoE.
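
The components of a PRMoE object are not listed on this page; a generic inspection (continuing the sketch after the Usage section) is a reasonable first step:

str(fit)   # list the components of the fitted PRMoE object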

See Also

PRMoE

