sllim {xLLiM}    R Documentation

EM Algorithm for Student Locally Linear Mapping

Usage

sllim(tapp, yapp, in_K, in_r = NULL, maxiter = 100, Lw = 0, cstr = NULL,
      verb = 0, in_theta = NULL, in_phi = NULL)
Arguments

tapp
    An L x N matrix of training responses, with variables in rows and observations in columns.
yapp
    A D x N matrix of training covariates, with variables in rows and observations in columns.
in_K
    Initial number of components.
in_r
    Initial assignments (default NULL).
maxiter
    Maximum number of iterations (default 100). The algorithm stops if the number of iterations exceeds maxiter.
Lw
    Number of hidden components (default 0).
cstr
    Constraints on the covariance matrices \Sigma_k (default NULL); see Details for the available options.
verb
    Verbosity: print out the progression of the algorithm. If verb = 0 (default), nothing is printed; otherwise the progression is printed out.
in_theta
    Initial parameters (default NULL), same structure as the theta element of the output of this function.
in_phi
    Initial parameters (default NULL), same structure as the phi element of the output of this function.
Details

This function implements the robust counterpart of the GLLiM model and should be applied when outliers are present in the data.
The SLLiM model implemented in this function addresses the following non-linear mapping issue:

E(Y | X=x) = g(x),

where Y is an L-vector of multivariate responses and X is a large D-vector of covariates' profiles such that D \gg L. The methods implemented in this package aim at estimating the non-linear regression function g.
First, the methods of this package are based on an inverse regression strategy. The inverse conditional relation p(X | Y) is specified in a way that the forward relation of interest p(Y | X) can be deduced in closed form. Under some hypotheses on the covariance structures, the large number D of covariates is handled by this inverse regression trick, which acts as a dimension reduction technique. The number of parameters to estimate is therefore drastically reduced. Second, we propose to approximate the non-linear regression function g by a piecewise affine function. To this end, a hidden discrete variable Z is introduced in order to divide the space into K regions such that an affine model holds between the responses Y and the variables X in each region k:
X = \sum_{k=1}^K I_{Z=k} (A_k Y + b_k + E_k)

where A_k is a D \times L matrix of coefficients for regression k, b_k is a D-vector of intercepts and E_k is a noise with covariance matrix proportional to \Sigma_k.
SLLiM is defined as the following hierarchical generalized Student mixture model for the inverse conditional density p(X | Y):

p(X=x | Y=y, Z=k; \theta, \phi) = S(x; A_k y + b_k, \Sigma_k, \alpha_k^x, \gamma_k^x)
p(Y=y | Z=k; \theta, \phi) = S(y; c_k, \Gamma_k, \alpha_k, 1)
p(Z=k | \phi) = \pi_k

where (\theta, \phi) are the sets of parameters \theta = (c_k, \Gamma_k, A_k, b_k, \Sigma_k)_{k=1}^K and \phi = (\pi_k, \alpha_k)_{k=1}^K. In the previous expressions, \alpha_k and (\alpha_k^x, \gamma_k^x) determine the heaviness of the tails of the generalized Student distribution, which gives robustness to the model. Note that \alpha_k^x = \alpha_k + L/2 and \gamma_k^x = 1 + \delta(y, c_k, \Gamma_k)/2, where \delta is the Mahalanobis distance.
The forward conditional density of interest can be deduced from these equations and is also a Student mixture of regressions model.
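To make this hierarchy concrete, the following sketch (illustrative only, not code from the package; all parameter values are arbitrary) simulates one observation from the inverse model using the Gamma scale-mixture representation of the generalized Student distribution: a latent weight W drawn from a Gamma(\alpha_k, 1) distribution is shared by Y and X within a component, which is consistent with the expressions \alpha_k^x = \alpha_k + L/2 and \gamma_k^x = 1 + \delta(y, c_k, \Gamma_k)/2 given above.

library(MASS)   # for mvrnorm (multivariate normal draws)

K <- 2; L <- 2; D <- 3
pi_k    <- c(0.5, 0.5)                              # mixture weights
alpha_k <- c(3, 3)                                  # tail-heaviness parameters
c_k     <- list(rep(0, L), rep(2, L))               # means of Y per component
Gamma_k <- list(diag(L), diag(L))                   # covariances of Y per component
A_k     <- list(matrix(1, D, L), matrix(-1, D, L))  # regression matrices
b_k     <- list(rep(0, D), rep(1, D))               # intercepts
Sigma_k <- list(0.1 * diag(D), 0.1 * diag(D))       # noise covariances

z <- sample(1:K, 1, prob = pi_k)                    # draw the component label Z
w <- rgamma(1, shape = alpha_k[z], rate = 1)        # latent Gamma weight (gamma = 1)
y <- mvrnorm(1, c_k[[z]], Gamma_k[[z]] / w)         # Y | Z=k is generalized Student
x <- mvrnorm(1, drop(A_k[[z]] %*% y + b_k[[z]]), Sigma_k[[z]] / w)  # X | Y, Z=k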
Like gllim, sllim allows the addition of latent variables in order to account for correlation among covariates or when the responses are assumed to be only partially observed. Adding latent factors is known to improve prediction accuracy, provided Lw is not too large with respect to the number of covariates. When latent factors are added, the dimension of the response is L = Lt + Lw, and L = Lt otherwise.
For SLLiM, the number of parameters to estimate is:

(K-1) + K(1 + DL + D + L_t + nbpar_{\Sigma} + nbpar_{\Gamma})

where L = L_w + L_t and nbpar_{\Sigma} (resp. nbpar_{\Gamma}) is the number of parameters in each of the large (resp. small) covariance matrices \Sigma_k (resp. \Gamma_k). For example,

- if the constraint on \Sigma_k is cstr$Sigma="i", then nbpar_{\Sigma} = 1, which is the default constraint in the gllim function;
- if the constraint on \Sigma_k is cstr$Sigma="d", then nbpar_{\Sigma} = D;
- if the constraint on \Sigma_k is cstr$Sigma="", then nbpar_{\Sigma} = D(D+1)/2;
- if the constraint on \Sigma_k is cstr$Sigma="*", then nbpar_{\Sigma} = D(D+1)/(2K).
The rule to compute the number of parameters of \Gamma_k is the same as for \Sigma_k, replacing D by L_t. Currently the \Gamma_k matrices are not constrained and nbpar_{\Gamma} = L_t(L_t+1)/2, because for identifiability reasons the L_w part is set to the identity matrix.
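As an illustration, the hypothetical helper below (not part of the package) evaluates this parameter count for a given choice of K, D, L_t, L_w and constraint on \Sigma_k.

## Hypothetical helper: number of SLLiM parameters for a given configuration.
nb_param_sllim <- function(K, D, Lt, Lw = 0, Sigma = "i") {
  L <- Lt + Lw
  if (Sigma == "i") nbpar_Sigma <- 1                        # isotropic
  if (Sigma == "d") nbpar_Sigma <- D                        # diagonal
  if (Sigma == "")  nbpar_Sigma <- D * (D + 1) / 2          # full, unconstrained
  if (Sigma == "*") nbpar_Sigma <- D * (D + 1) / (2 * K)    # full, shared across components
  nbpar_Gamma <- Lt * (Lt + 1) / 2   # Gamma_k unconstrained on its L_t block
  (K - 1) + K * (1 + D * L + D + Lt + nbpar_Sigma + nbpar_Gamma)
}
nb_param_sllim(K = 5, D = 50, Lt = 2, Lw = 0, Sigma = "i")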
The user must choose the number of mixture components K and, if needed, the number of latent factors L_w. For small datasets (fewer than 100 observations), we suggest selecting both (K, L_w) by minimizing the BIC criterion. For larger datasets, to save computation time, we suggest setting L_w by BIC while fixing K to a value large enough to capture non-linear relations between responses and covariates, yet small enough to keep several observations (at least 10) in each cluster. Indeed, for large datasets, the number of clusters should not have a strong impact on the results as long as it is sufficiently large.
Value

Returns a list with the following elements:

LLf
    Final log-likelihood.
LL
    Log-likelihood value at each iteration of the EM algorithm.
theta
    A list containing the estimations of parameters as follows:
c
    An L x K matrix of means c_k of the responses (Y), one column per component.
Gamma
    An L x L x K array of the K covariance matrices \Gamma_k of the responses (Y).
A
    A D x L x K array of the K regression matrices A_k.
b
    A D x K matrix whose columns are the intercept vectors b_k.
Sigma
    A D x D x K array of the K covariance matrices \Sigma_k of the noise.
nbpar
    The number of parameters estimated in the model.
phi
    A list containing the estimations of parameters as follows:
r
    An N x K matrix of posterior probabilities of component membership.
pi
    A vector of length K of mixture weights \pi_k.
alpha
    A vector of length K of parameters \alpha_k controlling the heaviness of the tails of each Student component.
Author(s)

Emeline Perthame (emeline.perthame@inria.fr), Florence Forbes (florence.forbes@inria.fr), Antoine Deleforge (antoine.deleforge@inria.fr)
References

[1] A. Deleforge, F. Forbes, and R. Horaud. High-dimensional regression with Gaussian mixtures and partially-latent response variables. Statistics and Computing, 25(5):893-911, 2015.

[2] E. Perthame, F. Forbes, and A. Deleforge. Inverse regression approach to robust nonlinear high-to-low dimensional mapping. Journal of Multivariate Analysis, 163(C):1-14, 2018. https://doi.org/10.1016/j.jmva.2017.09.009
See Also

xLLiM-package, emgm, sllim_inverse_map, gllim
Examples

data(data.xllim)
responses = data.xllim[1:2,] # 2 responses in rows and 100 observations in columns
covariates = data.xllim[3:52,] # 50 covariates in rows and 100 observations in columns
## Setting 5 components in the model
K = 5
## The model can be initialized by running an EM algorithm for Gaussian mixtures (EMGM)
r = emgm(rbind(responses, covariates), init=K)
## and then the sllim model is estimated
mod = sllim(responses, covariates, in_K=K, in_r=r)
## if initialization is not specified, the model is automatically initialized by EMGM
## mod = sllim(responses,covariates,in_K=K)
## Adding 1 latent factor
## mod = sllim(responses,covariates,in_K=K,in_r=r,Lw=1)
## Some constraints on the covariance structure of X can be added
## mod = sllim(responses,covariates,in_K=K,in_r=r,cstr=list(Sigma="i"))
# Isotropic covariance matrices
# (same variance among covariates but different in each component)
## mod = sllim(responses,covariates,in_K=K,in_r=r,cstr=list(Sigma="d"))
# Heteroskedastic covariance matrices
# (variances are different among covariates and in each component)
## mod = sllim(responses,covariates,in_K=K,in_r=r,cstr=list(Sigma=""))
# Unconstrained full covariance matrices
## mod = sllim(responses,covariates,in_K=K,in_r=r,cstr=list(Sigma="*"))
# Full covariance matrices but equal for all components
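## Hedged sketch: once fitted, the model can be used for forward prediction of
## responses from new covariates via sllim_inverse_map (see See Also). The call
## below assumes sllim_inverse_map takes a D x N matrix of new covariates and
## the object returned by sllim; check ?sllim_inverse_map for the exact arguments.
## mod_train = sllim(responses[,1:90], covariates[,1:90], in_K=K)
## pred = sllim_inverse_map(covariates[,91:100], mod_train)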