EM: EM Function

View source: R/EM.R

R Documentation

EM Function

Description

This function implements the EM (Expectation-Maximization) algorithm for data clustering, in this case for Gaussian mixture models.

Usage

EM(x0, k)

Arguments

x0

input data (vector)

k

number of clusters (mixture components)

Details

In the initial step (Step_0) the user must choose initial parameters for the Gaussian mixture model. These can be arbitrary random values, or more informed guesses based on a preliminary look at the data. A minimal sketch of one way to do this for a one-dimensional mixture is shown below.
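
The following sketch is only an illustration, not the package's internal code; the helper name init_params and the list layout (mu, sigma, lambda) are assumptions:

     ## Hypothetical sketch: starting values for a 1-D Gaussian mixture
     init_params <- function(x0, k) {
       list(
         mu     = sample(x0, k),    # random observations as initial means
         sigma  = rep(sd(x0), k),   # overall spread as initial sd of each component
         lambda = rep(1 / k, k)     # equal mixing weights
       )
     }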

In the first step (E_Step), the function E_Step computes the posterior probabilities (stored in "posterior.df") for each item of the input dataset; a minimal sketch of such a computation follows.
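
This sketch shows what an E step for a one-dimensional Gaussian mixture typically looks like; it is an illustration, not sepro's E.Step, and the names e_step_sketch, mu, sigma and lambda are assumptions:

     ## Hypothetical E step: posterior probability of each component for each point
     e_step_sketch <- function(x0, mu, sigma, lambda) {
       dens <- sapply(seq_along(mu),
                      function(j) lambda[j] * dnorm(x0, mu[j], sigma[j]))
       dens / rowSums(dens)   # normalize so each row sums to 1
     }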

In the second step (M_Step), the function M_Step updates the component parameters by maximizing the likelihood function, as sketched below.
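
A corresponding M-step sketch under the same assumptions (weighted maximum-likelihood updates for a 1-D Gaussian mixture; again an illustration, not the package's M.Step):

     ## Hypothetical M step: update weights, means and sds from the posteriors
     m_step_sketch <- function(x0, posterior) {
       nk     <- colSums(posterior)             # effective number of points per component
       lambda <- nk / length(x0)                # updated mixing weights
       mu     <- colSums(posterior * x0) / nk   # posterior-weighted means
       sigma  <- sqrt(sapply(seq_along(mu),
                             function(j) sum(posterior[, j] * (x0 - mu[j])^2) / nk[j]))
       list(mu = mu, sigma = sigma, lambda = lambda)
     }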

After that, the E and M steps are repeated, for at most 100 iterations or until the difference between the current and previous values of the log-likelihood is less than 10^(-6), i.e. the log-likelihood has effectively stopped changing. A sketch of this driver loop follows.
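
Putting the pieces together, the alternation and stopping rule described above could look like the sketch below; it reuses the hypothetical helpers from the previous sketches and is not the package's own code:

     ## Hypothetical driver loop: alternate E and M steps until the log-likelihood
     ## changes by less than 1e-6 or 100 iterations are reached
     em_sketch <- function(x0, k, max_iter = 100, tol = 1e-6) {
       p <- init_params(x0, k)
       loglik_old <- -Inf
       for (iter in seq_len(max_iter)) {
         post <- e_step_sketch(x0, p$mu, p$sigma, p$lambda)
         p    <- m_step_sketch(x0, post)
         dens <- sapply(seq_along(p$mu),
                        function(j) p$lambda[j] * dnorm(x0, p$mu[j], p$sigma[j]))
         loglik <- sum(log(rowSums(dens)))            # current log-likelihood
         if (abs(loglik - loglik_old) < tol) break    # converged
         loglik_old <- loglik
       }
       list(parameters = p, posterior = post)
     }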

In the end, the user obtains the updated posterior probabilities for each item and the updated parameters of each component distribution (see E.Step and M.Step).

Value

Estimated parameters for each component distribution

Author(s)

hdrbv

Examples

For example, suppose we want to separate 2 Gaussian distributions
and estimate the parameters of each one.
Let us assume that the vector x0 is a mixture of these distributions.
Then we can apply the EM algorithm:
EM1 <- sepro::EM(x0 = x0, k = 2)
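
As a fuller, self-contained illustration, x0 could be simulated from two Gaussian components; the means, standard deviations and sample sizes below are arbitrary choices, and the sepro package is assumed to be installed:

     set.seed(1)
     x0  <- c(rnorm(200, mean = 0, sd = 1),    # first component
              rnorm(200, mean = 5, sd = 1.5))  # second component
     EM1 <- sepro::EM(x0 = x0, k = 2)          # estimate both sets of parameters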
