Statistical Model

The automated foehn classification foehnix is based on a two-component mixture model. The basic idea is that two unobservable components (or clusters) exist: one component for situations without foehn, and one component for situations with foehn. foehnix uses an unsupervised statistical method to identify the two components based on a set of observed values (e.g., wind speed, gust speed, potential temperature differences) and to model the probability that a specific observation is related to a foehn event.

The statistical model consists of two parts: one part to identify the two components, and a second part modelling the probability that a specific observation belongs to component 2 rather than component 1. The latter is known as the concomitant model.

The density of a two-component mixture distribution $h(\dots)$ in its general form is specified as follows for a specific observation $i$:

$h(y_i, \mathbf{x}_i, \mathit{\theta}, \mathit{\alpha}) = (1 - \pi(\mathbf{x}_i, \mathit{\alpha})) \cdot f(y_i, \mathit{\theta}_1) + \pi(\mathbf{x}_i, \mathit{\alpha}) \cdot f(y_i, \mathit{\theta}_2)$

where $y$ is the covariate for the first part of the statistical model to identify components 1 and 2, and $\mathbf{x}$ are the covariates for the concomitant model. The density of the mixture distribution $h$ is the sum (or superposition) of the densities of the two components ($f$; e.g., the density of the Gaussian distribution), weighted by the probability $\pi$ from the concomitant model, which describes whether or not a specific observation belongs to component 2. $\mathit{\theta} = (\mathit{\theta}_1, \mathit{\theta}_2)$ are the distribution parameters of the components, and $\mathit{\alpha}$ are the parameters of the concomitant model.
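To make the notation concrete, the following minimal R sketch evaluates such a mixture density for two Gaussian components and a fixed $\pi$; all parameter values are hypothetical and purely illustrative:

```r
# Minimal sketch of the mixture density h for Gaussian components and
# a fixed probability `prob` (corresponding to pi); all values are
# hypothetical, chosen only for illustration.
y  <- seq(-5, 15, length.out = 201)   # covariate, e.g., potential temperature difference
d1 <- dnorm(y, mean = 0, sd = 1.5)    # f(y, theta_1): component 1 (no foehn)
d2 <- dnorm(y, mean = 8, sd = 2.0)    # f(y, theta_2): component 2 (foehn)
prob <- 0.3                           # pi: probability of belonging to component 2

h <- (1 - prob) * d1 + prob * d2      # density of the mixture distribution
```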

The concomitant model can be any model which fulfills $\pi \in~]0,1[$, e.g., a constant value or intercept-only model (mixture model without concomitants), or any kind of probability model. foehnix uses a logistic regression model of the following form:

$\pi(\mathbf{x}, \mathit{\alpha}) = \frac{\exp(\mathbf{x}^\top \mathit{\alpha})}{1 + \exp(\mathbf{x}^\top \mathit{\alpha})}$

The final foehn probability of the two-component mixture model, also known as the a-posteriori probability, is given by:

$\hat{\mathit{p}} = \frac{\pi \cdot f(y, \mathit{\theta}_2)}{(1 - \pi) \cdot f(y, \mathit{\theta}_1) + \pi \cdot f(y, \mathit{\theta}_2)}$

where $\hat{\mathit{p}}$ in our case represents the probability of foehn. All one has to know are the parameters $\mathit{\theta}$ and $\mathit{\alpha}$, which can be estimated using an appropriate M-estimator such as maximum likelihood.
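A small R sketch of this a-posteriori probability, combining hypothetical Gaussian components with a logistic concomitant model ($\pi$ computed via the inverse logit, plogis() in R); the design matrix, coefficients, and all data are made up:

```r
# Hypothetical example: a-posteriori foehn probability for four
# observations. X is a concomitant design matrix (intercept plus a
# made-up wind speed covariate), alpha are made-up coefficients.
y     <- c(1.2, 7.9, 0.3, 9.4)             # covariate of the mixture model
X     <- cbind(1, c(2, 12, 1, 15))         # concomitant covariates x (with intercept)
alpha <- c(-3, 0.4)                        # concomitant coefficients
prob  <- plogis(drop(X %*% alpha))         # pi = exp(x'alpha) / (1 + exp(x'alpha))

d1   <- dnorm(y, mean = 0, sd = 1.5)       # f(y, theta_1)
d2   <- dnorm(y, mean = 8, sd = 2.0)       # f(y, theta_2)
post <- prob * d2 / ((1 - prob) * d1 + prob * d2)   # hat(p): P(foehn | y, x)
```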

Parameter Estimation

The maximum likelihood of a mixture model can usually not be maximized directly. One possibility to estimate the coefficients is an iterative expectation maximization (EM) algorithm. The EM algorithm optimizes the following log-likelihood:

$\ell = \sum_{i=1}^{N} \Big[ (1 - \hat{p}_i) \cdot \log\big(f(y_i, \mathit{\theta}_1)\big) + \hat{p}_i \cdot \log\big(f(y_i, \mathit{\theta}_2)\big) + (1 - \hat{p}_i) \cdot \log(1 - \pi_i) + \hat{p}_i \cdot \log(\pi_i) \Big]$

with $\hat{\mathit{p}}$ as specified above (the a-posteriori probability) and $N$ the number of observations.
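Given component densities, concomitant probabilities, and posterior weights, this log-likelihood can be evaluated directly; a sketch with hypothetical values:

```r
# Evaluate the log-likelihood above for hypothetical values.
y    <- c(1.2, 7.9, 0.3, 9.4)
d1   <- dnorm(y, 0, 1.5)                            # f(y, theta_1)
d2   <- dnorm(y, 8, 2.0)                            # f(y, theta_2)
prob <- c(0.10, 0.80, 0.05, 0.90)                   # pi_i from the concomitant model
post <- prob * d2 / ((1 - prob) * d1 + prob * d2)   # hat(p)_i

ll <- sum((1 - post) * log(d1) + post * log(d2) +
          (1 - post) * log(1 - prob) + post * log(prob))
```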

The EM algorithm is specified as follows:

  1. Expectation (E-step): given the current parameters, compute the a-posteriori probabilities $\hat{\mathit{p}}$.
  2. Maximization (M-step): given $\hat{\mathit{p}}$, update the parameters $\mathit{\theta}$ of the two components and, if concomitants are used, the parameters $\mathit{\alpha}$ of the concomitant model.

The EM steps are repeated until the likelihood improvement falls below a certain threshold or the maximum number of iterations is reached.

Gaussian Mixture Model Without Concomitants

The simplest case is a Gaussian two-component mixture model without concomitants. In this case the density of the two components is the density $\phi$ of the Gaussian distribution with its parameters $\mathit{\theta}_1 = (\mu_1, \sigma_1)$ and $\mathit{\theta}_2 = (\mu_2, \sigma_2)$, where $\mu$ and $\sigma$ are the location and scale parameter of the Gaussian distribution, or mean and standard deviation.

Initialization step

First, initial values for the parameters ($\mathit{\theta}$) and the posterior weights ($\hat{\mathit{p}}$) have to be specified. $\mathit{\alpha}$ does not have to be initialized as no concomitant model is used in this case. To be able to do so, we have to attribute each observation $y_i~\forall~i = 1, \dots, N$ to one of the two components. This initial membership will be denoted as $\mathit{z}$ and is 1 if observation $y_i$ belongs to component 2 and 0 otherwise. This initial attribution defines that observations with high values of $y$ belong to component 2, and observations with low values of $y$ to component 1.

Note: Depending on the model specification this can lead to models where foehnix returns the probability of no foehn rather than the a-posteriori probability of foehn. However, the switch argument of the foehnix(...) function allows you to control this behavior (see foehnix manual page).

foehnix uses the following initialization for the two-component Gaussian mixture model without concomitants (an R sketch of these steps follows the list):

  1. Initialize class membership: $z_i = \begin{cases}1 & \text{if}~y_i \ge \bar{y} \\ 0 & \text{else}\end{cases}$
  2. Initial parameters for $\mathit{\theta}^{(0)}$ using weighted empirical moments for $\mu_1$ and $\mu_2$, and the standard deviation of $y$ as initial guess for $\sigma_1$ and $\sigma_2$:
    • $\mu_1^{(0)} = \frac{1}{\sum_{i=1}^{N} (1-z_i)} \sum_{i=1}^{N} (1-z_i) \cdot y_i$
    • $\mu_2^{(0)} = \frac{1}{\sum_{i=1}^{N} z_i} \sum_{i=1}^{N} z_i \cdot y_i$
    • $\sigma_1^{(0)} = \sigma_2^{(0)} = \big(\frac{1}{N} \sum_{i=1}^{N} (y_i - \bar{y})^2\big)^\frac{1}{2}$
  3. Initialize $\mathit{\pi}^{(0)} = 0.5$
  4. Given $\mathit{\theta}^{(0)}$ and $\mathit{\pi}^{(0)}$: calculate the a-posteriori probability:
    • $\hat{\mathit{p}}^{(0)} = \frac{\mathit{\pi}^{(0)} \cdot \phi(\mathit{y}, \mathit{\theta}_2^{(0)})}{ (1 - \mathit{\pi}^{(0)}) \cdot \phi(\mathit{y}, \mathit{\theta}_1^{(0)}) + \mathit{\pi}^{(0)} \cdot \phi(\mathit{y}, \mathit{\theta}_2^{(0)})}$
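A minimal R sketch of these four initialization steps, assuming a synthetic covariate y:

```r
# Initialization sketch for the Gaussian mixture model without
# concomitants; y is synthetic data, purely for illustration.
set.seed(1)
y <- c(rnorm(300, mean = 0, sd = 1.5), rnorm(100, mean = 8, sd = 2))

z      <- as.numeric(y >= mean(y))            # 1. initial class membership
mu1    <- sum((1 - z) * y) / sum(1 - z)       # 2. weighted empirical moments
mu2    <- sum(z * y) / sum(z)
sigma1 <- sigma2 <- sqrt(mean((y - mean(y))^2))
prob   <- 0.5                                 # 3. pi^(0)

d1   <- dnorm(y, mu1, sigma1)                 # 4. a-posteriori probabilities
d2   <- dnorm(y, mu2, sigma2)
post <- prob * d2 / ((1 - prob) * d1 + prob * d2)
```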

Once the required elements have been initialized, start the EM algorithm for $j = 1, \dots, \text{maxit}$ (see the R sketch after this list):

  1. Update $\pi^{(j)} = \text{mean}(\hat{\mathit{p}}^{(j-1)})$
  2. Obtain new $\mathit{\theta}^{(j)}$ using $\hat{\mathit{p}}^{(j-1)}$:
    • $\mu_1^{(j)} = \frac{1}{\sum_{i=1}^{N} (1 - \hat{p}_i^{(j-1)})} \sum_{i=1}^{N} (1 - \hat{p}_i^{(j-1)}) \cdot y_i$
    • $\mu_2^{(j)} = \frac{1}{\sum_{i=1}^{N} \hat{p}_i^{(j-1)}} \sum_{i=1}^{N} \hat{p}_i^{(j-1)} \cdot y_i$
    • $\sigma_1^{(j)} = \Big(\frac{1}{\sum_{i=1}^{N} (1-\hat{p}_i^{(j-1)})} \sum_{i=1}^{N} (1 - \hat{p}_i^{(j-1)}) \cdot (y_i - \mu_1^{(j)})^2\Big)^\frac{1}{2}$
    • $\sigma_2^{(j)} = \Big(\frac{1}{\sum_{i=1}^{N} \hat{p}_i^{(j-1)}} \sum_{i=1}^{N} \hat{p}_i^{(j-1)} \cdot (y_i - \mu_2^{(j)})^2\Big)^\frac{1}{2}$
  3. Update posterior probabilities $\hat{\mathit{p}}^{(j)}$:
    • $\hat{\mathit{p}}^{(j)} = \frac{\mathit{\pi}^{(j)} \cdot \phi(\mathit{y}, \mathit{\theta}_2^{(j)})}{(1 - \mathit{\pi}^{(j)}) \cdot \phi(\mathit{y}, \mathit{\theta}_1^{(j)}) + \mathit{\pi}^{(j)} \cdot \phi(\mathit{y}, \mathit{\theta}_2^{(j)})}$
  4. Calculate the likelihood $\ell^{(j)}$. If $j = 1$, skip the convergence check and proceed with the next iteration (step 1).
  5. For $j > 1$: if $(\ell^{(j)} - \ell^{(j-1)}) < \text{tol}$, the likelihood could not be improved in iteration $j$ (converged or stuck): stop the EM algorithm and return the parameters of iteration $j-1$. If $j = \text{maxit}$: the maximum number of iterations is reached, stop the EM algorithm and return the parameters of iteration $j$. Else proceed with step 1 until one of the stopping criteria is reached.
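The full loop might look as follows in R, continuing from the initialization sketch above (maxit and tol are hypothetical control parameters):

```r
# EM loop sketch for the Gaussian mixture model without concomitants;
# continues from the initialization sketch (y, prob, post).
maxit <- 100; tol <- 1e-8
ll_old <- -Inf
for (j in seq_len(maxit)) {
    prob   <- mean(post)                              # 1. update pi
    mu1    <- sum((1 - post) * y) / sum(1 - post)     # 2. update theta
    mu2    <- sum(post * y) / sum(post)
    sigma1 <- sqrt(sum((1 - post) * (y - mu1)^2) / sum(1 - post))
    sigma2 <- sqrt(sum(post * (y - mu2)^2) / sum(post))

    d1   <- dnorm(y, mu1, sigma1)                     # 3. update posteriors
    d2   <- dnorm(y, mu2, sigma2)
    post <- prob * d2 / ((1 - prob) * d1 + prob * d2)

    ll <- sum((1 - post) * log(d1) + post * log(d2) + # 4. log-likelihood
              (1 - post) * log(1 - prob) + post * log(prob))
    if (j > 1 && (ll - ll_old) < tol) break           # 5. stopping criteria
    ll_old <- ll
}
```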

Gaussian Mixture Model With Concomitants

The optimizer for a two-component Gaussian mixture model with additional concomitants is very similar, except that we also have to update the concomitant model (the logistic regression model). For mixture models with concomitants the probabilities $\mathit{\pi}$ are a function of the concomitant covariates $\mathbf{x}$ and the regression coefficients $\mathit{\alpha}$.

The following algorithm is used (the concomitant initialization in step 3 is sketched in R after the list):

  1. Initialize class membership $\mathit{z}$ as for the Gaussian mixture model without concomitants.
  2. Initialize coefficients $\mathit{\theta}^{(0)}$ as for the Gaussian mixture model without concomitants.
  3. Given $\mathit{z}$ and $\mathbf{x}$: estimate the logistic regression model to obtain the parameters $\mathit{\alpha}^{(0)}$ and calculate $\mathit{\pi}^{(0)} = \frac{\exp(\mathbf{x}^\top \mathit{\alpha}^{(0)})}{1 + \exp(\mathbf{x}^\top \mathit{\alpha}^{(0)})}$ (see logistic regression vignette).
  4. Calculate a-posteriori probability $\hat{\mathit{p}}^{(0)}$ as for the Gaussian mixture model without concomitants.
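A sketch of this initialization in R; here a standard glm() stands in for the package's own logistic regression solver, and ff is a made-up concomitant covariate:

```r
# Initialization sketch with a concomitant model: the initial class
# membership z serves as response of the logistic regression.
set.seed(1)
y  <- c(rnorm(300, 0, 1.5), rnorm(100, 8, 2))   # synthetic mixture covariate
ff <- c(runif(300, 0, 8), runif(100, 4, 20))    # made-up concomitant (e.g., wind speed)

z    <- as.numeric(y >= mean(y))                # 1. initial class membership
ccm  <- glm(z ~ ff, family = binomial())        # 3. estimate alpha^(0)
prob <- fitted(ccm)                             #    pi^(0) = exp(x'a) / (1 + exp(x'a))
```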

The EM algorithm for $j = 1, \dots, \text{maxit}$ (the concomitant update in step 1 is sketched in R after the list):

  1. Update $\pi^{(j)}$ by updating the concomitant model (logistic regression model) using $\hat{\mathit{p}}^{(j-1)}$ as response for the concomitant model (see logistic regression vignette).
  2. Obtain new $\mathit{\theta}^{(j)}$ as for the Gaussian mixture model without concomitants.
  3. Update posterior probabilities $\hat{\mathit{p}}^{(j)}$ as for the Gaussian mixture model without concomitants.
  4. Calculate likelihood as for the Gaussian mixture model without concomitants.
  5. As for the Gaussian mixture model without concomitants: check the stopping criteria and proceed with step 1 until one of them is reached.
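Step 1, refitting the concomitant model with the posterior probabilities as response, can be sketched as follows, continuing from the sketches above; quasibinomial() is used because the response is fractional, and a plain glm() again stands in for the package's own solver:

```r
# Sketch of the concomitant update within the EM loop: the posterior
# probabilities `post` (values in [0, 1]) are the response; glm() with
# quasibinomial() accepts such non-integer responses without warnings.
ccm  <- glm(post ~ ff, family = quasibinomial())  # update alpha^(j)
prob <- fitted(ccm)                               # updated pi^(j)
```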

Logistic Mixture Model

The logistic two-component mixture models can be estimated in the same way as the Gaussian ones, except that the component density is the density of the logistic distribution, and that the weighted empirical moments for $\sigma_1$ and $\sigma_2$, the scale parameters of the logistic distribution, are now:

$\sigma_1^{(j)} = \frac{\sqrt{3}}{\pi}\Big(\frac{1}{\sum_{i=1}^{N} (1-\hat{p}_i^{(j-1)})} \sum_{i=1}^{N} (1 - \hat{p}_i^{(j-1)}) \cdot (y_i - \mu_1^{(j)})^2\Big)^\frac{1}{2}, \qquad \sigma_2^{(j)} = \frac{\sqrt{3}}{\pi}\Big(\frac{1}{\sum_{i=1}^{N} \hat{p}_i^{(j-1)}} \sum_{i=1}^{N} \hat{p}_i^{(j-1)} \cdot (y_i - \mu_2^{(j)})^2\Big)^\frac{1}{2}$

where $\pi$ here denotes the mathematical constant: the variance of a logistic distribution with scale $\sigma$ is $\sigma^2 \pi^2 / 3$, so the weighted standard deviation is multiplied by $\sqrt{3}/\pi$ to obtain the scale.
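In R, the component density is then evaluated with dlogis(); a small sketch of the standard-deviation-to-scale conversion with hypothetical values:

```r
# The logistic scale parameter equals the standard deviation times
# sqrt(3)/pi (pi = 3.1415...); all values are hypothetical.
y     <- c(1.2, 7.9, 0.3, 9.4)
sd1   <- 1.5                                      # weighted standard deviation, component 1
scale <- sd1 * sqrt(3) / pi                       # convert to logistic scale
d1    <- dlogis(y, location = 0, scale = scale)   # f(y, theta_1) for the logistic family
```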

Censored and Truncated Models

In case of a censored or truncated mixture model the distributional parameters $\mathit{\theta}$ of the components cannot be calculated using weighted empirical moments. In these cases a numerical likelihood-based solver is used to estimate $\mu_1$, $\mu_2$, $\sigma_1$, and $\sigma_2$.
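As an illustration of such a numerical solver (not foehnix's actual implementation), the following sketch estimates the parameters of a single left-censored Gaussian component by weighted maximum likelihood via optim(); the censoring point left and all data are hypothetical:

```r
# Weighted ML for one left-censored Gaussian component: observations
# at or below `left` only contribute the probability mass P(Y <= left).
censored_fit <- function(y, w, left = 0) {
    nll <- function(par) {
        mu <- par[1]; sigma <- exp(par[2])           # log-scale keeps sigma > 0
        dens <- ifelse(y <= left,
                       pnorm(left, mu, sigma),       # censored contribution
                       dnorm(y, mu, sigma))          # uncensored density
        -sum(w * log(dens))                          # weighted negative log-likelihood
    }
    opt <- optim(c(mean(y), log(sd(y))), nll)
    c(mu = opt$par[1], sigma = exp(opt$par[2]))
}

# Hypothetical usage with synthetic, left-censored data and unit weights:
set.seed(1)
y <- pmax(rnorm(200, mean = 1, sd = 2), 0)
censored_fit(y, w = rep(1, length(y)), left = 0)
```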


