This function allows for fitting Dirichlet regression models using two different parametrizations.

```
DirichReg(formula, data, model = c("common", "alternative"),
          subset, sub.comp, base, weights, control, verbosity = 0)
```

| Argument | Description |
|---|---|
| `formula` | the model formula (for different specifications see "Details") |
| `data` | a `data.frame` containing the covariates and the dependent variable (prepared by `DR_data`) |
| `model` | specifies whether the `"common"` or `"alternative"` parametrization is used (see "Details") |
| `subset` | estimates the model for a subset of the data |
| `sub.comp` | analyze a subcomposition by selecting specific components (see "Details") |
| `base` | redefine the base variable |
| `weights` | frequency weights |
| `control` | a list containing control parameters used for the optimization |
| `verbosity` | prints information about the function's progress, see "Details" |

`formula` determines the predictors used.
The response **must** be prepared by `DR_data` and can optionally be stored in the object containing all covariates, which is then passed as the argument `data`.
(Although "on-the-fly" processing of `DR_data` in a formula works, it is only intended for testing purposes and may be removed at any time – use at your own risk.)

There are two different parametrizations (controlled by the argument `model`, see below):

- the *"common"* parametrization, which models each *alpha* by a (possibly individual) set of predictors, and
- the *"alternative"* parametrization, which models expected values (*mu*; as in multinomial logistic regression) and precision parameters (*phi*) with two sets of predictors.

As the two models offer different modeling strategies, the specification of their formulae differs:

**Common parametrization:** The simplest possible model here is to include only an intercept for all components.
If `DV` is the *'dependent variable'* (i.e., compositional data) with three components, we can request this null model by `DV ~ 1`.
We always have at least two dependent variables, so simple formulae like the one above will be expanded to `DV ~ 1 | 1 | 1`, because `DV` has three components.
Likewise, it is possible to specify a common set of predictors for all components, as in `DV ~ p1 * p2`, where `p1` and `p2` are predictors.

If the covariates of the components shall differ, one has to set up a complete formula for each subcomposition, using `|` as separators between the components; for example, `DV ~ p1 | p1 + p2 | p1 * p2` will lead to a model where the first response in `DV` is modeled using `p1`, the second is predicted by `p1 + p2`, and the third by `p1 * p2`.
Note that if you use the latter approach, the predictors have to be stated explicitly for all response variables.
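It may help to note that such multi-part formulae are ordinary R formula objects: `|` is simply parsed as an operator on the right-hand side, and the splitting itself is done inside `DirichReg`. A base-R illustration with the placeholder names `DV`, `p1`, and `p2`:

```r
# A three-part formula as used for the "common" parametrization;
# DV, p1 and p2 are placeholders and need not exist when the
# formula object is created.
f <- DV ~ p1 | p1 + p2 | p1 * p2

# A formula has three parts: `~`, the left-hand side, the right-hand side
length(f)

# The top-level operator of the right-hand side is `|`
as.character(f[[3]][[1]])
```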

**Alternative parametrization:** The simplest possible model here is to include an intercept for all components (except the base) and an intercept for precision. This can be achieved by `DV ~ 1`, which is expanded to `DV ~ 1 | 1`. The part modeling the 'mean' (first element on the right-hand side) is mandatory; if no specification for precision is included, an intercept will be added. Note that you need to set `model = "alternative"` to use this parametrization!

The alternative parametrization consists of two parts: modeled expected values (*mu*) and their 'precision' (*phi*).
As in multinomial logistic regression, one response variable is omitted (by default the first, but this can be changed by the `base` argument in `DR_data` or `DirichReg`) and for the rest a set of predictors is used with a multinomial logit-link.
For precision, a different set of predictors can be set up using a log-link.

`DV ~ p1 * p2 | p1 + p2` will set up a model where the expected values are predicted by `p1 * p2` and precision is modeled using `p1 + p2`.

The `data` argument accepts a `data.frame` that **must** include the dependent variable as a named element (see the examples for how to do this).

The base component (i.e., the omitted component) is initially set during data preparation with `DR_data`, but can easily be changed using the argument `base`, which takes integer values from 1 to the maximum number of components.

If a data set contains a large number of components, of which only a few are relevant, the latter can be 'sorted out' with `sub.comp`; the irrelevant (i.e., not selected) components are aggregated into a single variable (row sums) that automatically becomes the base category for the model, unless specified otherwise by `base`. The positioning of variables will necessarily change: the aggregated variable takes the first column and the others are appended in their order of selection.
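The effect of this aggregation can be sketched in base R (toy data and component names are made up; `DirichReg` performs this step internally):

```r
# Toy composition with five components; suppose only components 2 and 4
# are selected via sub.comp = c(2, 4).
comp <- data.frame(c1 = c(0.1, 0.2), c2 = c(0.3, 0.1), c3 = c(0.2, 0.3),
                   c4 = c(0.3, 0.2), c5 = c(0.1, 0.2))
selected <- c(2, 4)

# The unselected components are aggregated by row sums into a single
# variable that takes the first column; the selected components follow
# in their order of selection.
sub <- cbind(base = rowSums(comp[, -selected]), comp[, selected])

colnames(sub)   # "base" "c2" "c4"
rowSums(sub)    # rows still sum to 1
```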

Using `subset`, the model can be fitted only to a part of the data; for more information about this functionality, see `subset`.

Note that, unlike in `glm`, `weights` are **not** treated as prior weights, but as frequency weights!
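A frequency weight of *n* is equivalent to observing the same case *n* times. A base-R sketch of that equivalence (toy data; `DirichReg` incorporates the weights in the likelihood rather than literally duplicating rows):

```r
d <- data.frame(x = c(1.5, 2.0, 3.5), w = c(2, 1, 3))

# Replicating each row according to its frequency weight ...
expanded <- d$x[rep(seq_len(nrow(d)), d$w)]

# ... yields the same weighted statistics, e.g. the mean:
weighted.mean(d$x, d$w)
mean(expanded)
```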

Using the `control` argument, the settings passed to the optimizers can be altered.
This argument takes a named list.
To supply user-defined starting values, use `control = list(sv = c(...))` and supply a vector containing initial values for all parameters.
Optimizer-specific options include the number of iterations (`iterlim = 1000`) and the convergence criteria for the BFGS- and NR-optimization (`tol1 = 1e-5` and `tol2 = 1e-10`).
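For instance, a control list that raises the iteration limit and tightens the BFGS criterion could look as follows (the values are purely illustrative, not recommendations):

```r
# Named list passed as the `control` argument of DirichReg()
ctrl <- list(iterlim = 5000,   # allow more optimizer iterations
             tol1    = 1e-8,   # convergence criterion for the BFGS step
             tol2    = 1e-10)  # convergence criterion for the NR step
# usage (sketch): DirichReg(Y ~ depth, ALake, control = ctrl)
```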

`verbosity` takes integer values from `0` to `4`:

- `0`: no information is printed (default).
- `1`: prints information about the 3 stages (preparation, starting values, estimation).
- `2`: prints little information about the optimization (`verbosity` values greater than one are passed as `print.level = verbosity - 1` to `maxBFGS` and `maxNR`).
- `3`: prints more information about the optimization.
- `4`: prints all information about the optimization.

The fitted model is returned as a list with the elements `call`, `parametrization`, `varnames`, `n.vars`, `dims`, `Y`, `X`, `Z`, `sub.comp`, `base`, `weights`, `orig.resp`, `data`, `d`, `formula`, `mf_formula`, `npar`, `coefficients`, `coefnames`, `fitted.values`, `logLik`, `vcov`, `hessian`, `se`, and `optimization`.

Marco J. Maier

```
library("DirichletReg")

ALake <- ArcticLake
ALake$Y <- DR_data(ALake[, 1:3])
# fit a quadratic Dirichlet regression model ("common")
res1 <- DirichReg(Y ~ depth + I(depth^2), ALake)
# fit a Dirichlet regression with a quadratic predictor for the mean and
# a linear predictor for precision ("alternative")
res2 <- DirichReg(Y ~ depth + I(depth^2) | depth, ALake, model = "alternative")
# test both models
anova(res1, res2)
res1
summary(res2)
```
