The function computes the EP (expectation propagation) approximation of a logit regression with a Gaussian prior. The user must specify the design matrix, the response vector, and the prior variance. For guidance on the default prior variance, see Chopin and Ridgway [2015].

```
EPlogit(X, Y, s)
```

`X`
Design matrix. The matrix should include a constant column if an intercept (bias) is to be estimated.

`Y`
Response vector. The vector should take values in {0, 1}.

`s`
Prior variance. The prior is taken to be a spherical Gaussian, so the variance is specified as a single scalar. For default choices, see Chopin and Ridgway [2015].

The implementation is based on the remarks of Chopin and Ridgway [2015] and computes a Gaussian approximation to the Bayesian logit model. The approximation can serve as an efficient estimator in its own right or as the starting point for Monte Carlo algorithms. The output consists of the parameters of the Gaussian approximation (mean and variance matrix) together with an approximation to the log marginal likelihood.

`m`
Mean of the Gaussian approximation.

`V`
Variance matrix of the Gaussian approximation.

`Z`
Approximated log marginal likelihood.

The current implementation does not include damping or fractional EP (hopefully it will in a future version). This may result in poor performance on large datasets.

More priors and models should be available shortly.

James Ridgway

N. Chopin and J. Ridgway. Leave Pima Indians alone: binary regression as a benchmark for Bayesian computation. arXiv:1506.08640, 2015.

```
library(MASS)  # provides the Pima.tr data set
data(Pima.tr)
Y <- as.matrix(as.numeric(Pima.tr[, 8])) - 1  # recode the factor response to 0/1
X <- cbind(1, data.matrix(Pima.tr[, 1:7]))    # add a constant column for the intercept
Sol <- EPlogit(X, Y, 100)
```
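As a usage sketch, the returned approximation can be turned into fitted probabilities. Assuming the result is a list with components `m`, `V`, and `Z` as documented above (the component names are an assumption here), one common approach moderates the logistic sigmoid by the predictive variance of the linear score; this correction is a standard Gaussian-approximation trick, not part of the package itself:

```
# Illustration only: approximate fitted probabilities from the EP output.
# Assumes Sol$m (mean) and Sol$V (covariance) as returned by EPlogit(X, Y, s).
mu <- X %*% Sol$m                          # predictive mean of the linear score
s2 <- rowSums((X %*% Sol$V) * X)           # predictive variance, diag(X V X')
p  <- plogis(mu / sqrt(1 + pi * s2 / 8))   # variance-moderated probabilities
```

The `pi/8` factor comes from the well-known probit approximation to the logistic sigmoid; dropping the moderation (using `plogis(mu)` directly) plugs in the posterior mean only.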
