# Bayesian model choice procedure for the linear model

### Description

This function computes the posterior probabilities of all submodels obtained by eliminating some covariates (when there are fewer than 15 covariates) or of the most probable submodels (when there are more than 15 covariates).

### Usage

```
ModChoBayesReg(y, X, g = length(y), betatilde = rep(0, dim(X)[2]),
               niter = 1e+05, prt = TRUE)
```

### Arguments

| Argument | Description |
| --- | --- |
| `y` | response variable |
| `X` | covariate matrix |
| `g` | constant *g* of the *G*-prior distribution |
| `betatilde` | prior expectation of the regression coefficients |
| `niter` | number of Gibbs iterations when there are more than 15 covariates |
| `prt` | boolean variable controlling printing of the standard output |

### Details

When using a conjugate prior for the linear model such as the *G* prior,
the marginal likelihood and hence the evidence are available in closed form. If the number
of explanatory variables is less than 15, the exact
derivation of the posterior probabilities for all submodels can be undertaken.
Indeed, *2^15=32768* means that the problem remains tractable.
When the number of explanatory variables gets larger, a random exploration of the collection
of submodels becomes necessary, as explained in the book (Chapter 3). The proposal to change
one variable indicator is chosen at random, and the move is accepted or rejected through a
Metropolis–Hastings step.
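As a concrete illustration of the closed-form case, the exhaustive enumeration described above can be sketched as follows. This is a hypothetical Python sketch, not the package's R internals: the helper names `log_marginal` and `model_posterior` are invented here, and the marginal formula assumes `betatilde = 0`, an intercept always kept in the model, and a flat prior over the 2^p submodels.

```python
# Exhaustive Bayesian model choice under Zellner's G prior (sketch).
# Assumptions (not from the package): betatilde = 0, intercept always
# included, flat prior over submodels. Up to a constant shared by all
# submodels, the G-prior marginal likelihood of a submodel keeping
# covariates gamma (k of them) is
#   m(y) ∝ (g+1)^{-(k+1)/2} [ y'y - g/(g+1) * y' P_gamma y ]^{-n/2},
# where P_gamma projects onto the intercept plus the kept covariates.
import itertools
import numpy as np

def log_marginal(y, X, gamma, g):
    """Log G-prior marginal of the submodel keeping columns in `gamma`."""
    n = len(y)
    Xg = np.hstack([np.ones((n, 1))] + [X[:, [j]] for j in gamma])
    k = len(gamma)
    # y' P_gamma y via least squares (avoids an explicit matrix inverse)
    coef, *_ = np.linalg.lstsq(Xg, y, rcond=None)
    yPy = y @ (Xg @ coef)
    rss_term = y @ y - g / (g + 1.0) * yPy
    return -0.5 * (k + 1) * np.log(g + 1.0) - 0.5 * n * np.log(rss_term)

def model_posterior(y, X, g=None):
    """Posterior probabilities of all 2^p submodels (small-p regime)."""
    n, p = X.shape
    g = n if g is None else g  # default g = length(y), as in Usage above
    logms = {}
    for k in range(p + 1):
        for gamma in itertools.combinations(range(p), k):
            logms[gamma] = log_marginal(y, X, gamma, g)
    # normalise on the log scale for numerical stability
    mx = max(logms.values())
    weights = {m: np.exp(v - mx) for m, v in logms.items()}
    total = sum(weights.values())
    return {m: w / total for m, w in weights.items()}
```

With simulated data whose true model uses a strong subset of the covariates, the highest-probability submodel returned by `model_posterior` typically recovers that subset.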

### Value

| Value | Description |
| --- | --- |
| `top10models` | models with the ten largest posterior probabilities |
| `postprobtop10` | posterior probabilities of those ten most likely models |

### Examples

```
data(caterpillar)
y = log(caterpillar$y)
X = as.matrix(caterpillar[, 1:8])
res2 = ModChoBayesReg(y, X)
```
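For the larger-*p* regime described in Details, the random exploration can be sketched as a Metropolis–Hastings chain on the vector of variable-inclusion indicators: propose flipping one indicator chosen uniformly at random and accept with the usual ratio of G-prior marginals. Again this is a hypothetical Python sketch rather than the package's R code; `gibbs_model_search` and its defaults are illustrative, and the same assumptions as before apply (`betatilde = 0`, intercept always kept, flat model prior).

```python
# Metropolis-Hastings exploration of the submodel space (sketch).
# Target: G-prior marginal likelihood with betatilde = 0 and a flat
# prior over submodels; proposal: flip one inclusion indicator.
from collections import Counter
import numpy as np

def log_marginal(y, X, mask, g):
    """Log G-prior marginal for the submodel selected by boolean `mask`."""
    n = len(y)
    Xg = np.hstack([np.ones((n, 1)), X[:, mask]])
    k = int(mask.sum())
    coef, *_ = np.linalg.lstsq(Xg, y, rcond=None)
    yPy = y @ (Xg @ coef)
    return -0.5 * (k + 1) * np.log(g + 1.0) \
           - 0.5 * n * np.log(y @ y - g / (g + 1.0) * yPy)

def gibbs_model_search(y, X, niter=10000, g=None, rng=None):
    """Return up to ten most-visited models with their visit frequencies."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    g = n if g is None else g
    mask = rng.random(p) < 0.5          # random starting submodel
    cur = log_marginal(y, X, mask, g)
    visits = Counter()
    for _ in range(niter):
        j = rng.integers(p)             # propose flipping one indicator
        mask[j] = ~mask[j]
        prop = log_marginal(y, X, mask, g)
        if np.log(rng.random()) < prop - cur:
            cur = prop                  # accept the flip
        else:
            mask[j] = ~mask[j]          # reject: undo the flip
        visits[tuple(int(i) for i in np.flatnonzero(mask))] += 1
    total = sum(visits.values())
    return [(m, c / total) for m, c in visits.most_common(10)]
```

The visit frequencies of the chain estimate the posterior probabilities, which is how quantities analogous to `top10models` and `postprobtop10` can be approximated when exhaustive enumeration is out of reach.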