Sample without replacement from a posterior distribution on models
bas.lm(formula, data, subset, weights, na.action = "na.omit",
n.models = NULL, prior = "ZS-null", alpha = NULL,
modelprior = beta.binomial(1, 1), initprobs = "Uniform", method = "BAS",
update = NULL, bestmodel = NULL, prob.local = 0, prob.rw = 0.5,
MCMC.iterations = NULL, lambda = NULL, delta = 0.025, thin = 1,
renormalize = FALSE)

formula 
linear model formula for the full model with all predictors, Y ~ X. All code assumes that an intercept will be included in each model and that the X's will be centered. 
data 
a data frame. Factors will be converted to numerical vectors using 'model.matrix'. 
subset 
an optional vector specifying a subset of observations to be used in the fitting process. 
weights 
an optional vector of weights to be used in the fitting process. Should be NULL or a numeric vector. If non-NULL, Bayes estimates are obtained assuming that Y ~ N(Xb, sigma^2 diag(1/weights)). 
na.action 
a function which indicates what should happen when the data contain NAs. The default is "na.omit". 
n.models 
number of models to sample either without replacement (method="BAS" or "MCMC+BAS") or with replacement (method="MCMC"). If NULL, BAS with method="BAS" will try to enumerate all 2^p models. If enumeration is not possible (memory or time), then a value should be supplied to control the number of sampled models via 'n.models'. With method="MCMC", sampling stops once min(n.models, MCMC.iterations) unique models have been sampled, so MCMC.iterations should be significantly larger than n.models in order to explore the model space. On exit for method="MCMC" this is the number of unique models that have been sampled, with counts stored in the output as "freq". 
prior 
prior distribution for regression coefficients. Choices include "AIC", "BIC", "g-prior", "ZS-null", "ZS-full", "hyper-g", "hyper-g-n", "EB-local", and "EB-global"; see Details.

alpha 
optional hyperparameter in the g-prior or hyper-g prior. For Zellner's g-prior, alpha = g; for the Liang et al. hyper-g or hyper-g-n methods, the recommended choices for alpha are between 2 and 4, with alpha = 3 the default. For the Zellner-Siow prior, alpha = 1 by default, but it can be used to modify the rate parameter in the gamma prior on g, 1/g ~ G(1/2, n*alpha/2), so that beta ~ C(0, sigma^2 alpha (X'X/n)^(-1)). 
modelprior 
Family of prior distributions on the models. Choices include uniform, Bernoulli, and beta.binomial; the default is beta.binomial(1, 1). 
initprobs 
Vector of length p, or a character string specifying which
method is used to create the vector. This is used to order the variables for
sampling (for all methods, allowing potentially more efficient storage while
sampling) and provides the initial inclusion probabilities used for sampling
without replacement with method="BAS". Options for the character string are:
"Uniform" or "uniform", where each predictor variable is equally likely to be
sampled (equivalent to random sampling without replacement); "eplogp", which
uses the eplogprob function to approximate Bayes factors from p-values of the
full model to obtain initial marginal inclusion probabilities. 
method 
A character variable indicating which sampling method to use: options include "BAS" (sampling without replacement), "MCMC", "MCMC+BAS", and "deterministic"; see Details.
update 
number of iterations between potential updates of the sampling probabilities for method "BAS" or "MCMC+BAS". If NULL, do not update; otherwise the algorithm will update using the marginal inclusion probabilities as they change while sampling takes place. For large model spaces, updating is recommended. If the model space will be enumerated, leave at the default. 
bestmodel 
optional binary vector representing a model to initialize the sampling. If NULL, sampling starts with the null model. 
prob.local 
A future option to allow sampling of models "near" the median probability model. Not used at this time. 
prob.rw 
For any of the MCMC methods, the probability of using the random-walk Metropolis proposal; otherwise a random "flip" move is used to propose swapping a variable that is excluded with a variable included in the model. 
MCMC.iterations 
Number of iterations for the MCMC sampler; the default is n.models*10 if not set by the user. 
lambda 
Parameter in the AMCMC algorithm (deprecated). 
delta 
truncation parameter to prevent sampling probabilities from degenerating to 0 or 1 prior to enumeration for sampling without replacement. 
thin 
For "MCMC", thin the MCMC chain every "thin" iterations; default is no thinning. For large p, thinning can be used to significantly reduce memory requirements as models and associated summaries are saved only every thin iterations. For thin = p, the model and associated output are recorded every p iterations, similar to the Gibbs sampler in SSVS. 
renormalize 
For MCMC sampling, should posterior probabilities be
based on renormalizing the marginal likelihoods times prior probabilities
(TRUE) or on frequencies from the MCMC output (FALSE)? The latter are unbiased
in long runs, while the former may have less variability. The two may be
compared via the diagnostics plot function.
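As a sketch (assuming the BAS and MASS packages are installed), the MCMC-specific arguments above, such as thin and renormalize, might be used together like this:

```r
library(BAS)
library(MASS)   # for the UScrime data used in the Examples below
data(UScrime)

# MCMC sampling with thinning: record models only every 15th iteration
# (thin = p, as with the Gibbs sampler in SSVS) to reduce memory use,
# and base posterior probabilities on MCMC frequencies (renormalize = FALSE)
crime.thin <- bas.lm(log(y) ~ log(M) + So + log(Ed) + log(Po1) + log(Po2) +
                       log(LF) + log(M.F) + log(Pop) + log(NW) + log(U1) +
                       log(U2) + log(GDP) + log(Ineq) + log(Prob) + log(Time),
                     data = UScrime, prior = "BIC", method = "MCMC",
                     MCMC.iterations = 50000, thin = 15,
                     renormalize = FALSE)
```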
BAS provides several algorithms to sample from posterior distributions
of models for use in Bayesian Model Averaging or Bayesian variable selection.
For p less than 20-25, BAS can enumerate all models, depending on memory
availability. As BAS saves all models, MLEs, standard errors, log marginal
likelihoods, and prior and posterior probabilities, memory requirements grow
linearly with M*p, where M is the number of models and p is the number of
predictors. For example, enumeration with p=21 (2,097,152 models) takes just
under 2 gigabytes on a 64-bit machine to store all summaries that would be
needed for model averaging. (A future version will likely include an option to
not store all summaries if users do not plan on using model averaging or model
selection on best predictive models.) For larger p, BAS samples without
replacement using random or deterministic sampling. The Bayesian Adaptive
Sampling algorithm of Clyde, Ghosh, and Littman (2010) samples models without
replacement using the initial sampling probabilities, and will optionally
update the sampling probabilities every "update" models using the estimated
marginal inclusion probabilities. BAS uses different methods to obtain the
initprobs, which may impact the results in high-dimensional problems. The
deterministic sampler provides a list of the top models in order of an
approximation of independence using the provided initprobs. This may be
effective after running the other algorithms to identify high probability
models, and works well if the correlations among variables are small to
modest.
We recommend "MCMC" for problems where enumeration is not feasible (memory or
time constrained), or even for modest p if the number of models sampled is not
close to the number of possible models and/or there are significant
correlations among the predictors, as the bias in estimates of inclusion
probabilities from "BAS" or "MCMC+BAS" may be large relative to the reduced
variability from using the renormalized model probabilities, as shown in Clyde
and Ghosh (2012). Diagnostic plots with MCMC can be used to assess convergence.
For large problems we recommend thinning with MCMC to reduce memory requirements.
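For example, convergence of the MCMC sampler can be checked with the diagnostics function listed under See Also (a sketch, assuming BAS and MASS are installed):

```r
library(BAS)
library(MASS)
data(UScrime)

# Fit with MCMC; with enough iterations, the frequency-based and
# renormalized estimates should agree if the chain has converged
crime.mcmc <- bas.lm(log(y) ~ ., data = UScrime, prior = "BIC",
                     method = "MCMC", MCMC.iterations = 100000)

# Plots renormalized vs. MCMC-frequency estimates; points near the
# 45-degree line suggest the sampler has run long enough
diagnostics(crime.mcmc, type = "pip")    # marginal inclusion probabilities
diagnostics(crime.mcmc, type = "model")  # model probabilities
```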
The priors on coefficients
include Zellner's g-prior, the hyper-g prior (Liang et al. 2008), the
Zellner-Siow Cauchy prior, and Empirical Bayes (local and global) g-priors.
AIC and BIC are also included, and a range of priors on the model space is
available.
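For instance, the same model space can be fit under several of these coefficient priors (a sketch; the prior names follow the choices described above):

```r
library(BAS)
library(MASS)
data(UScrime)
n <- nrow(UScrime)

# Zellner's g-prior with alpha = g = n (a unit-information prior)
fit.g  <- bas.lm(log(y) ~ ., data = UScrime, prior = "g-prior", alpha = n)
# hyper-g prior of Liang et al. (2008) with the recommended alpha = 3
fit.hg <- bas.lm(log(y) ~ ., data = UScrime, prior = "hyper-g", alpha = 3)
# Zellner-Siow Cauchy prior (the default)
fit.zs <- bas.lm(log(y) ~ ., data = UScrime, prior = "ZS-null")
```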
bas.lm returns an object of class bas. An object of class bas
is a list containing at least the following
components:
postprob 
the posterior probabilities of the models selected 
priorprobs 
the prior probabilities of the models selected 
namesx 
the names of the variables 
R2 
R2 values for the models 
logmarg 
values of the log of the marginal likelihood for the models. This is equivalent to the log Bayes factor for comparing each model to a base model with intercept only. 
n.vars 
total number of independent variables in the full model, including the intercept 
size 
the number of independent variables in each of the models, includes the intercept 
which 
a list of lists with one list per model with variables that are included in the model 
probne0 
the posterior probability that each variable is nonzero computed using the renormalized marginal likelihoods of sampled models. This may be biased if the number of sampled models is much smaller than the total number of models. Unbiased estimates may be obtained using method "MCMC". 
mle 
list of lists with one list per model giving the MLE (OLS) estimate of each (nonzero) coefficient for each model. NOTE: The intercept is the mean of Y as each column of X has been centered by subtracting its mean. 
mle.se 
list of lists with one list per model giving the MLE (OLS) standard error of each coefficient for each model 
prior 
the name of the prior that created the BMA object 
alpha 
value of hyperparameter in coefficient prior used to create the BMA object. 
modelprior 
the prior distribution on models that created the BMA object 
Y 
response 
X 
matrix of predictors 
mean.x 
vector of means for each column of X (used in predict.bas)

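The components above can be accessed directly from the returned object, e.g. (a sketch, assuming BAS and MASS are installed):

```r
library(BAS)
library(MASS)
data(UScrime)

fit <- bas.lm(log(y) ~ ., data = UScrime, prior = "BIC")

fit$namesx           # variable names, including the intercept
fit$probne0          # marginal posterior inclusion probabilities
head(fit$postprob)   # posterior probabilities of the sampled models
head(fit$size)       # number of variables (incl. intercept) in each model
```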
The function summary.bas is used to print a summary of the
results. The function plot.bas is used to plot posterior
distributions for the coefficients, and image.bas provides an
image of the distribution over models. Posterior summaries of coefficients
can be extracted using coefficients.bas. Fitted values and
predictions can be obtained using the S3 functions fitted.bas
and predict.bas. BAS objects may be updated to use a
different prior (without rerunning the sampler) using the function
update.bas. For MCMC sampling, the function diagnostics
can be used
to assess whether the MCMC has run long enough so that the posterior probabilities
are stable. For more details see the associated demos and vignette.
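A sketch of this workflow (assuming BAS and MASS are installed):

```r
library(BAS)
library(MASS)
data(UScrime)

fit <- bas.lm(log(y) ~ ., data = UScrime, prior = "BIC")

summary(fit)                      # summary of the top models
coef(fit)                         # posterior summaries of coefficients
head(fitted(fit))                 # fitted values under model averaging
pred <- predict(fit, estimator = "BMA")   # model-averaged predictions

# Reuse the sampled models under a different coefficient prior,
# without rerunning the sampler
fit.zs <- update(fit, newprior = "ZS-null")
```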
Merlise Clyde ([email protected]) and Michael Littman
Clyde, M., Ghosh, J. and Littman, M. (2010) Bayesian Adaptive
Sampling for Variable Selection and Model Averaging. Journal of
Computational and Graphical Statistics, 20, 80-101.
http://dx.doi.org/10.1198/jcgs.2010.09049
Clyde, M. and Ghosh, J. (2012) Finite population estimators in stochastic search variable selection. Biometrika, 99(4), 981-988. http://dx.doi.org/10.1093/biomet/ass040
Clyde, M. and George, E. I. (2004) Model Uncertainty. Statist. Sci., 19,
81-94.
http://dx.doi.org/10.1214/088342304000000035
Clyde, M. (1999) Bayesian Model Averaging and Model Search Strategies (with discussion). In Bayesian Statistics 6. J.M. Bernardo, A.P. Dawid, J.O. Berger, and A.F.M. Smith eds. Oxford University Press, pages 157-185.
Hoeting, J. A., Madigan, D., Raftery, A. E. and Volinsky, C. T. (1999)
Bayesian model averaging: a tutorial (with discussion). Statist. Sci., 14,
382-401.
http://www.stat.washington.edu/www/research/online/hoeting1999.pdf
Liang, F., Paulo, R., Molina, G., Clyde, M. and Berger, J.O. (2008) Mixtures
of g-priors for Bayesian Variable Selection. Journal of the American
Statistical Association, 103, 410-423.
http://dx.doi.org/10.1198/016214507000001337
Zellner, A. (1986) On assessing prior distributions and Bayesian regression analysis with g-prior distributions. In Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti, pp. 233-243. North-Holland/Elsevier.
Zellner, A. and Siow, A. (1980) Posterior odds ratios for selected regression hypotheses. In Bayesian Statistics: Proceedings of the First International Meeting held in Valencia (Spain), pp. 585-603.
Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D. and Iverson, G. (2009) Bayesian t-tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16, 225-237.
Rouder, J. N., Morey, R. D., Speckman, P. L. and Province, J. M. (2012) Default Bayes Factors for ANOVA Designs. Journal of Mathematical Psychology, 56, 356-374.
summary.bas, coefficients.bas, print.bas, predict.bas, fitted.bas,
plot.bas, image.bas, eplogprob, update.bas
Other bas methods: BAS, coef.bas, confint.coef.bas, confint.pred.bas,
diagnostics, fitted.bas, force.heredity.bas, image.bas, predict.basglm,
predict.bas, summary.bas, update.bas
library(MASS)
data(UScrime)
crime.bic = bas.lm(log(y) ~ log(M) + So + log(Ed) +
log(Po1) + log(Po2) +
log(LF) + log(M.F) + log(Pop) + log(NW) +
log(U1) + log(U2) + log(GDP) + log(Ineq) +
log(Prob) + log(Time),
data=UScrime, n.models=2^15, prior="BIC",
modelprior=beta.binomial(1,1),
initprobs= "eplogp")
# use MCMC rather than enumeration
crime.mcmc = bas.lm(log(y) ~ log(M) + So + log(Ed) +
log(Po1) + log(Po2) +
log(LF) + log(M.F) + log(Pop) + log(NW) +
log(U1) + log(U2) + log(GDP) + log(Ineq) +
log(Prob) + log(Time),
data=UScrime,
method="MCMC",
MCMC.iterations=20000, prior="BIC",
modelprior=beta.binomial(1,1),
initprobs= "eplogp")
summary(crime.bic)
plot(crime.bic)
image(crime.bic, subset=1)
# more complete demos
demo(BAS.hald)
## Not run: demo(BAS.USCrime)
