The Group Bayesian Bridge model of Mallick & Yi (2018) adapted to the Shape Adaptive Shrinkage Prior (SASP) of Sillanpää & Mutshinda (2011).
Bridge regression uses different Lp norms for the shape of the prior through the shape parameter kappa of
the power exponential distribution (also known as the generalized Gaussian). Norms of 1 and 2 give the Laplace and Gaussian
distributions respectively (corresponding to the LASSO and ridge regression). Norms smaller than 1 are very difficult to
estimate directly, but have very tall modes at zero and very long, Cauchy-like tails. Values greater than 2 become increasingly
platykurtic, with the uniform distribution arising as kappa approaches infinity.
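The shape behavior described above can be checked numerically. With unit scale, the power exponential density is f(x; kappa) = kappa / (2 Gamma(1/kappa)) exp(-|x|^kappa); kappa = 1 recovers the Laplace(0, 1) density and kappa = 2 a Normal(0, 1/2) density. A minimal sketch (the function name `gg_density` is illustrative, not part of the package):

```python
from math import gamma, exp, sqrt, pi

def gg_density(x, kappa):
    """Power exponential (generalized Gaussian) density with unit scale:
    f(x; kappa) = kappa / (2 * Gamma(1/kappa)) * exp(-|x|^kappa)."""
    return kappa / (2.0 * gamma(1.0 / kappa)) * exp(-abs(x) ** kappa)

# kappa = 1 is Laplace(0, 1), i.e. 0.5 * exp(-|x|)
print(gg_density(0.0, 1.0))                        # 0.5
# kappa = 2 is Normal(0, sigma^2 = 1/2), i.e. exp(-x^2) / sqrt(pi)
print(abs(gg_density(0.0, 2.0) - 1.0 / sqrt(pi)))  # ~0
```

Smaller kappa concentrates mass at zero while fattening the tails, which is why sub-L1 norms shrink aggressively yet leave large effects nearly unpenalized.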
The benefit of the shape adaptive shrinkage prior is that one need not pick a specific norm. If there is uncertainty over
whether to choose the L1 norm (LASSO) or the L2 norm (ridge), the SASP integrates over a reasonable range of values. The gamma
prior on the norm has an expected value of 1.4, a reasonable compromise between the LASSO and ridge.
JAGS has no built-in power exponential distribution, so the distribution is parameterized as a uniform-gamma mixture, just as in Mallick & Yi (2018).
The parameterization is given below. For generalized linear models, plug-in pseudo-variances are used.
Model Specification:
Plugin Pseudo-Variances:
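The key identity behind the uniform-gamma parameterization is the scale-mixture-of-uniforms representation: if u ~ Gamma(1 + 1/kappa, 1) and beta | u ~ Uniform(-u^(1/kappa), u^(1/kappa)), then marginally beta has the power exponential density proportional to exp(-|beta|^kappa). A Monte Carlo sketch of this identity (variable names are illustrative; this is not the package's JAGS code):

```python
import random, math

def sample_bridge(kappa, n, seed=1):
    """Draw beta from the uniform-gamma mixture:
    u ~ Gamma(shape = 1 + 1/kappa, rate = 1),
    beta | u ~ Uniform(-u^(1/kappa), u^(1/kappa))."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        u = rng.gammavariate(1.0 + 1.0 / kappa, 1.0)  # (shape, scale); scale 1 = rate 1
        half_width = u ** (1.0 / kappa)
        draws.append(rng.uniform(-half_width, half_width))
    return draws

# With kappa = 1 the marginal should be Laplace(0, 1):
betas = sample_bridge(kappa=1.0, n=100_000)
p_inside = sum(abs(b) <= 1.0 for b in betas) / len(betas)
print(p_inside)  # Laplace(0, 1) gives P(|beta| <= 1) = 1 - exp(-1), about 0.63
```

Because both the uniform and the gamma are native JAGS distributions, this mixture lets the sampler target the power exponential prior without it being built in.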
Usage:

groupSasp(X, y, idx, family = "gaussian", log_lik = FALSE,
          iter = 10000, warmup = 1000, adapt = 2000, chains = 4,
          thin = 1, method = "parallel", cl = makeCluster(2), ...)
Arguments:

X: the model matrix. Construct this manually with model.matrix()[,-1].

y: the outcome variable.

idx: the group labels. Should be of length equal to ncol(model.matrix()[,-1]), giving the group assignment for each covariate. Please ensure that numbering starts at 1, not 0.

family: one of "gaussian", "binomial", or "poisson".

log_lik: should the log-likelihood be monitored? The default is FALSE.

iter: how many post-warmup samples? Defaults to 10000.

warmup: how many warmup samples? Defaults to 1000.

adapt: how many adaptation steps? Defaults to 2000.

chains: how many chains? Defaults to 4.

thin: thinning interval. Defaults to 1.

method: defaults to "parallel". For an alternative parallel option, choose "rjparallel"; otherwise, "rjags" (single-core run).

cl: use parallel::makeCluster() to specify clusters for the parallel methods. Defaults to two cores.

...: other arguments passed to run.jags.
Value:

A runjags object.
References:

Kyung, M., Gill, J., Ghosh, M., & Casella, G. (2010). Penalized regression, standard errors, and Bayesian lassos. Bayesian Analysis, 5(2), 369-411.

Mallick, H., & Yi, N. (2018). Bayesian bridge regression. Journal of Applied Statistics, 45(6), 988-1008. doi:10.1080/02664763.2017.1324565

Mallick, H., & Yi, N. (2014). A new Bayesian lasso. Statistics and Its Interface, 7(4), 571-582. doi:10.4310/SII.2014.v7.n4.a12

Sillanpää, S., & Mutshinda, C. (2011). Bayesian shrinkage analysis of QTLs under shape-adaptive shrinkage priors, and accurate re-estimation of genetic effects. Heredity, 107, 405-412. doi:10.1038/hdy.2011.37