This is the extended Bayesian LASSO presented by
Crispin M. Mutshinda and Mikko J. Sillanpää (2010), which is an improvement on the Bayesian LASSO
of Park & Casella (2008). Two variants are provided here.
The first version of the model is the original specification in Mutshinda and Sillanpää (2010), labeled
"classic" in the options here. This requires you to choose upper limits for the uniform priors on both
the top-level shrinkage hyperparameter and the local shrinkage parameters. These can be tuned through
model comparison if necessary.
Model Specification (classic):
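The original specification image is not reproduced in this extract; assuming the double-exponential (Laplace) coefficient prior of the extended Bayesian LASSO, with global shrinkage rate lambda and local rates eta_j, the classic variant's shrinkage layer can be sketched as:

```latex
\beta_j \mid \lambda, \eta_j \sim \mathrm{Laplace}\!\left(0,\ \lambda\,\eta_j\right),
\qquad
\lambda \sim \mathrm{Uniform}(0,\ \mathit{top\_u}),
\qquad
\eta_j \sim \mathrm{Uniform}(0,\ \mathit{local\_u}),
\quad j = 1, \dots, p
```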
The second version is the "gamma" prior. This places a gamma(0.50, 0.20) prior on the
top-level shrinkage hyperparameter. The individual shrinkage parameters are still given independent uniform(0, local_u)
priors, just as in the classic version.
Model Specification (gamma):
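As with the classic variant, the original image is not reproduced here; a sketch under the same Laplace-prior assumption, with only the top-level prior changed, is:

```latex
\beta_j \mid \lambda, \eta_j \sim \mathrm{Laplace}\!\left(0,\ \lambda\,\eta_j\right),
\qquad
\lambda \sim \mathrm{Gamma}(0.50,\ 0.20),
\qquad
\eta_j \sim \mathrm{Uniform}(0,\ \mathit{local\_u}),
\quad j = 1, \dots, p
```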
The author of the extended Bayesian LASSO (Sillanpää, personal communication) confirmed that a gamma prior on the
top-level shrinkage parameter
does work well in some settings, which is why I opted to include the "gamma" variant.
One really nice feature of the extended Bayesian LASSO is that inclusion probabilities and Bayes factors for the
coefficients are easily obtained. The prior inclusion probability is given by 1/local_u, so, for example,
uniform(0, 2) priors on the shrinkage parameters indicate a 50% prior inclusion
probability. Common in Bayesian variable selection is to use a 20% prior inclusion probability when
dealing with a high dimensional problem, so for this choose local_u = 5. If you have genuine prior
information you can and should use this to guide your choice. If you are unsure, use model comparison
to select which value of local_u to choose. Inclusion indicators are given by a step function based on the
marginal individual shrinkage parameter, delta = step(1 - eta). Inclusion probabilities are then given as the number of
1s that appear in the vector of Monte Carlo samples out of the total number of iterations. This will appear as
the mean for each inclusion indicator in the summary.
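The 1/local_u relationship and the step-function indicator can be checked with a small R sketch (the helper function and the draws are illustrative, not package output):

```r
# Prior inclusion probability implied by the uniform(0, local_u)
# prior on each local shrinkage parameter (hypothetical helper,
# not part of the package).
prior_inclusion <- function(local_u) 1 / local_u

p2 <- prior_inclusion(2)  # uniform(0, 2) -> 0.5, i.e. 50%
p5 <- prior_inclusion(5)  # uniform(0, 5) -> 0.2, i.e. 20%

# Posterior inclusion probability from MCMC draws of one eta_j:
# delta = step(1 - eta) equals 1 whenever eta <= 1, so the mean of
# delta is the proportion of draws with eta <= 1.
eta_draws <- c(0.3, 0.8, 1.4, 0.6, 2.1)  # made-up draws for illustration
post_p <- mean(eta_draws <= 1)           # 3 of 5 draws -> 0.6
```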
Bayes factors for each coefficient can then be manually derived from the inclusion probabilities (Mutshinda & Sillanpää, 2010; 2012).
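The formula itself is not reproduced in this extract; the standard posterior-odds over prior-odds form from the cited papers can be sketched in R (the helper function and numbers are illustrative, not exported by the package):

```r
# Posterior-odds / prior-odds Bayes factor for one coefficient,
# following the decision rule of Mutshinda & Sillanpaa (2012).
# post_p is the posterior inclusion probability (the posterior mean
# of the inclusion indicator); 1/local_u is the prior inclusion
# probability. Hypothetical helper, not a package function.
bayes_factor <- function(post_p, local_u) {
  prior_p <- 1 / local_u
  (post_p / (1 - post_p)) / (prior_p / (1 - prior_p))
}

# With local_u = 2 the prior odds are 1, so the Bayes factor equals
# the posterior odds: post_p = 0.9 gives BF = 9.
bf <- bayes_factor(0.9, local_u = 2)
```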
extLASSODC(formula, design.formula, data, family = "gaussian",
  eta_prior = "classic", local_u = 2, top_u = 25, log_lik = FALSE,
  iter = 10000, warmup = 10000, adapt = 15000, chains = 4,
  thin = 3, method = "parallel", cl = makeCluster(2), ...)
formula: the model formula.
design.formula: formula for the design covariates.
data: a data frame.
family: one of "gaussian", "binomial", or "poisson".
eta_prior: one of "classic" (the default) or "gamma".
local_u: upper limit of the uniform priors on the local shrinkage parameters. This must be assigned a value. Defaults to 2.
top_u: upper limit of the uniform prior on the top-level shrinkage hyperparameter. If using eta_prior = "classic", this must be assigned a value. Defaults to 25.
log_lik: should the log likelihood be monitored? The default is FALSE.
iter: how many post-warmup samples? Defaults to 10000.
warmup: how many warmup samples? Defaults to 10000.
adapt: how many adaptation steps? Defaults to 15000.
chains: how many chains? Defaults to 4.
thin: thinning interval. Defaults to 3.
method: defaults to "parallel". For an alternative parallel option, choose "rjparallel". Otherwise, choose "rjags" (single-core run).
cl: use parallel::makeCluster(# clusters) to specify clusters for the parallel methods. Defaults to two cores.
...: other arguments to run.jags.
A run.jags object
Li, Z.,and Sillanpää, M. J. (2012) Overview of LASSO-related penalized regression methods for quantitative trait mapping and genomic selection. Theoretical and Applied Genetics 125: 419-435.
Mutshinda, C. M., and M. J. Sillanpää (2010) Extended Bayesian LASSO for multiple quantitative trait loci mapping and unobserved phenotype prediction. Genetics 186(3): 1067-1075.
Mutshinda, C. M., and M. J. Sillanpää (2012) A decision rule for quantitative trait locus detection under the extended Bayesian LASSO model. Genetics 192: 1483-1491.