glmBayes: Bayesian GLMs


View source: R/glmBayes.R

Description

This model uses Student-t priors for the coefficients. The prior is parameterized as a scale mixture of Gaussians: a gamma prior is placed on the precision of the normal, with shape and rate parameters equal to the desired degrees of freedom divided by two (the rate parameter can also be multiplied by a variance to give the resulting Student-t distribution the appropriately scaled precision).

By default, the degrees of freedom are set to 1, which yields a Cauchy prior. The default prior standard deviation is also 1. While JAGS has a built-in Student-t distribution, its samplers can be quite inefficient when sampling from Student-t distributions directly. Parameterizing the Student-t distribution this way lets JAGS exploit normal-gamma conjugacy with Gibbs sampling, so it samples quickly and accurately.
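The construction can be checked numerically in base R. The sketch below (illustrative only, not package code) draws the precision from the gamma prior and the coefficient from the conditional normal, then compares the result to direct scaled Student-t draws:

```r
## Sketch of the scale-mixture-of-Gaussians construction. Drawing the
## precision omega from a Gamma(df/2, (df/2) * s^2) prior and then
## beta ~ Normal(0, 1/omega) yields marginal draws from a Student-t
## with `df` degrees of freedom and scale `s`.
set.seed(1)
n  <- 2e5
df <- 3   # degrees of freedom
s  <- 2   # prior scale, on the standard deviation scale

omega <- rgamma(n, shape = df / 2, rate = (df / 2) * s^2)  # precision
beta  <- rnorm(n, mean = 0, sd = 1 / sqrt(omega))          # scale mixture

direct <- s * rt(n, df = df)  # direct scaled Student-t draws

## The two samples agree in distribution, e.g. in their upper quartiles:
quantile(beta,   0.75)  # close to s * qt(0.75, df)
quantile(direct, 0.75)
```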

The default prior settings assume your data are standardized to mean zero and standard deviation of 1.

For an alternative prior that also performs very well see apcGlm, which when lambda = -1 gives the Zellner-Siow Cauchy g-prior.

The full model structure is given below (a plain-text sketch based on the parameterization described above; the originally rendered formula is not preserved in this text version):

beta | omega ~ Normal(0, 1 / omega)
omega ~ Gamma(df / 2, (df / 2) * s^2)
y ~ likelihood of the chosen family, with linear predictor X * beta

The relationship between the degrees of freedom and the prior precision (the inverse of the squared prior standard deviation) is represented in the figure below. Essentially, if the precision is sufficiently small (equivalently, if the prior s.d. is large), there is essentially no shrinkage regardless of the degrees of freedom. Increasing the degrees of freedom also leads to greater regularization. However, the pattern of regularization differs across values, which is not represented in the figure.

Supposing the precision is held constant at 1, degrees of freedom < 8 will tend to shrink smaller coefficients more strongly, while larger coefficients are affected less due to the long tails. As the degrees of freedom increase, the Student-t distribution takes on an increasingly Gaussian shape and the tails are pulled in, giving more uniform shrinkage (at that point it is effectively a ridge regression). If the precision is increased, the contrast between small and large coefficients becomes even greater for small degrees of freedom, while higher degrees of freedom approach a highly regularized ridge solution.

Note that the figure below is for conceptual illustrative purposes, and does not correspond to an exact mathematical function.
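The shrinkage pattern described above can also be sketched numerically. For a unit-scale Student-t prior, the gradient of the negative log density (the strength of the prior's pull toward zero) works out to (df + 1) * b / (df + b^2); the helper below is derived here purely for illustration and is not part of the package:

```r
## Pull toward zero exerted by a unit-scale Student-t prior on a
## coefficient b: the gradient of its negative log density.
pull <- function(b, df) (df + 1) * b / (df + b^2)

## Small coefficient: low df shrinks it harder than high df
pull(0.5, df = 1)   # 0.8
pull(0.5, df = 30)  # ~0.51

## Large coefficient: the Cauchy (df = 1) pull has faded due to the
## long tails, while high df keeps pulling roughly like ridge
pull(5, df = 1)     # ~0.38
pull(5, df = 30)    # ~2.82
```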

Usage

glmBayes(formula, data, family = "gaussian", s = 1, df = 1,
  log_lik = FALSE, iter = 10000, warmup = 1000, adapt = 2000,
  chains = 4, thin = 1, method = "parallel", cl = makeCluster(2),
  ...)

Arguments

formula

the model formula

data

a data frame

family

one of "gaussian", "st" (Student-t with nu=3), "binomial", or "poisson".

s

The desired prior scale. Defaults to 1. It is automatically squared within the model, so supply a value here on the standard deviation scale.

df

degrees of freedom for the prior. Defaults to 1 (a Cauchy prior).

log_lik

Should the log likelihood be monitored? The default is FALSE.

iter

How many post-warmup samples? Defaults to 10000.

warmup

How many warmup samples? Defaults to 1000.

adapt

How many adaptation steps? Defaults to 2000.

chains

How many chains? Defaults to 4.

thin

Thinning interval. Defaults to 1.

method

Defaults to "parallel". For an alternative parallel option, choose "rjparallel". Otherwise, choose "rjags" for a single-core run.

cl

Use parallel::makeCluster(# clusters) to specify clusters for the parallel methods. Defaults to two cores.

...

Other arguments to run.jags.

Value

A run.jags object

Examples

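A minimal illustrative call, assuming JAGS, runjags, and the Bayezilla package are installed; the data set and formula here are placeholders rather than part of the original documentation:

```r
## Not run: requires JAGS, runjags, and the Bayezilla package.
## Standardize the data first, since the default priors assume
## mean-zero, unit-sd variables (mtcars is used purely as a placeholder).
dat <- as.data.frame(scale(mtcars))

fit <- glmBayes(mpg ~ hp + wt, data = dat, family = "gaussian",
                s = 1, df = 1, chains = 4)
summary(fit)
## End(Not run)
```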

abnormally-distributed/Bayezilla documentation built on Oct. 31, 2019, 1:57 a.m.