The framework of the model is a Bayesian generalized linear model of the form
E(y) = h(Xβ),
where h is the response function. Further, a prior of the form
β ~ N(m, M) is specified.
Available distributions and associated link functions are:
"bernoulli" - logit link
"normal" - identity link
"poisson" - log link
The function executes the Metropolis-Hastings algorithm: a normal proposal density, based on prior information and the current β, is generated (and updated) by means of iteratively weighted least squares. A random sample from the proposal density is then evaluated, relative to the previous β, according to the log-likelihood, the log-prior distribution and the log of the conditional proposal density. If the new value performs well enough, it has a higher chance of being stored in the chain of betas. If it is rejected, the current β is stored in the chain and the algorithm proceeds to the next iteration.
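As a rough sketch, a single accept/reject decision on the log scale can be written as follows; the function name mh_step and the helpers loglik, logprior and logproposal are placeholders for illustration, not the package's internals:

# One Metropolis-Hastings accept/reject step on the log scale (illustrative sketch).
# logproposal(a, b) stands for the log density of a under the proposal built around b.
mh_step <- function(beta_curr, beta_prop, loglik, logprior, logproposal) {
  log_alpha <- loglik(beta_prop) + logprior(beta_prop) + logproposal(beta_curr, beta_prop) -
    loglik(beta_curr) - logprior(beta_curr) - logproposal(beta_prop, beta_curr)
  if (log(runif(1)) < log_alpha) beta_prop else beta_curr  # keep or discard the proposal
}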
formula
an object of class formula, or one that can be coerced to a formula.
data
a data frame containing the variables used in formula.
dist
a character string, either "bernoulli", "normal" or "poisson".
beta_start
a vector of appropriate length specifying the starting values for the regression parameter β in the Metropolis-Hastings algorithm.
a0
a scalar, the starting "shape" parameter of the inverse gamma (IG) distribution used by the Gibbs sampler for the variance.
b0
a scalar, the starting "rate" parameter of the IG distribution.
number_it
the number n of iterations of the Metropolis-Hastings algorithm.
m
a numeric vector of the same length as β, or a single number.
M
a numeric, symmetric, nonsingular square matrix whose dimension equals the number of regression parameters of interest.
thinning_lag
a positive integer (> 0).
burnin
an integer indicating how many samples are cut off at the beginning of the chain; the default is 500.
notify
If
dist is specified by the user as "bernoulli", "normal" or "poisson", according to the assumptions about the response variable. If dist = "normal", a hybrid algorithm is executed, where samples for σ^2 are obtained with a Gibbs sampler updating the full conditional of a conjugate inverse gamma prior.
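For illustration, the standard conjugate full conditional draw for σ^2 under an IG(a0, b0) prior can be sketched as below; this shows the general technique, not necessarily the package's exact implementation, and draw_sigma2 is a hypothetical name:

# Gibbs update for sigma^2 under a conjugate IG(a0, b0) prior (sketch)
draw_sigma2 <- function(y, X, beta, a0, b0) {
  resid <- y - X %*% beta
  shape <- a0 + length(y) / 2                # posterior shape
  rate  <- b0 + 0.5 * sum(resid^2)           # posterior rate
  1 / rgamma(1, shape = shape, rate = rate)  # sigma^2 as reciprocal of a gamma draw
}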
By default, beta_start is set to the maximum likelihood estimates of the regression model as computed by glm().

The starting values a0 and b0 are only needed if dist = "normal", as there the variance cannot be expressed in terms of the expected value, unlike in the other two distributions, which have only one dependent parameter.
Note that it might be necessary to increase the number of iterations fixed by number_it if a larger burn-in phase is selected or thinning_lag is specified, in order to obtain a sufficient number of samples from the posterior. The remaining number of samples is calculated as n = (number_it - burnin) / thinning_lag.
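For example, with number_it = 10000, burnin = 100 and thinning_lag = 10 (as in the first example below), the returned chain contains (10000 - 100) / 10 = 990 samples.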
The prior distribution of β has expectation m, with beta_start as the default, and covariance matrix M, with the identity matrix as the default.
thinning_lag specifies the interval at which the obtained chain is thinned out. If thinning_lag = 1, every element of the chain is returned; if thinning_lag = 10, only every 10th element is returned, and so on.
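The combined effect of burnin and thinning_lag can be pictured as indexing the raw chain; the snippet below is only an illustrative sketch, not the package's internal code:

# Keeping every thinning_lag-th draw after discarding the burn-in (sketch)
number_it <- 10000; burnin <- 100; thinning_lag <- 10
raw_chain <- matrix(rnorm(number_it * 3), ncol = 3)    # stand-in for 10000 sampled betas
kept <- seq(burnin + 1, number_it, by = thinning_lag)  # indices of retained iterations
thinned_chain <- raw_chain[kept, ]                     # 990 rows remain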
A list of class frequentistkiller with the following elements:

chain: a vector (if univariate without intercept) or a data frame with the sampled values for each element of β in the respective column.

The values of all arguments of the function call, as specified by the user.

The information can be displayed via summary().
## Not run:
## Normal distribution
# all function arguments explicit
data("PlantGrowth")
mh1 <- frequentistkiller(weight ~ group, data = PlantGrowth, dist = "normal",
number_it = 10000, beta_start = c(3, 0, 0),
m = c(1, 0, 0), a0 = 0.001, b0 = 0.001,
M = 10*diag(c(1,1,1)), thinning_lag = 10,
burnin = 100)
summary(mh1)
matplot(mh1$chain, type = "l", col = seq(1,4), ylim = c(-1, 6),
main = "Traceplot")
legend("center", legend = colnames(mh1$chain), col = seq(1,4),lty = 1)
## poisson distributed data
# default parameters
set.seed(42)
n <- 100
x <- rnorm(n)
beta <- c(4, 1.2)
lambda <- exp(beta[1] + x*beta[2])
(y <- rpois(n, lambda))
mh2 <- frequentistkiller(y ~ x, dist = "poisson", number_it = 2000)
summary(mh2)
## Bernoulli distributed data
n <- 100
a <- runif(n)
beta <- c(-1, 2)              # illustrative true coefficients
eta <- beta[1] + a * beta[2]  # linear predictor
df3 <- data.frame(y = rbinom(n, 1, exp(eta)/(1 + exp(eta))), x = a)
mh3 <- frequentistkiller(y ~ x, df3, dist = "bernoulli", number_it = 1000,
burnin = 0)
summary(mh3)
## End(Not run)