Description

Posterior distribution, and related quantities, for two nonlinear regression models, each based on a different dataset. For each dataset, two error distributions are considered: normal and Student's t. The functions ending in "lub" refer to the Lubricant dataset; those ending in "bod2" refer to the BOD2 dataset. The functions with an inner "T" refer to the model with Student's t error distribution. The functions starting with "nlpost" give the negative log-posterior density, those starting with "grad" give the gradient of the negative log-posterior, and those starting with "hess" provide its Hessian matrix.
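Analytic gradients such as those returned by the grad_* functions are typically validated against finite differences of the corresponding objective. The sketch below illustrates the check on a simple stand-in objective (a hypothetical quadratic, not one of the package's nlpost_* functions, so that it runs on its own); the same pattern applies with, e.g., nlpost_bod2 and grad_bod2.

```r
# Hypothetical stand-in for a negative log-posterior: a simple quadratic.
f <- function(p) 0.5 * sum(p^2) + sum(p)   # objective
f.gr <- function(p) p + 1                  # its analytic gradient

# Central finite-difference approximation of the gradient
num.grad <- function(f, p, h = 1e-6) {
  sapply(seq_along(p), function(i) {
    e <- rep(0, length(p)); e[i] <- h
    (f(p + e) - f(p - e)) / (2 * h)
  })
}

p0 <- c(0.3, -1.2, 2.5)
max(abs(f.gr(p0) - num.grad(f, p0)))  # small: analytic and numeric agree
```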
Usage

nlpost_lub(beta, lsig, y, x1, x2, n, muBeta, SigBeta, sigScale)
nlpostT_lub(beta, lsig, lnu, y, x1, x2, n, muBeta, SigBeta, sigScale)
grad_lub(beta, lsig, y, x1, x2, n, muBeta, SigBeta, sigScale)
gradT_lub(beta, lsig, lnu, y, x1, x2, n, muBeta, SigBeta, sigScale)
hess_lub(beta, lsig, y, x1, x2, n, muBeta, SigBeta, sigScale)
hessT_lub(beta, lsig, lnu, y, x1, x2, n, muBeta, SigBeta, sigScale)
nlpost_bod2(beta, lsig, y, Time, n, muBeta, SigBeta, sigScale)
nlpostT_bod2(beta, lsig, lnu, y, Time, n, muBeta, SigBeta, sigScale)
grad_bod2(beta, lsig, y, Time, n, muBeta, SigBeta, sigScale)
gradT_bod2(beta, lsig, lnu, y, Time, n, muBeta, SigBeta, sigScale)
hess_bod2(beta, lsig, y, Time, n, muBeta, SigBeta, sigScale)
hessT_bod2(beta, lsig, lnu, y, Time, n, muBeta, SigBeta, sigScale)
Arguments

beta: The vector of regression parameters. For the Lubricant dataset beta is a 9-variate vector; for the BOD2 data beta is a bivariate vector.
lsig: The logarithm of the scale parameter.
y: The response vector.
x1: For the Lubricant data only: the temperature of the lubricant.
x2: For the Lubricant data only: the pressure of the lubricant.
n: The sample size.
muBeta: The center of the prior for beta. See Details.
SigBeta: The scale matrix of the prior for beta.
sigScale: The value of the scale of the prior for the scale parameter.
lnu: For the models with Student's t errors only: the logarithm of the degrees of freedom.
Time: For the BOD2 data only: the time.
Details

The prior for beta is the multivariate Student's t of the appropriate dimension, with location muBeta, scale matrix SigBeta and 5 degrees of freedom. The prior for the scale parameter is the half-Cauchy distribution with scale sigScale, and the prior for the degrees of freedom is the Jeffreys' rule prior of Fonseca et al. (2008).
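The two log prior densities described above can be written down directly. The following sketch (an illustration of the stated priors, not the package's internal code) evaluates the multivariate Student's t log-density with 5 degrees of freedom for beta and the half-Cauchy log-density for the scale, assuming SigBeta is a valid (positive-definite) scale matrix:

```r
# Log-density of the multivariate Student's t prior for beta
# (location mu, scale matrix Sig, df degrees of freedom)
ldmvt <- function(x, mu, Sig, df = 5) {
  d <- length(x)
  delta <- crossprod(x - mu, solve(Sig, x - mu))  # quadratic form
  as.numeric(lgamma((df + d) / 2) - lgamma(df / 2) -
    0.5 * d * log(df * pi) - 0.5 * determinant(Sig)$modulus -
    0.5 * (df + d) * log(1 + delta / df))
}

# Log-density of the half-Cauchy prior for the scale (sigma > 0)
ldhalfcauchy <- function(sigma, scale) {
  log(2) - log(pi * scale) - log1p((sigma / scale)^2)
}

# Example evaluation with BOD2-style prior settings
muBeta <- rep(0, 2); SigBeta <- diag(10, 2)
ldmvt(c(1, -1), muBeta, SigBeta)   # log prior density at beta = (1, -1)
ldhalfcauchy(2, scale = 10)        # log prior density at sigma = 2
```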
Value

nlpost_lub, nlpostT_lub: double
grad_lub: 10-variate vector
gradT_lub: 11-variate vector
hess_lub: 10 by 10 matrix
hessT_lub: 11 by 11 matrix
nlpost_bod2, nlpostT_bod2: double
grad_bod2: 3-variate vector
gradT_bod2: 4-variate vector
hess_bod2: 3 by 3 matrix
hessT_bod2: 4 by 4 matrix
References

Fonseca T. C., Ferreira M. A. R. & Migon H. S. (2008) Objective Bayesian analysis for the Student-t regression model. Biometrika 95, 325–333.

Ruli E., Sartori N. & Ventura L. (2015) Improved Laplace approximation for marginal likelihoods. http://arxiv.org/abs/1502.06440
Examples

## Not run:
library(iLaplace)
## regression models with BOD2 data
# set the data
data(BOD2)
y = BOD2$demand
Time = BOD2$Time
n = length(y)
muBeta = rep(0, 2)
SigBeta = matrix(0, 2,2)
diag(SigBeta) = 10
sigScale = 10
dfprop = 3
## Model with normal errors
# the objective function and derivatives
ff = function(pp, ...) nlpost_bod2(beta = pp[1:2], lsig = pp[3], ...)
ff.gr = function(pp, ...) grad_bod2(beta = pp[1:2], lsig = pp[3], ...)
ff.hess = function(pp, ...) hess_bod2(beta = pp[1:2], lsig = pp[3], ...)
# find the mode
init <- c(5, 2, 0.5)
opt.post = nlminb(init, obj = ff, gradient = ff.gr, hessian = ff.hess,
y = y, Time = Time, n = n, muBeta = muBeta,
SigBeta = SigBeta, sigScale = sigScale, control = list(trace = 1))
opt.post$hessian = ff.hess(opt.post$par, y = y, Time = Time, n = n, muBeta = muBeta,
SigBeta = SigBeta, sigScale = sigScale)
# simulate from the posterior
mcmc.post = MHmcmc(logfun = function(x) -ff(x, y = y, Time = Time, n = n, muBeta = muBeta,
SigBeta = SigBeta, sigScale = sigScale),
burnin = 20000, mcmc = 1e+6,
thin = 5, tune = 0.98, V = solve(opt.post$hessian),
df = dfprop, theta.init = init, verbose = 50000)
# marginal likelihood by Chib's method
logM.Chib <- ChibML(logfun = function(x) -ff(x, y = y, Time = Time, n = n,
muBeta = muBeta, SigBeta = SigBeta,
sigScale = sigScale),
theta.star = opt.post$par,
tune = 1.5, V = solve(opt.post$hessian), t(mcmc.post),
df = dfprop, verbose = 100000)
# marginal likelihood by the improved Laplace
logM.iLap <- iLaplace(fullOpt = opt.post, ff = ff, ff.gr = ff.gr, ff.hess = ff.hess,
control = list(sp.points = 100, delta = 13, n.cores = 3),
clEvalQ = c("iLaplaceExamples"), y = y, Time = Time, n = n,
muBeta = muBeta, SigBeta = SigBeta,
sigScale = sigScale)
logM.Lap <- 3*log(2*pi)/2 - opt.post$objective - 0.5*determinant(opt.post$hessian)$mod
# marginal likelihood by importance sampling
logM.IS <- isML(logfun = function(x) -ff(x, y = y, Time = Time, n = n,
muBeta = muBeta, SigBeta = SigBeta,
sigScale = sigScale), nsim = 1e+6,
theta.hat = opt.post$par, tune = 1.5,
V = solve(opt.post$hessian),df = dfprop, verbose = 1000)
logM.iLap
logM.Lap
logM.IS
logM.Chib
## Model with Student's t errors
# the objective function and derivatives
tff = function(pp, ...) nlpostT_bod2(beta = pp[1:2], lsig = pp[3], lnu = pp[4], ...)
tff.gr = function(pp, ...) gradT_bod2(beta = pp[1:2], lsig = pp[3], lnu = pp[4], ...)
tff.hess = function(pp, ...) hessT_bod2(beta = pp[1:2], lsig = pp[3], lnu = pp[4], ...)
# find the mode
init <- c(2, 5, 0.5, 0)
opt.postt = nlminb(init, obj = tff, gradient = tff.gr, hessian = tff.hess,
y = y, Time = Time, n = n, muBeta = muBeta,
SigBeta = SigBeta, sigScale = sigScale, control = list(trace = 1))
opt.postt$hessian = tff.hess(opt.postt$par, y = y, Time = Time, n = n, muBeta = muBeta,
SigBeta = SigBeta, sigScale = sigScale)
# simulate from the posterior
mcmc.postt = MHmcmc(logfun = function(x) -tff(x, y = y, Time = Time, n = n, muBeta = muBeta,
SigBeta = SigBeta, sigScale = sigScale),
burnin = 20000, mcmc = 1e+6,
thin = 5, tune = 1.2, V = solve(opt.postt$hessian),
df = dfprop, theta.init = opt.postt$par, verbose = 50000)
logM.Lapt <- 4*log(2*pi)/2 - opt.postt$objective - 0.5*determinant(opt.postt$hessian)$mod
# marginal likelihood by Chib's method
logM.Chibt <- ChibML(logfun = function(x) -tff(x, y = y, Time = Time, n = n,
muBeta = muBeta, SigBeta = SigBeta,
sigScale = sigScale),
theta.star = opt.postt$par,
tune = 1.5, V = solve(opt.postt$hessian), t(mcmc.postt),
df = dfprop, verbose = 100000)
logM.iLapt <- iLaplace(fullOpt = opt.postt, ff = tff, ff.gr = tff.gr, ff.hess = tff.hess,
control = list(sp.points = 100, delta = 9, n.cores = 4),
clEvalQ = c("iLaplaceExamples"), y = y, Time = Time, n = n,
muBeta = muBeta, SigBeta = SigBeta,
sigScale = sigScale)
logM.ISt <- isML(logfun = function(x) -tff(x, y = y, Time = Time, n = n,
muBeta = muBeta, SigBeta = SigBeta,
sigScale = sigScale), nsim = 1e+6,
theta.hat = opt.postt$par, tune = 1.5,
V = solve(opt.postt$hessian),df = dfprop, verbose = 10000)
logM.iLapt
logM.Lapt
logM.ISt
logM.Chibt
## End(Not run)
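The logM.Lap and logM.Lapt lines in the example apply the standard Laplace approximation, log m ≈ (d/2) log(2π) − h(θ̂) − (1/2) log|H(θ̂)|, where h denotes the negative unnormalized log-posterior (the role played by nlpost_* above), θ̂ its mode, H the Hessian at the mode and d the number of parameters. A self-contained sketch on a toy conjugate Gaussian model, where the log marginal likelihood is available in closed form and the approximation is exact because h is quadratic:

```r
# Toy model: y_i ~ N(theta, 1), prior theta ~ N(0, 1).
set.seed(1)
y <- rnorm(10, mean = 1)
n <- length(y)

# Negative unnormalized log-posterior (negative log-likelihood + log-prior),
# analogous to the nlpost_* objectives above
h <- function(theta) 0.5 * sum((y - theta)^2) + 0.5 * n * log(2 * pi) +
  0.5 * theta^2 + 0.5 * log(2 * pi)

opt <- nlminb(0, h)
hess <- n + 1  # second derivative of h, constant for this quadratic objective

# Laplace approximation with d = 1 parameter
logM.Lap <- 0.5 * log(2 * pi) - opt$objective - 0.5 * log(hess)

# Exact log marginal likelihood: y ~ N(0, I + J), |I + J| = 1 + n,
# (I + J)^{-1} = I - J/(1 + n), with J the all-ones matrix
logM.exact <- -0.5 * n * log(2 * pi) - 0.5 * log(1 + n) -
  0.5 * (sum(y^2) - sum(y)^2 / (1 + n))

c(logM.Lap, logM.exact)  # the two values coincide here
```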