bayesGARCH: Bayesian Estimation of the GARCH(1,1) Model with Student-t Innovations

Description
Performs the Bayesian estimation of the GARCH(1,1) model with Student-t innovations.
Usage

bayesGARCH(y, mu.alpha = c(0,0), Sigma.alpha = 1000 * diag(1,2),
           mu.beta = 0, Sigma.beta = 1000,
           lambda = 0.01, delta = 2, control = list())
Arguments

y: vector of observations of size T.

mu.alpha: hyper-parameter mu_alpha (prior mean) for the truncated Normal prior on the parameter alpha := (alpha0 alpha1)'. Default: a 2x1 vector of zeros.

Sigma.alpha: hyper-parameter Sigma_alpha (prior covariance matrix) for the truncated Normal prior on the parameter alpha. Default: a 2x2 diagonal matrix whose variances are set to 1000, i.e., a diffuse prior. Note that the matrix must be symmetric positive definite.

mu.beta: hyper-parameter mu_beta (prior mean) for the truncated Normal prior on the parameter beta. Default: zero.

Sigma.beta: hyper-parameter Sigma_beta > 0 (prior variance) for the truncated Normal prior on the parameter beta. Default: 1000, i.e., a diffuse prior.

lambda: hyper-parameter lambda > 0 for the translated Exponential prior on the parameter nu. Default: 0.01.

delta: hyper-parameter delta >= 2 for the translated Exponential prior on the parameter nu. Default: 2 (to ensure the existence of the conditional variance).

control: list of control parameters (see Details).
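As an illustration, a call with user-specified prior hyper-parameters might look as follows (the values below are purely hypothetical; omitting these arguments gives the default diffuse prior):

## illustrative call with informative (hypothetical) prior hyper-parameters
out <- bayesGARCH(y,
                  mu.alpha = c(0.05, 0.10), Sigma.alpha = diag(c(0.1, 0.1)),
                  mu.beta = 0.7, Sigma.beta = 0.1,
                  lambda = 0.05, delta = 2)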
Details

The function bayesGARCH performs the Bayesian estimation of the GARCH(1,1) model with Student-t innovations. The underlying algorithm is based on Nakatsuma (1998, 2000) for generating the parameters of the GARCH(1,1) scedastic function alpha := (alpha0 alpha1)' and beta, and on Geweke (1993) and Deschamps (2006) for generating the degrees of freedom parameter nu. Further details and examples can be found in Ardia (2008) and Ardia and Hoogerheide (2010). Finally, we refer to Ardia (2009) for an extension of the algorithm to Markov-switching GARCH models.
The control argument is a list that can supply any of the following components:

n.chain: number of MCMC chain(s) to be generated. Default: n.chain=1.

l.chain: length of each MCMC chain. Default: l.chain=10000.

start.val: vector of starting values of the chain(s). Default: start.val=c(0.01,0.1,0.7,20). A matrix of size n x 4 containing starting values in its rows can also be provided; this generates n chains starting at the different row values, as illustrated below.

addPriorConditions: function which allows the user to add constraints on the model parameters. Default: NULL, i.e., no additional constraints are imposed (see below).

refresh: frequency of the reports. Default: refresh=10 iterations.

digits: number of printed digits in the reports. Default: digits=4.
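For instance, two chains with user-chosen starting values and less frequent reports can be requested as follows (a sketch; the starting values are hypothetical):

## two chains of length 5000, started at different (hypothetical) values
start <- rbind(c(0.01, 0.10, 0.70, 20),
               c(0.05, 0.05, 0.80, 10))
MCMC <- bayesGARCH(y, control = list(n.chain = 2, l.chain = 5000,
                                     start.val = start, refresh = 100))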
Value

A list of class mcmc.list (R package coda).
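Since the returned object is a coda mcmc.list, the standard coda tools apply directly. A minimal sketch, assuming MCMC is the object returned by bayesGARCH:

library("coda")
summary(MCMC)       # posterior summaries for each chain
gelman.diag(MCMC)   # convergence diagnostic (requires more than one chain)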
Note

By using bayesGARCH you agree to the following rules:

- You must cite Ardia and Hoogerheide (2010) in working papers and published papers that use bayesGARCH. Use citation("bayesGARCH").
- You must place the following URL in a footnote to help others find bayesGARCH: https://CRAN.R-project.org/package=bayesGARCH.
- You assume all risk for the use of bayesGARCH.
The GARCH(1,1) model with Student-t innovations may be written as follows:

y(t) = e(t) * (varrho * h(t))^(1/2), for t = 1, ..., T,

where the conditional variance equation is defined as:

h(t) := alpha0 + alpha1 * y(t-1)^2 + beta * h(t-1)

where alpha0 > 0 and alpha1, beta >= 0 to ensure a positive conditional variance. We set the initial variance to h(0) := 0 for convenience. The parameter varrho := (nu-2)/nu is a scaling factor which ensures that the conditional variance of y(t) is h(t). Finally, e(t) follows a Student-t distribution with nu degrees of freedom.
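To make the role of the scaling factor varrho concrete, here is a minimal sketch (not part of the package) that simulates from the model exactly as written above, with illustrative parameter values:

## simulate from the GARCH(1,1)-t model (illustrative values)
set.seed(123)
n.obs <- 750; alpha0 <- 0.03; alpha1 <- 0.2; beta <- 0.7; nu <- 8
varrho <- (nu - 2) / nu        # rescales e(t) so that Var(y(t) | past) = h(t)
y <- h <- numeric(n.obs)
y.prev <- 0; h.prev <- 0       # h(0) := 0, as in the text
for (t in 1:n.obs) {
  h[t] <- alpha0 + alpha1 * y.prev^2 + beta * h.prev
  y[t] <- rt(1, df = nu) * sqrt(varrho * h[t])
  y.prev <- y[t]; h.prev <- h[t]
}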
The prior distribution on alpha is a bivariate truncated Normal distribution:

p(alpha) prop N2(alpha | mu_alpha, Sigma_alpha) I[alpha > 0]

where mu_alpha is the prior mean vector, Sigma_alpha is the prior covariance matrix and I[alpha > 0] is the indicator function.

The prior distribution on beta is a univariate truncated Normal distribution:

p(beta) prop N(beta | mu_beta, Sigma_beta) I[beta > 0]

where mu_beta is the prior mean and Sigma_beta is the prior variance.

The prior distribution on nu is a translated Exponential distribution:

p(nu) = lambda * exp(-lambda * (nu - delta)) I[nu > delta]

where lambda > 0 and delta >= 2. The prior mean of nu is delta + 1/lambda.

The joint prior on the parameter psi := (alpha, beta, nu) is obtained by assuming prior independence:

p(psi) = p(alpha) * p(beta) * p(nu).

The default hyper-parameters mu_alpha, Sigma_alpha, mu_beta, Sigma_beta and lambda define a rather vague prior. The hyper-parameter delta >= 2 ensures the existence of the conditional variance; more generally, the existence of the kth conditional moment of e(t) is guaranteed by setting delta >= k.
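As a quick numerical illustration of the prior on nu (a sketch, using the default hyper-parameters), the analytical prior mean delta + 1/lambda = 2 + 1/0.01 = 102 can be checked by integration:

## translated Exponential prior for nu under the defaults
lambda <- 0.01; delta <- 2
p.nu <- function(nu) ifelse(nu > delta, lambda * exp(-lambda * (nu - delta)), 0)
delta + 1 / lambda                                    # analytical prior mean: 102
integrate(function(x) x * p.nu(x), delta, Inf)$value  # numerical check, ~102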
The Bayesian estimation of the GARCH(1,1) model with Normal innovations is obtained as a special case by setting lambda=100 and delta=500. In this case, the generated values for nu are centered around 500, which ensures approximate Normality of the innovations.
The function addPriorConditions allows the user to add prior conditions on the model parameters psi := (alpha0 alpha1 beta nu)'. The function must return TRUE if the constraint holds and FALSE otherwise. By default, the function is:

addPriorConditions <- function(psi)
{
  TRUE
}

and therefore does not add any constraint beyond the positivity of the parameters, which is already imposed through the (truncated) prior distributions for psi.
You simply need to modify addPriorConditions in order to add constraints on the model parameters psi. For instance, to impose the covariance-stationarity condition alpha1 + beta < 1, define the function addPriorConditions as follows:

addPriorConditions <- function(psi)
{
  psi[2] + psi[3] < 1
}
Note that adding prior constraints on the model parameters can diminish the acceptance rate and therefore lead to a very inefficient sampler. This would however indicate that the condition is not supported by the data.
The estimation strategy implemented in bayesGARCH is fully automatic and does not require any tuning of the MCMC sampler. The generation of the Markov chains is however time consuming, and estimating the model over several datasets on a daily basis can therefore take a significant amount of time. In this case, the algorithm can easily be parallelized by running single chains on several processors, as sketched below. Also, when the estimation is repeated over updated time series (i.e., time series with more recent observations), it is wise to start the algorithm at the posterior mean or median of the parameters obtained at the previous estimation step. The impact of the starting values (burn-in phase) is then likely to be smaller and the convergence faster.
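A minimal parallelization sketch, assuming the base parallel package on a Unix-alike (mclapply); combining the single chains as below is illustrative, not part of bayesGARCH, and the starting values are hypothetical:

## run 4 single chains in parallel (hypothetical starting values)
library("parallel")
starts <- list(c(0.01, 0.10, 0.70, 20), c(0.05, 0.05, 0.80, 10),
               c(0.02, 0.20, 0.60, 30), c(0.03, 0.15, 0.75, 15))
chains <- mclapply(starts, function(s)
  bayesGARCH(y, control = list(n.chain = 1, l.chain = 10000,
                               start.val = s))[[1]],
  mc.cores = 4)
MCMC <- do.call(coda::mcmc.list, chains)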
Finally, note that as with any MH algorithm, the sampler can get stuck at a given value, so that the chain does not move anymore. However, the sampler uses tailor-made candidate densities that are especially 'constructed' at each step, so it is almost impossible for this MCMC sampler to get stuck at a given value for many subsequent draws. In the unlikely case that such ill behavior does occur, one could scale the data (to have standard deviation 1), or run the algorithm with different initial values or a different random seed.
Author(s)

David Ardia david.ardia.ch@gmail.com
References

Ardia, D. (2009) Bayesian Estimation of a Markov-Switching Threshold Asymmetric GARCH Model with Student-t Innovations. Econometrics Journal 12(1), pp. 105-126. doi: 10.1111/j.1368-423X.2008.00253.x

Ardia, D., Hoogerheide, L.F. (2010) Bayesian Estimation of the GARCH(1,1) Model with Student-t Innovations. R Journal 2(2), pp. 41-47. doi: 10.32614/RJ-2010-014

Ardia, D. (2008) Financial Risk Management with Bayesian Estimation of GARCH Models. Lecture Notes in Economics and Mathematical Systems 612. Springer-Verlag, Berlin, Germany. ISBN 978-3-540-78656-6, e-ISBN 978-3-540-78657-3. doi: 10.1007/978-3-540-78657-3

Deschamps, P.J. (2006) A Flexible Prior Distribution for Markov Switching Autoregressions with Student-t Errors. Journal of Econometrics 133, pp. 153-190.

Geweke, J.F. (1993) Bayesian Treatment of the Independent Student-t Linear Model. Journal of Applied Econometrics 8, pp. 19-40.

Nakatsuma, T. (2000) Bayesian Analysis of ARMA-GARCH Models: A Markov Chain Sampling Approach. Journal of Econometrics 95(1), pp. 57-69.

Nakatsuma, T. (1998) A Markov-Chain Sampling Algorithm for GARCH Models. Studies in Nonlinear Dynamics and Econometrics 3(2), pp. 107-117.
See Also

garchFit (R package fGarch) for the classical Maximum Likelihood estimation of GARCH models.
Examples

## !!! INCREASE THE NUMBER OF MCMC ITERATIONS !!!

## LOAD DATA
data(dem2gbp)
y <- dem2gbp[1:750]

## RUN THE SAMPLER (2 chains)
MCMC <- bayesGARCH(y, control = list(n.chain = 2, l.chain = 200))

## MCMC ANALYSIS (using coda)
plot(MCMC)

## FORM THE POSTERIOR SAMPLE
smpl <- formSmpl(MCMC, l.bi = 50)

## POSTERIOR STATISTICS
summary(smpl)
smpl <- as.matrix(smpl)
pairs(smpl)

## GARCH(1,1) WITH NORMAL INNOVATIONS
MCMC <- bayesGARCH(y, lambda = 100, delta = 500,
                   control = list(n.chain = 2, l.chain = 200))

## GARCH(1,1) WITH NORMAL INNOVATIONS AND
## WITH COVARIANCE STATIONARITY CONDITION
addPriorConditions <- function(psi){psi[2] + psi[3] < 1}
MCMC <- bayesGARCH(y, lambda = 100, delta = 500,
                   control = list(n.chain = 2, l.chain = 200,
                                  addPriorConditions = addPriorConditions))
Example output

Sampler progress for the first run (Student-t innovations, 2 chains of length 200):

chain: 1 iteration: 10 parameters: 0.0473 0.1646 0.668 77.1187
chain: 1 iteration: 20 parameters: 0.0452 0.2181 0.6324 50.9228
chain: 1 iteration: 30 parameters: 0.0342 0.2186 0.6677 46.393
chain: 1 iteration: 40 parameters: 0.0271 0.2128 0.7251 32.807
chain: 1 iteration: 50 parameters: 0.0302 0.1569 0.7286 24.3119
chain: 1 iteration: 60 parameters: 0.0267 0.1763 0.7486 18.3085
chain: 1 iteration: 70 parameters: 0.0252 0.2114 0.7151 15.2974
chain: 1 iteration: 80 parameters: 0.0271 0.2127 0.7061 14.1358
chain: 1 iteration: 90 parameters: 0.0271 0.1809 0.7395 13.2707
chain: 1 iteration: 100 parameters: 0.0343 0.1959 0.6613 14.7607
chain: 1 iteration: 110 parameters: 0.0511 0.1752 0.6677 18.9667
chain: 1 iteration: 120 parameters: 0.0394 0.2331 0.6639 17.9445
chain: 1 iteration: 130 parameters: 0.0292 0.1938 0.712 11.2606
chain: 1 iteration: 140 parameters: 0.0255 0.1431 0.7562 11.3813
chain: 1 iteration: 150 parameters: 0.0283 0.1485 0.7359 11.6727
chain: 1 iteration: 160 parameters: 0.0216 0.1696 0.77 7.9163
chain: 1 iteration: 170 parameters: 0.0279 0.1705 0.7194 8.6845
chain: 1 iteration: 180 parameters: 0.0378 0.1699 0.6625 7.627
chain: 1 iteration: 190 parameters: 0.0403 0.3083 0.6039 7.1882
chain: 1 iteration: 200 parameters: 0.0442 0.2481 0.6187 6.409
chain: 2 iteration: 10 parameters: 0.0339 0.2584 0.6653 96.6218
chain: 2 iteration: 20 parameters: 0.0391 0.2504 0.6328 72.3617
chain: 2 iteration: 30 parameters: 0.0412 0.2245 0.6537 59.483
chain: 2 iteration: 40 parameters: 0.0415 0.2462 0.6444 54.6463
chain: 2 iteration: 50 parameters: 0.0346 0.2415 0.6749 34.685
chain: 2 iteration: 60 parameters: 0.0285 0.2212 0.7076 46.413
chain: 2 iteration: 70 parameters: 0.0332 0.212 0.7045 33.2349
chain: 2 iteration: 80 parameters: 0.038 0.1717 0.6982 34.5699
chain: 2 iteration: 90 parameters: 0.0299 0.1593 0.7374 36.961
chain: 2 iteration: 100 parameters: 0.0279 0.2314 0.6875 38.0709
chain: 2 iteration: 110 parameters: 0.0443 0.2371 0.6127 30.4726
chain: 2 iteration: 120 parameters: 0.054 0.246 0.5826 25.4455
chain: 2 iteration: 130 parameters: 0.0509 0.2081 0.6493 20.2063
chain: 2 iteration: 140 parameters: 0.0459 0.2269 0.6169 30.8129
chain: 2 iteration: 150 parameters: 0.0412 0.2313 0.6341 27.2102
chain: 2 iteration: 160 parameters: 0.0424 0.2355 0.6543 26.4526
chain: 2 iteration: 170 parameters: 0.0294 0.2714 0.6836 13.8314
chain: 2 iteration: 180 parameters: 0.0287 0.1452 0.7525 11.4426
chain: 2 iteration: 190 parameters: 0.0236 0.1247 0.7901 10.2146
chain: 2 iteration: 200 parameters: 0.0224 0.1446 0.7598 10.755
Output of formSmpl and summary(smpl):

n.chain: 2
l.chain: 200
l.bi: 50
batch.size: 1
smpl size: 300
Iterations = 1:300
Thinning interval = 1
Number of chains = 1
Sample size per chain = 300
1. Empirical mean and standard deviation for each variable,
plus standard error of the mean:
Mean SD Naive SE Time-series SE
alpha0 0.0335 0.009424 0.0005441 0.002662
alpha1 0.2031 0.042057 0.0024281 0.009744
beta 0.6932 0.056524 0.0032634 0.018136
nu 19.8380 10.613832 0.6127899 4.885155
2. Quantiles for each variable:
2.5% 25% 50% 75% 97.5%
alpha0 0.01767 0.02665 0.03269 0.03996 0.05354
alpha1 0.13858 0.17162 0.19762 0.23013 0.29714
beta 0.58734 0.64866 0.69819 0.73621 0.78968
nu 6.92325 11.28248 16.00333 27.33288 43.30813
Sampler progress for the second run (Normal innovations, lambda = 100 and delta = 500):

chain: 1 iteration: 10 parameters: 0.0355 0.2077 0.6891 500.0032
chain: 1 iteration: 20 parameters: 0.0387 0.2114 0.669 500.0888
chain: 1 iteration: 30 parameters: 0.0507 0.2581 0.5966 500.0272
chain: 1 iteration: 40 parameters: 0.0806 0.2402 0.4823 500.0264
chain: 1 iteration: 50 parameters: 0.0705 0.2742 0.5368 500.0024
chain: 1 iteration: 60 parameters: 0.0639 0.2045 0.5896 500.0072
chain: 1 iteration: 70 parameters: 0.0394 0.2132 0.6762 500.0097
chain: 1 iteration: 80 parameters: 0.0622 0.2643 0.5421 500.0102
chain: 1 iteration: 90 parameters: 0.0659 0.2293 0.5487 500.0048
chain: 1 iteration: 100 parameters: 0.0476 0.2303 0.6407 500.0047
chain: 1 iteration: 110 parameters: 0.0637 0.2082 0.5981 500.0041
chain: 1 iteration: 120 parameters: 0.049 0.2093 0.6264 500.0355
chain: 1 iteration: 130 parameters: 0.044 0.2744 0.6372 500.0039
chain: 1 iteration: 140 parameters: 0.0438 0.1699 0.7013 500.0087
chain: 1 iteration: 150 parameters: 0.0348 0.2298 0.6985 500.0098
chain: 1 iteration: 160 parameters: 0.0423 0.1962 0.6877 500.0008
chain: 1 iteration: 170 parameters: 0.0568 0.2556 0.6006 500.0024
chain: 1 iteration: 180 parameters: 0.0505 0.2499 0.6105 500.0073
chain: 1 iteration: 190 parameters: 0.0576 0.2092 0.5892 500.0066
chain: 1 iteration: 200 parameters: 0.044 0.2544 0.6369 500.0003
chain: 2 iteration: 10 parameters: 0.0321 0.2527 0.6922 500.0014
chain: 2 iteration: 20 parameters: 0.038 0.2214 0.6558 500.0024
chain: 2 iteration: 30 parameters: 0.0465 0.1758 0.6608 500.0029
chain: 2 iteration: 40 parameters: 0.0358 0.2436 0.6501 500.0061
chain: 2 iteration: 50 parameters: 0.0406 0.2012 0.6775 500.0004
chain: 2 iteration: 60 parameters: 0.0317 0.1906 0.7216 500.0231
chain: 2 iteration: 70 parameters: 0.0304 0.2327 0.7026 500.0002
chain: 2 iteration: 80 parameters: 0.0411 0.2313 0.6314 500.0117
chain: 2 iteration: 90 parameters: 0.0535 0.1916 0.6232 500.0006
chain: 2 iteration: 100 parameters: 0.0568 0.2045 0.6432 500.0153
chain: 2 iteration: 110 parameters: 0.0644 0.3206 0.5004 500.001
chain: 2 iteration: 120 parameters: 0.0623 0.186 0.6353 500.0052
chain: 2 iteration: 130 parameters: 0.0618 0.2287 0.5527 500.0078
chain: 2 iteration: 140 parameters: 0.0406 0.16 0.7109 500.0067
chain: 2 iteration: 150 parameters: 0.0382 0.1442 0.7131 500.0006
chain: 2 iteration: 160 parameters: 0.0449 0.2145 0.6555 500.0022
chain: 2 iteration: 170 parameters: 0.0376 0.2475 0.6433 500.0084
chain: 2 iteration: 180 parameters: 0.0561 0.1922 0.6331 500.0141
chain: 2 iteration: 190 parameters: 0.0382 0.2231 0.6664 500.0105
chain: 2 iteration: 200 parameters: 0.0244 0.2049 0.7433 500.0016
Sampler progress for the third run (Normal innovations with the covariance-stationarity condition):

chain: 1 iteration: 10 parameters: 0.0311 0.1806 0.7297 500.002
chain: 1 iteration: 20 parameters: 0.0325 0.2461 0.6704 500.056
chain: 1 iteration: 30 parameters: 0.0353 0.1912 0.6993 500.0242
chain: 1 iteration: 40 parameters: 0.0312 0.2285 0.6985 500.0133
chain: 1 iteration: 50 parameters: 0.0379 0.2141 0.697 500.0061
chain: 1 iteration: 60 parameters: 0.0383 0.2301 0.6628 500.0026
chain: 1 iteration: 70 parameters: 0.0398 0.171 0.7084 500.0013
chain: 1 iteration: 80 parameters: 0.0319 0.1833 0.713 500.009
chain: 1 iteration: 90 parameters: 0.038 0.2019 0.6734 500.0026
chain: 1 iteration: 100 parameters: 0.0317 0.2058 0.6946 500.0013
chain: 1 iteration: 110 parameters: 0.0502 0.1687 0.6692 500.005
chain: 1 iteration: 120 parameters: 0.0239 0.2251 0.7405 500.0008
chain: 1 iteration: 130 parameters: 0.0201 0.1897 0.7577 500.0066
chain: 1 iteration: 140 parameters: 0.0248 0.1537 0.774 500.0015
chain: 1 iteration: 150 parameters: 0.0279 0.2363 0.703 500.0006
chain: 1 iteration: 160 parameters: 0.0302 0.2034 0.7189 500.0003
chain: 1 iteration: 170 parameters: 0.0358 0.2138 0.6975 500.0027
chain: 1 iteration: 180 parameters: 0.0412 0.2152 0.6578 500.0167
chain: 1 iteration: 190 parameters: 0.0433 0.1965 0.6787 500.0012
chain: 1 iteration: 200 parameters: 0.039 0.219 0.6536 500.0017
chain: 2 iteration: 10 parameters: 0.0272 0.1807 0.7378 500.0139
chain: 2 iteration: 20 parameters: 0.0455 0.1636 0.6786 500.0264
chain: 2 iteration: 30 parameters: 0.0342 0.1638 0.7338 500.0157
chain: 2 iteration: 40 parameters: 0.0386 0.1807 0.7077 500.0027
chain: 2 iteration: 50 parameters: 0.0379 0.22 0.6607 500.0219
chain: 2 iteration: 60 parameters: 0.0526 0.3436 0.5422 500.0285
chain: 2 iteration: 70 parameters: 0.107 0.2086 0.4865 500.0006
chain: 2 iteration: 80 parameters: 0.0783 0.2861 0.4534 500.029
chain: 2 iteration: 90 parameters: 0.0923 0.3137 0.4085 500.0084
chain: 2 iteration: 100 parameters: 0.0836 0.3554 0.4355 500.0082
chain: 2 iteration: 110 parameters: 0.0762 0.293 0.5142 500.0138
chain: 2 iteration: 120 parameters: 0.0742 0.2344 0.5108 500.0153
chain: 2 iteration: 130 parameters: 0.0415 0.2848 0.5945 500.0016
chain: 2 iteration: 140 parameters: 0.0538 0.1666 0.6315 500.0068
chain: 2 iteration: 150 parameters: 0.0494 0.2902 0.565 500
chain: 2 iteration: 160 parameters: 0.0523 0.2231 0.6561 500.0044
chain: 2 iteration: 170 parameters: 0.0423 0.2709 0.6179 500.0089
chain: 2 iteration: 180 parameters: 0.0442 0.2032 0.6636 500.0001
chain: 2 iteration: 190 parameters: 0.0545 0.2587 0.5859 500.0127
chain: 2 iteration: 200 parameters: 0.0504 0.2609 0.5913 500.0031