```r
knitr::opts_chunk$set(
  comment = "#>",
  collapse = TRUE,
  out.width = "70%",
  fig.align = "center",
  fig.width = 6,
  fig.asp = .618
)
orig_opts <- options("digits")
options(digits = 3)
```

```r
library(bvhar)
```
This vignette shows how the VAR and VHAR models work and how to fit them with this package.
This package includes several datasets. Here we use the CBOE ETF volatility index data (`etf_vix`). Since this is just an example, we arbitrarily pick a small number of variables: the gold, crude oil, euro currency, and China ETF volatility indices.
```r
var_idx <- c("GVZCLS", "OVXCLS", "EVZCLS", "VXFXICLS")
etf <- etf_vix |>
  dplyr::select(dplyr::all_of(var_idx))
etf
```
For evaluation, split the data: the last 19 observations form the test set. The `divide_ts()` function splits a time series into a train-test pair. Another vignette shows how to perform out-of-sample forecasting.
```r
h <- 19
etf_eval <- divide_ts(etf, h) # Try ?divide_ts
etf_train <- etf_eval$train # train
etf_test <- etf_eval$test # test
# dimension---------
m <- ncol(etf)
```
This package specifies the VAR(p) model by

$$\Y_t = \bc + \B_1 \Y_{t - 1} + \ldots + \B_p \Y_{t - p} + \E_t$$

where $\E_t \sim N(\mathbf{0}_m, \Sigma_e)$.
```r
var_lag <- 5
```
The package fits VAR(p = 5) based on
$$Y_0 = X_0 A + Z$$
where
$$ Y_0 = \begin{bmatrix} \by_{p + 1}^T \\ \by_{p + 2}^T \\ \vdots \\ \by_n^T \end{bmatrix}_{s \times m} \equiv Y_{p + 1} \in \R^{s \times m} $$
by `build_y0()` and
$$ X_0 = \left[\begin{array}{c|c|c|c} \by_p^T & \cdots & \by_1^T & 1 \\ \by_{p + 1}^T & \cdots & \by_2^T & 1 \\ \vdots & \vdots & \cdots & \vdots \\ \by_{T - 1}^T & \cdots & \by_{T - p}^T & 1 \end{array}\right]_{s \times k} = \begin{bmatrix} Y_p & Y_{p - 1} & \cdots & \mathbf{1}_{T - p} \end{bmatrix} \in \R^{s \times k} $$
by `build_design()`. The coefficient matrix has the form
$$ A = \begin{bmatrix} A_1^T \\ \vdots \\ A_p^T \\ \bc^T \end{bmatrix} \in \R^{k \times m} $$
This form also applies to the other models below.
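The package builds these matrices with `build_y0()` and `build_design()`. As a rough base-R sketch (not the package internals, and using hypothetical variable names), the following constructs $Y_0$ and $X_0$ for the training data with the lag chosen above and computes the least-squares estimate of $A$, which should match the fitted coefficients up to numerical error.

```r
# Base-R sketch of the Y0 / X0 layout above (illustration only)
p <- var_lag                                  # lag chosen above
y_mat <- as.matrix(etf_train)
n <- nrow(y_mat)
Y0 <- y_mat[(p + 1):n, , drop = FALSE]        # s x m response block
X0 <- cbind(
  do.call(cbind, lapply(1:p, function(l) y_mat[(p + 1 - l):(n - l), , drop = FALSE])),
  1                                           # constant column
)                                             # s x (mp + 1) design
A_hat <- solve(crossprod(X0), crossprod(X0, Y0)) # multivariate least squares
dim(A_hat)                                    # (mp + 1) x m, matching A above
```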
Use `var_lm(y, p)` to fit VAR(p). You can specify `type = "none"` to fit the model without a constant term.
```r
(fit_var <- var_lm(etf_train, var_lag))
```
The package returns an S3 object:
```r
# class---------------
class(fit_var)
# inheritance---------
is.varlse(fit_var)
# names---------------
names(fit_var)
```
Consider the vector HAR (VHAR) model:
$$\Y_t = \bc + \Phi^{(d)} \Y_{t - 1} + \Phi^{(w)} \Y_{t - 1}^{(w)} + \Phi^{(m)} \Y_{t - 1}^{(m)} + \E_t$$
where $\Y_t$ is daily RV and
$$\Y_t^{(w)} = \frac{1}{5} \left( \Y_t + \cdots + \Y_{t - 4} \right)$$
is weekly RV
and
$$\Y_t^{(m)} = \frac{1}{22} \left( \Y_t + \cdots + \Y_{t - 21} \right)$$
is monthly RV. This model can be expressed by
$$Y_0 = X_1 \Phi + Z$$
where
$$ \Phi = \begin{bmatrix} \Phi^{(d)T} \\ \Phi^{(w)T} \\ \Phi^{(m)T} \\ \bc^T \end{bmatrix} \in \R^{(3m + 1) \times m} $$
Let $\mathbb{C}_0$ be
$$ \mathbb{C}_0 \defn \begin{bmatrix} 1 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ 1 / 5 & 1 / 5 & \cdots & 1 / 5 & 0 & \cdots & 0 \\ 1 / 22 & 1 / 22 & \cdots & 1 / 22 & 1 / 22 & \cdots & 1 / 22 \end{bmatrix} \otimes I_m \in \R^{3m \times 22m} $$
and let $\mathbb{C}_{HAR}$ be
$$ \mathbb{C}_{HAR} \defn \left[\begin{array}{c|c} \mathbb{C}_0 & \mathbf{0}_{3m} \\ \hline \mathbf{0}_{3m}^T & 1 \end{array}\right] \in \R^{(3m + 1) \times (22m + 1)} $$
Then, with $X_0$ defined as in VAR(22),
$$ X_1 = X_0 \mathbb{C}_{HAR}^T = \begin{bmatrix} \by_{22}^T & \by_{22}^{(w)T} & \by_{22}^{(m)T} & 1 \\ \by_{23}^T & \by_{23}^{(w)T} & \by_{23}^{(m)T} & 1 \\ \vdots & \vdots & \vdots & \vdots \\ \by_{T - 1}^T & \by_{T - 1}^{(w)T} & \by_{T - 1}^{(m)T} & 1 \end{bmatrix} \in \R^{s \times (3m + 1)} $$
This package fits VHAR by scaling the VAR(22) design with $\mathbb{C}_{HAR}$ (`scale_har(m, week = 5, month = 22)`).
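As an illustration only (not the package's internal code), the sketch below assembles $\mathbb{C}_0$ and $\mathbb{C}_{HAR}$ in base R from the block pattern above; `scale_har(m, week = 5, month = 22)` is the package helper, and the exact block ordering here is an assumption based on the displayed formulas.

```r
# Base-R sketch of the HAR scaling matrix (illustration only)
week <- 5
month <- 22
c0 <- rbind(
  c(1, rep(0, month - 1)),                      # daily selector
  c(rep(1 / week, week), rep(0, month - week)), # weekly average weights
  rep(1 / month, month)                         # monthly average weights
)
C0 <- kronecker(c0, diag(m))                    # 3m x 22m
C_HAR <- rbind(
  cbind(C0, 0),                                 # zero column for the constant
  c(rep(0, month * m), 1)                       # keep the constant term
)                                               # (3m + 1) x (22m + 1)
dim(C_HAR)
```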
Use `vhar_lm(y)` to fit VHAR. You can specify `type = "none"` to fit the model without a constant term.
```r
(fit_har <- vhar_lm(etf_train))
```
```r
# class----------------
class(fit_har)
# inheritance----------
is.varlse(fit_har)
is.vharlse(fit_har)
# complements----------
names(fit_har)
```
This section gives examples for two deprecated functions. Both `bvar_minnesota()` and `bvar_flat()` will be integrated into `var_bayes()` and removed in the next version.
First, specify the prior using `set_bvar(sigma, lambda, delta, eps = 1e-04)`.
```r
bvar_lag <- 5
sig <- apply(etf_train, 2, sd) # sigma vector
lam <- .2 # lambda
delta <- rep(0, m) # delta vector (0 vector since RV stationary)
eps <- 1e-04 # very small number
(bvar_spec <- set_bvar(sig, lam, delta, eps))
```
In turn, `bvar_minnesota(y, p, bayes_spec, include_mean = TRUE)` fits BVAR(p).

- `y`: Multivariate time series data. It should be a data frame or matrix, i.e. every column is numeric. Each column corresponds to a variable, so the data should be in wide format.
- `p`: Order of BVAR.
- `bayes_spec`: Output of `set_bvar()`.
- `include_mean = TRUE`: By default, the constant term is included in the model.

```r
(fit_bvar <- bvar_minnesota(etf_train, bvar_lag, num_iter = 10, bayes_spec = bvar_spec))
```
The result is a `bvarmn` object. For Bayesian computation, it also inherits other classes such as `normaliw` and `bvharmod`.
```r
# class---------------
class(fit_bvar)
# inheritance---------
is.bvarmn(fit_bvar)
# names---------------
names(fit_bvar)
```
Ghosh et al. (2018) provide a flat, i.e. non-informative, prior for the covariance matrix. Use `set_bvar_flat(U)`.
```r
(flat_spec <- set_bvar_flat(U = 5000 * diag(m * bvar_lag + 1))) # c * I
```
Then fit the model with `bvar_flat(y, p, bayes_spec, include_mean = TRUE)`:
```r
(fit_ghosh <- bvar_flat(etf_train, bvar_lag, num_iter = 10, bayes_spec = flat_spec))
```
```r
# class---------------
class(fit_ghosh)
# inheritance---------
is.bvarflat(fit_ghosh)
# names---------------
names(fit_ghosh)
```
Consider the VAR(22) form of VHAR.
$$ \begin{aligned} \Y_t = \bc & + \left( \Phi^{(d)} + \frac{1}{5} \Phi^{(w)} + \frac{1}{22} \Phi^{(m)} \right) \Y_{t - 1} \\ & + \left( \frac{1}{5} \Phi^{(w)} + \frac{1}{22} \Phi^{(m)} \right) \Y_{t - 2} + \cdots + \left( \frac{1}{5} \Phi^{(w)} + \frac{1}{22} \Phi^{(m)} \right) \Y_{t - 5} \\ & + \frac{1}{22} \Phi^{(m)} \Y_{t - 6} + \cdots + \frac{1}{22} \Phi^{(m)} \Y_{t - 22} \end{aligned} $$
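Since $X_1 = X_0 \mathbb{C}_{HAR}^T$, the VAR(22) coefficient implied by VHAR is $A = \mathbb{C}_{HAR}^T \Phi$. The sketch below reuses the `C_HAR` matrix built earlier with a placeholder `Phi` (random numbers, not an estimate) to check that the lag-1 block matches the first line of the expansion above.

```r
# Check the lag-1 block of the implied VAR(22) coefficient (illustration only)
Phi <- matrix(rnorm((3 * m + 1) * m), nrow = 3 * m + 1) # placeholder, not an estimate
A_implied <- t(C_HAR) %*% Phi                           # (22m + 1) x m
all.equal(
  A_implied[1:m, ],
  Phi[1:m, ] + Phi[(m + 1):(2 * m), ] / 5 + Phi[(2 * m + 1):(3 * m), ] / 22
)
```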
What does the Minnesota prior mean in the VHAR model? For simplicity, write the coefficient matrices as $\Phi^{(1)}, \Phi^{(2)}, \Phi^{(3)}$. If we apply the prior in the same way, the Minnesota moments become
$$ E \left[ (\Phi^{(l)})_{ij} \right] = \begin{cases} \delta_i & j = i, \; l = 1 \\ 0 & o/w \end{cases} \quad \Var \left[ (\Phi^{(l)})_{ij} \right] = \begin{cases} \frac{\lambda^2}{l^2} & j = i \\ \nu \frac{\lambda^2}{l^2} \frac{\sigma_i^2}{\sigma_j^2} & o/w \end{cases} $$
We call this the VAR-type Minnesota prior, or BVHAR-S. `set_bvhar(sigma, lambda, delta, eps = 1e-04)` specifies the VAR-type Minnesota prior.
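To make these moments concrete, the sketch below evaluates the implied prior standard deviations using the `sig` and `lam` values set earlier; `nu` is a hypothetical cross-variable tightness chosen only for this illustration, not a package default.

```r
# Prior standard deviations implied by the moments above (illustration only)
nu <- 0.1                                 # hypothetical cross-variable tightness
l <- 1:3                                  # daily, weekly, monthly terms
own_sd <- lam / l                         # sd for j = i: sqrt(lambda^2 / l^2)
cross_sd <- sqrt(nu) * (lam / l) * sig[1] / sig[2] # e.g. i = 1, j = 2
own_sd
cross_sd
```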
```r
(bvhar_spec_v1 <- set_bvhar(sig, lam, delta, eps))
```
`bvhar_minnesota(y, har = c(5, 22), bayes_spec, include_mean = TRUE)` fits BVHAR with this prior, which is the default prior setting. Like the functions above, this one will also be integrated into `vhar_bayes()` and removed in the next version.
```r
(fit_bvhar_v1 <- bvhar_minnesota(etf_train, num_iter = 10, bayes_spec = bvhar_spec_v1))
```
This model is a `bvharmn` object.
```r
# class---------------
class(fit_bvhar_v1)
# inheritance---------
is.bvharmn(fit_bvhar_v1)
# names---------------
names(fit_bvhar_v1)
```
Now set $\delta_i$ separately for the daily, weekly, and monthly coefficient matrices in the Minnesota moments above:
$$ E \left[ (\Phi^{(l)})_{ij} \right] = \begin{cases} d_i & j = i, \; l = 1 \\ w_i & j = i, \; l = 2 \\ m_i & j = i, \; l = 3 \end{cases} $$
i.e. instead of one `delta` vector, set three vectors:

- `daily`
- `weekly`
- `monthly`
This is called the VHAR-type Minnesota prior, or BVHAR-L. `set_weight_bvhar(sigma, lambda, eps, daily, weekly, monthly)` defines BVHAR-L.
```r
daily <- rep(.1, m)
weekly <- rep(.1, m)
monthly <- rep(.1, m)
(bvhar_spec_v2 <- set_weight_bvhar(sig, lam, eps, daily, weekly, monthly))
```
Pass this specification to the `bayes_spec` argument of `bvhar_minnesota()` to use this prior in the same way.
```r
fit_bvhar_v2 <- bvhar_minnesota(
  etf_train,
  num_iter = 10,
  bayes_spec = bvhar_spec_v2
)
fit_bvhar_v2
```
```r
options(orig_opts)
```