fkf.SP: Fast Kalman Filtering using Sequential Processing.


Description


The fkf.SP function performs fast and flexible Kalman filtering using sequential processing. It is designed for efficient parameter estimation through maximum likelihood estimation. Sequential processing (SP) is a univariate treatment of a multivariate series of observations that increases computational efficiency over traditional Kalman filtering in the general case. SP requires the additional assumption that the disturbances of the measurement equation are independent. fkf.SP is based on the fkf function of the FKF package but is, in general, a faster Kalman filtering method. fkf and fkf.SP share identical arguments (except for the GGt argument, see Arguments). fkf.SP is compatible with missing observations (i.e. NAs in argument yt).

Usage

fkf.SP(
  a0,
  P0,
  dt,
  ct,
  Tt,
  Zt,
  HHt,
  GGt,
  yt,
  verbose = FALSE,
  smoothing = FALSE
)

Arguments

a0

A vector giving the initial value/estimation of the state variable

P0

A matrix giving the variance of a0

dt

A matrix giving the intercept of the transition equation

ct

A matrix giving the intercept of the measurement equation

Tt

An array giving the factor of the transition equation

Zt

An array giving the factor of the measurement equation

HHt

An array giving the variance of the innovations of the transition equation

GGt

A vector giving the diagonal elements of the matrix for the variance of disturbances of the measurement equation. Covariance between disturbances is not supported under the sequential processing method.

yt

A matrix containing the observations. NA values are allowed

verbose

A logical. When verbose = TRUE, a list object is returned that provides the filtered values of the Kalman filter (see Value).

smoothing

A logical. When smoothing = TRUE, Kalman smoothing is additionally performed and the smoothed values are returned (see Value).

Details

Parameters:

The fkf.SP function builds upon the fkf function of the FKF package by adjusting the Kalman filtering algorithm to utilize sequential processing. Sequential processing can result in significant decreases in processing time over the traditional Kalman filter algorithm. The processing time of sequential processing has been empirically shown to grow linearly with respect to the dimension of y_t, rather than exponentially as is the case with the traditional Kalman filter algorithm (Aspinall et al., 2022, P104).

The fkf.SP and fkf functions feature highly similar arguments for compatibility purposes; only argument GGt has changed from an array type object to a vector or matrix type object. The fkf.SP function takes the additional assumption over the fkf function that the disturbances of the measurement equation are independent; a requirement of SP (see below).

Parameters can either be constant or deterministic time-varying. Assume the number of discrete time observations is n, i.e. y = y_t, where t = 1, ..., n. Let m be the dimension of the state variable and d the dimension of the observations. Then, the parameters admit the following classes and dimensions (a short construction sketch follows the list below):

dt either a m * n (time-varying) or a m * 1 (constant) matrix.
Tt either a m * m * n or a m * m * 1 array.
HHt either a m * m * n or a m * m * 1 array.
ct either a d * n or a d * 1 matrix.
Zt either a d * m * n or a d * m * 1 array.
GGt either a d * n (time-varying) or a d * 1 (constant) matrix.
yt a d * n matrix.
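
For concreteness, the following is a minimal sketch of how constant parameters with these dimensions might be constructed for a hypothetical model with m = 2 state variables, d = 2 observation series and n = 100 time points (all names and values here are illustrative only):

## Hypothetical dimensions: m = 2 states, d = 2 observations, n = 100 time points
m <- 2; d <- 2; n <- 100

dt  <- matrix(0, nrow = m, ncol = 1)             # m * 1 (constant)
Tt  <- array(diag(m), dim = c(m, m, 1))          # m * m * 1
HHt <- array(0.1 * diag(m), dim = c(m, m, 1))    # m * m * 1
ct  <- matrix(0, nrow = d, ncol = 1)             # d * 1
Zt  <- array(diag(d), dim = c(d, m, 1))          # d * m * 1
GGt <- matrix(0.5, nrow = d, ncol = 1)           # d * 1 (diagonal variances only)
yt  <- matrix(rnorm(d * n), nrow = d, ncol = n)  # d * n observations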

State Space Form

The following notation follows that of Koopman et al. (1999). The Kalman filter is characterized by the transition and measurement equations:

\alpha_{t+1} = d_t + T_t \alpha_t + H_t \eta_t

y_t = c_t + Z_t \alpha_t + G_t \epsilon_t

where \eta_t and \epsilon_t are i.i.d. N(0, I_m) and i.i.d. N(0, I_d), respectively, and \alpha_t denotes the state vector. The parameters admit the following dimensions:

\alpha_t \in R^m,   d_t \in R^m,   \eta_t \in R^m
T_t \in R^{m \times m},   H_t \in R^{m \times m}
y_t \in R^d,   c_t \in R^d,   \epsilon_t \in R^d
Z_t \in R^{d \times m},   G_t \in R^{d \times d}

Note that fkf.SP takes as input HHt and GGt, which correspond to H_t H_t' and diag(G_t)^2, respectively.
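
For example, given hypothetical system matrices H and G (with G diagonal, as SP requires; the values below are illustrative only), the inputs could be formed as follows:

## Hypothetical 2-state, 2-observation system matrices:
H <- matrix(c(0.5, 0.1,
              0.0, 0.3), nrow = 2, byrow = TRUE)   # m * m
G <- diag(c(0.8, 1.2))                             # d * d, diagonal under SP

HHt <- H %*% t(H)          # variance of the transition innovations, H_t H_t'
GGt <- matrix(diag(G)^2)   # diagonal measurement variances only, diag(G_t)^2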

Sequential Processing Iteration:

Traditional Kalman filtering takes the entire observation vector y_t as the item for analysis. SP is an alternative approach that instead filters the elements of y_t one at a time. Sequential processing is described in the textbook of Durbin and Koopman (2001) and is outlined below.

Let p equal the number of observations at time t (i.e. when allowing for possible missing observations, p <= d). The SP iteration treats the vector series y_1, ..., y_n instead as the scalar series y_{1,1}, ..., y_{1,p_1}, y_{2,1}, ..., y_{n,p_n}. This univariate treatment of the multivariate series has the advantage that the prediction error variance F_t becomes 1 * 1 (a scalar), avoiding the calculation of both the inverse and determinant of a p * p matrix. This can increase computational efficiency, especially when the number of observations p is large.

For any time point, the observation vector is given by:

y_t' = (y_{t,1}, \cdots, y_{t,p})

The filtering equations are written as:

a_{t,i+1} = a_{t,i} + K_{t,i} v_{t,i}

P_{t,i+1} = P_{t,i} - K_{t,i} F_{t,i} K_{t,i}'

where:

\hat{y}_{t,i} = c_{t,i} + Z_{t,i} a_{t,i}

v_{t,i} = y_{t,i} - \hat{y}_{t,i}

F_{t,i} = Z_{t,i} P_{t,i} Z_{t,i}' + GGt_{t,i}

K_{t,i} = P_{t,i} Z_{t,i}' F_{t,i}^{-1}

i = 1, \cdots, p

Transition from time t to t+1 occurs through the standard transition equations:

\alpha_{t+1,1} = d_t + T_t \alpha_{t,p}

P_{t+1,1} = T_t P_{t,p} T_t' + H_t H_t'

The log-likelihood at time t is given by:

\log L_t = -\frac{p}{2} \log(2\pi) - \frac{1}{2} \sum_{i=1}^{p} \left( \log F_i + \frac{v_i^2}{F_i} \right)

The log-likelihood of the complete set of observations is then:

\log L = \sum_{t=1}^{n} \log L_t
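
To make the recursion concrete, below is a minimal pure-R sketch of the sequential processing filter for constant parameters supplied as plain matrices (with GGt holding only the diagonal measurement variances), returning the log-likelihood. The function name and layout are illustrative only; the compiled fkf.SP routine should be used in practice.

## Illustrative (and much slower) pure-R version of the SP recursion above:
sp_filter_logLik <- function(a0, P0, dt, ct, Tt, Zt, HHt, GGt, yt) {
  d <- nrow(yt); n <- ncol(yt)
  a <- matrix(a0, ncol = 1)                  # current state estimate (m * 1)
  P <- P0                                    # current state covariance (m * m)
  logLik <- 0
  for (t in seq_len(n)) {
    for (i in seq_len(d)) {
      if (is.na(yt[i, t])) next              # missing observations are skipped
      Zi <- Zt[i, , drop = FALSE]            # i-th row of Zt (1 * m)
      v  <- yt[i, t] - (ct[i] + Zi %*% a)    # prediction error v_{t,i} (scalar)
      Fi <- as.numeric(Zi %*% P %*% t(Zi)) + GGt[i]   # F_{t,i} (scalar)
      K  <- (P %*% t(Zi)) / Fi               # Kalman gain K_{t,i} (m * 1)
      a  <- a + K %*% v                      # a_{t,i+1}
      P  <- P - Fi * (K %*% t(K))            # P_{t,i+1}
      logLik <- logLik - 0.5 * (log(2 * pi) + log(Fi) + as.numeric(v)^2 / Fi)
    }
    a <- dt + Tt %*% a                       # a_{t+1,1}: transition equation
    P <- Tt %*% P %*% t(Tt) + HHt            # P_{t+1,1}
  }
  logLik
}

With the Nile inputs of Example 1 below, sp_filter_logLik(a0, P0, dt, ct, Tt, Zt, HHt, GGt, yt) should agree with fkf.SP up to numerical precision.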

Compiled Code:

fkf.SP wraps the C-functions fkf_SP, fkf_SP_verbose and fkfs_SP, which each rely upon the linear algebra subroutines of BLAS (Basic Linear Algebra Subprograms). These C-functions are called when verbose = FALSE, verbose = TRUE and smoothing = TRUE, respectively.

These compiled functions differ only in the values they return. fkfs_SP additionally performs Kalman smoothing after filtering within a single compiled C-code function.

Value

A numeric value corresponding to the log-likelihood calculated by the Kalman filter. Ideal for maximum likelihood estimation through optimization routines such as optim.

When verbose = TRUE, an object of S3 class 'fkf.SP' with the following elements is also returned, corresponding to the filtered state variables and covariances of the Kalman filter algorithm:

att A m * n-matrix containing the filtered state variables, i.e. att[,t] = a(t|t).
at A m * (n + 1)-matrix containing the predicted state variables, i.e. at[,t] = a(t).
Ptt A m * m * n-array containing the variance of att, i.e. Ptt[,,t] = P(t|t).
Pt A m * m * (n+1)-array containing the variance of at, i.e. Pt[,,t] = P(t).
yt A d * n-matrix containing the input observations.
Tt either a m * m * n or a m * m * 1-array, depending on the argument provided.
Zt either a d * m * n or a d * m * 1-array, depending on the argument provided.
Ftinv A d * n-matrix containing the inverses of the (scalar) prediction error variances.
vt A d * n-matrix containing the observation (prediction) errors.
Kt A m * d * n-array containing the Kalman gain of the state variables at each observation.
logLik The log-likelihood.

In addition to the elements above, the following elements, corresponding to the smoothed values output from Kalman smoothing, are also returned when smoothing = TRUE. The fks.SP function provides more detail regarding Kalman smoothing.

ahatt A m * n-matrix containing the smoothed state variables, i.e. ahatt[,t] = a(t|n).
Vt A m * m * n-array containing the variances of ahatt, i.e. Vt[,,t] = P(t|n).

Log-Likelihood Values:

When there are no missing observations (i.e. NA values) in argument yt, the value returned by fkf.SP and the logLik object returned within the list of function fkf are identical. When NAs are present, however, the log-likelihood values returned by fkf.SP are always higher. This is due to a downward bias in the log-likelihood values output by fkf, but it does not influence parameter estimation. Further details are available within this package's vignette.
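
As a quick check of the first claim (assuming the FKF package is installed; the parameter values are illustrative only), the two functions can be compared on the complete Nile series:

## Compare fkf.SP with FKF::fkf on complete data (no NAs); the values should agree.
yt <- rbind(datasets::Nile)
dt <- ct <- matrix(0); Zt <- Tt <- matrix(1)
a0 <- yt[1]; P0 <- matrix(100)

logLik_SP  <- fkf.SP(a0 = a0, P0 = P0, dt = dt, ct = ct, Tt = Tt, Zt = Zt,
                     HHt = matrix(1300), GGt = matrix(15000), yt = yt)
logLik_FKF <- FKF::fkf(a0 = a0, P0 = P0, dt = dt, ct = ct, Tt = Tt, Zt = Zt,
                       HHt = array(1300, c(1, 1, 1)), GGt = array(15000, c(1, 1, 1)),
                       yt = yt)$logLik
all.equal(c(logLik_SP), c(logLik_FKF))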

References

Aspinall, T. W., Harris, G., Gepp, A., Kelly, S., Southam, C., and Vanstone, B. (2022). The Estimation of Commodity Pricing Models with Applications in Capital Investments. Available Online.

Anderson, B. D. O. and Moore, J. B. (1979). Optimal Filtering. Englewood Cliffs: Prentice-Hall.

Fahrmeir, L. and Tutz, G. (1994). Multivariate Statistical Modelling Based on Generalized Linear Models. Berlin: Springer.

Koopman, S. J., Shephard, N., Doornik, J. A. (1999). Statistical algorithms for models in state space using SsfPack 2.2. Econometrics Journal, Royal Economic Society, vol. 2(1), pages 107-160.

Durbin, J. and Koopman, S. J. (2001). Time Series Analysis by State Space Methods. Oxford University Press.

Luethi, D., Erb, P. and Otziger, S. (2018). FKF: Fast Kalman Filter. R package version 0.2.3. https://CRAN.R-project.org/package=FKF

Examples


## <-------------------------------------------------------------------------------
##Example 1 - Filter a state space model - Nile data
## <-------------------------------------------------------------------------------

# Observations must be a matrix:
yt <- rbind(datasets::Nile)

## Set constant parameters:
dt <- ct <- matrix(0)
Zt <- Tt <- matrix(1)
a0 <- yt[1]   # Estimation of the first year flow
P0 <- matrix(100)       # Variance of 'a0'
## These can be estimated through MLE:
GGt <- matrix(15000)
HHt <- matrix(1300)

# 'verbose' returns the filtered values:
output <- fkf.SP(a0 = a0, P0 = P0, dt = dt, ct = ct,
               Tt = Tt, Zt = Zt, HHt = HHt, GGt = GGt,
               yt = yt, verbose = TRUE)
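
## The filtered estimates can be extracted from the returned object, and
## smoothing = TRUE additionally returns the smoothed estimates:
filtered_states <- output$att      # a(t|t), a 1 x n matrix here
filtered_vars   <- output$Ptt      # P(t|t), a 1 x 1 x n array here

output_smoothed <- fkf.SP(a0 = a0, P0 = P0, dt = dt, ct = ct,
                          Tt = Tt, Zt = Zt, HHt = HHt, GGt = GGt,
                          yt = yt, verbose = TRUE, smoothing = TRUE)
smoothed_states <- output_smoothed$ahatt   # a(t|n)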


## <-------------------------------------------------------------------------------
##Example 2 - ARMA(2,1) model estimation.
## <-------------------------------------------------------------------------------

#Length of series
n <- 1000

# ARMA parameters and innovation standard deviation
AR <- c(ar1 = 0.6, ar2 = 0.2, ma1 = -0.2, sigma = sqrt(0.2))

## Sample from an ARMA(2, 1) process
a <- stats::arima.sim(model = list(ar = AR[c("ar1", "ar2")], ma = AR["ma1"]), n = n,
innov = rnorm(n) * AR["sigma"])

## State space representation of the four ARMA parameters
arma21ss <- function(ar1, ar2, ma1, sigma) {
  Tt <- matrix(c(ar1, ar2, 1, 0), ncol = 2)
  Zt <- matrix(c(1, 0), ncol = 2)
  ct <- matrix(0)
  dt <- matrix(0, nrow = 2)
  GGt <- matrix(0)
  H <- matrix(c(1, ma1), nrow = 2) * sigma
  HHt <- H %*% t(H)
  a0 <- c(0, 0)
  P0 <- matrix(1e6, nrow = 2, ncol = 2)
  return(list(a0 = a0, P0 = P0, ct = ct, dt = dt, Zt = Zt, Tt = Tt, GGt = GGt,
              HHt = HHt))
}

## The objective function passed to 'optim'
objective <- function(theta, yt) {
  sp <- arma21ss(theta["ar1"], theta["ar2"], theta["ma1"], theta["sigma"])
  ans <- fkf.SP(a0 = sp$a0, P0 = sp$P0, dt = sp$dt, ct = sp$ct, Tt = sp$Tt,
                Zt = sp$Zt, HHt = sp$HHt, GGt = sp$GGt, yt = yt)
  return(-ans)
}

## Parameter estimation - maximum likelihood estimation:
theta <- c(ar = c(0, 0), ma1 = 0, sigma = 1)
ARMA_MLE <- optim(theta, objective, yt = rbind(a), hessian = TRUE)


## <-------------------------------------------------------------------------------
##Example 3 - Nile Model Estimation:
## <-------------------------------------------------------------------------------

#Nile's annual flow:
yt <- rbind(Nile)

## Incomplete Nile Data - two NAs are present:
yt[c(3, 10)] <- NA

## Set constant parameters:
dt <- ct <- matrix(0)
Zt <- Tt <- matrix(1)
a0 <- yt[1]   # Estimation of the first year flow
P0 <- matrix(100)       # Variance of 'a0'

## Parameter estimation - maximum likelihood estimation:
##Unknown parameters initial estimates:
GGt <- HHt <- var(c(yt), na.rm = TRUE) * .5
#Perform maximum likelihood estimation
Nile_MLE <- optim(c(HHt = HHt, GGt = GGt),
                fn = function(par, ...)
                -fkf.SP(HHt = matrix(par[1]), GGt = matrix(par[2]), ...),
                yt = yt, a0 = a0, P0 = P0, dt = dt, ct = ct,
                Zt = Zt, Tt = Tt)
## <-------------------------------------------------------------------------------
##Example 4 - Dimensionless Treering Example:
## <-------------------------------------------------------------------------------


## tree-ring widths in dimensionless units
y <- treering

## Set constant parameters:
dt <- ct <- matrix(0)
Zt <- Tt <- matrix(1)
a0 <- y[1]            # Estimation of the first width
P0 <- matrix(100)     # Variance of 'a0'

## Parameter estimation - maximum likelihood estimation:
Treering_MLE <- optim(c(HHt = var(y, na.rm = TRUE) * .5,
                 GGt = var(y, na.rm = TRUE) * .5),
                 fn = function(par, ...)
                -fkf.SP(HHt = array(par[1],c(1,1,1)), GGt = matrix(par[2]), ...),
                 yt = rbind(y), a0 = a0, P0 = P0, dt = dt, ct = ct,
                 Zt = Zt, Tt = Tt)

