Several recursive filters: the classical Kalman filter, the rLS filter, and the ACM filter

Description

These functions are (preliminary) interfaces that produce recursive filters for a given series of observations from a time-invariant, linear, Gaussian state space model (ti-l-G-SSM).

Usage

recursiveFilter(Y, a, S, F, Q, Z, V, initSc = .cKinitstep, predSc = .cKpredstep,
                   corrSc = .cKcorrstep,
                   initSr = NULL, predSr = NULL, corrSr = NULL,
                   nsim = 0, seed = NULL, ..., dropRuns = TRUE,
                   CovRunDep = FALSE, saveOpt = TRUE, dimsCheck = NULL)
KalmanFilter(Y, a, S, F, Q, Z, V, dropRuns = TRUE)
rLSFilter(Y, a, S, F, Q, Z, V, b, norm=EuclideanNorm, dropRuns = TRUE)
rLS.AO.Filter(Y, a, S, F, Q, Z, V, b, norm=EuclideanNorm, dropRuns = TRUE)
rLS.IO.Filter(Y, a, S, F, Q, Z, V, b, norm=EuclideanNorm, dropRuns = TRUE)
ACMfilter(Y, a, S, F, Q, Z, V, s0, psi,  apsi, bpsi, cpsi, flag, dropRuns = TRUE)

Arguments

a

mean of the initial state, in matrix form p x runs for p the state dimension and runs the number of paths to be followed in parallel.

S

initial state covariance (see below), in matrix form p x p for p the state dimension.

Z

observation matrices (see below), in array form q x p x t for q and p the observation and state dimensions, respectively, and t the length of the observation series.

V

observation error covariances (see below), in array form q x q x t for q the observation dimension and t the length of the observation series.

F

innovation transition matrices (see below), in array form p x p x t for p the state dimension and t the length of the observation series.

Q

innovation covariances (see below), in array form p x p x t for p the state dimension and t the length of the observation series.

b

clipping height b for the rLS filter

norm

a function taking a numeric vector x as its first argument and returning a norm of x (not necessarily, but by default, the Euclidean norm); used by the rLS filter to decide which corrections are "too large"

Y

observations y_t, in array form q x runs x t for t the length of the observation series, q the observation dimension and runs the number of paths to be followed in parallel.

s0

scale of nominal Gaussian component of additive noise

psi

influence function to be used (default: Hampel's ψ function, currently the only one available)

apsi,bpsi,cpsi

tuning constants for Hampel's ψ function (defaults: apsi = bpsi = 2.5, cpsi = 5.0)

flag

character, if "weights" (default), use ψ(t)/t to calculate the weights; if "deriv", use ψ'(t)

initSc

a function with first arguments a and S to produce the classical (non robust) initial filter value x_{0|0}

initSr

either NULL or a function with first arguments a and S to produce a robust initial filter value x_{0|0}

predSc

a function with first arguments x0=x_{t-1|t-1} and S0=S_{t-1|t-1}, F, and Q to produce the classical (non robust) prediction value x_{t|t-1}

predSr

either NULL or a function with first arguments x0=x_{t-1|t-1} and S0=S_{t-1|t-1}, F, and Q to produce a robust prediction value x_{t|t-1}

corrSc

a function with first arguments y=y_t, x1=x_{t|t-1} and S1=S_{t|t-1}, Z, and V to produce the classical (non robust) correction value x_{t|t}

corrSr

either NULL or a function with first arguments y=y_t, x1=x_{t|t-1} and S1=S_{t|t-1}, Z, and V to produce a robust correction value x_{t|t}

nsim

integer; if positive, nsim paths are simulated (according to the ideal model) to obtain empirical covariances

seed

seed for the simulations

dropRuns

logical; shall the run dimension be collapsed if it is one?

CovRunDep

logical; shall covariances be run-dependent?

saveOpt

logical; either a single value or a (named) vector of length 4: shall we store the Kalman gain, Cov(Delta y_t), Delta y_t, ...?

dimsCheck

logical or NULL; shall we check dimensions?

...

further arguments to the "step"-functions
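To make the expected shapes concrete, the array-valued arguments above can be assembled as in the following minimal sketch, using hypothetical dimensions p = 2, q = 1, runs = 1 and t = 50; all object names here are illustrative choices, not package defaults:

```r
# Illustrative shapes only -- p, q, runs and tlen are hypothetical choices.
p <- 2; q <- 1; runs <- 1; tlen <- 50
a <- matrix(0, p, runs)                # initial state mean, p x runs
S <- diag(p)                           # initial state covariance, p x p
# note: binding F here masks base R's F shortcut for FALSE,
# mirroring the package's own argument naming
F <- array(diag(p), c(p, p, tlen))     # transition matrices, p x p x t
Q <- array(diag(p), c(p, p, tlen))     # innovation covariances, p x p x t
Z <- array(1, c(q, p, tlen))           # observation matrices, q x p x t
V <- array(1, c(q, q, tlen))           # observation error covariances, q x q x t
Y <- array(rnorm(q * runs * tlen), c(q, runs, tlen))  # observations, q x runs x t
```

Since array() recycles its data argument, each p x p slice of F and Q above equals diag(p).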

Details

We work in the setup of the time-invariant, linear, Gaussian state space model (ti-l-G-SSM) with p dimensional states x_t and q dimensional observations y_t, with initial condition

x_0 ~ N_p(a,S),

state equation

x_t = F_t x_{t-1} + v_t, v_t ~ N_p(0,Q_t), t>=1,

observation equation

y_t = Z_t x_t + e_t, e_t ~ N_q(0,V_t), t>=1,

and where all random variables x_0, v_t, e_t are independent.

For notation, let us formulate the classical Kalman filter in this context:

(0) initial step

x_{0|0} = a

with error covariance

S_{0|0} = Cov(x_0-x_{0|0}) = S

(1) prediction step

x_{t|t-1} = F_t x_{t-1|t-1}, t>=1

with error covariance

S_{t|t-1} = Cov(x_t-x_{t|t-1}) = F_t S_{t-1|t-1} F_t' + Q_t

(2) correction step

x_{t|t} = x_{t|t-1} + K_t (y_t - Z_t x_{t|t-1}), t>=1

for Kalman gain

K_t = S_{t|t-1} Z_t' (Z_t S_{t|t-1} Z_t' + V_t )^-

with error covariance

S_{t|t} = Cov(x_t-x_{t|t}) = S_{t|t-1} - K_t Z_t S_{t|t-1}
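Steps (0) to (2) above can be sketched in a few lines of self-contained R, for a single path with time-constant F, Q, Z, V; the function classicKalman below is purely illustrative and is not the package implementation:

```r
# Minimal sketch of steps (0)-(2) for one path with time-constant
# F, Q, Z, V (all plain matrices); illustrative only.
classicKalman <- function(Y, a, S, F, Q, Z, V) {
  p    <- length(a)
  tlen <- ncol(Y)
  Xf <- matrix(0, p, tlen + 1)          # x_{t|t},   t = 0, ..., T
  Xp <- matrix(0, p, tlen)              # x_{t|t-1}, t = 1, ..., T
  Xf[, 1] <- a                          # (0) x_{0|0} = a, S_{0|0} = S
  S0 <- S
  for (i in seq_len(tlen)) {
    x1 <- F %*% Xf[, i]                 # (1) prediction step
    S1 <- F %*% S0 %*% t(F) + Q
    K  <- S1 %*% t(Z) %*% solve(Z %*% S1 %*% t(Z) + V)   # Kalman gain K_t
    Xf[, i + 1] <- x1 + K %*% (Y[, i] - Z %*% x1)        # (2) correction step
    Xp[, i] <- x1
    S0 <- S1 - K %*% Z %*% S1           # filter error covariance S_{t|t}
  }
  list(Xf = Xf, Xp = Xp, S = S0)
}
```

As a sanity check: with Z the identity and V = 0, the gain is the identity and the filter reproduces the observations exactly.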

Value

Xf:

the series x_{t|t} filtered by the classical filter — a matrix with dimensions p x (t+1)

Xp:

the series x_{t|t-1} predicted by the classical filter — a matrix with dimensions p x t

Xrf:

if any of the arguments initSr, predSr, corrSr is not NULL: the series x_{t|t} filtered by the robust filter — a matrix with dimensions p x (t+1); else NULL

Xrp:

if any of the arguments initSr, predSr, corrSr is not NULL: the series x_{t|t-1} predicted by the robust filter — a matrix with dimensions p x t; else NULL

S0:

the series S_{t|t} of filter error covariances produced by the classical filter — an array with dimensions p x p x (t+1)

S1:

the series S_{t|t-1} of prediction error covariances produced by the classical filter — an array with dimensions p x p x t

KG:

the series K_{t} of Kalman gains produced by the classical filter — an array with dimensions q x p x t

Delta:

the series Delta_t of covariances of Delta y_t produced by the classical filter — an array with dimensions q x q x t

DeltaY:

the series Delta y_t of observation residuals produced by the classical filter — an array with dimensions q x runs x t

Sr0:

if any of the arguments initSr, predSr, corrSr is not NULL: the series S_{t|t} of filter error covariances produced by the robust filter — an array with dimensions p x p x (t+1)

Sr1:

if any of the arguments initSr, predSr, corrSr is not NULL: the series S_{t|t-1} of prediction error covariances produced by the robust filter — an array with dimensions p x p x t

KGr:

if any of the arguments initSr, predSr, corrSr is not NULL: the series K_{t} of Kalman gains produced by the robust filter — an array with dimensions q x p x t

Deltar:

the series Delta_t of covariances of Delta y_t produced by the robust filter — an array with dimensions q x q x t

DeltaYr:

the series Delta y_t of observation residuals produced by the robust filter — an array with dimensions q x runs x t

IndIO:

if predSr is not NULL: the indicator showing when the robust predictor uses clipping — a vector of length t; else NULL

IndAO:

if corrSr is not NULL: the indicator showing when the robust filter uses clipping — a vector of length t; else NULL

rob0L:

if any of the arguments initSr, predSr, corrSr is not NULL: a list of length t+1 with the recursively produced values of rob0 — e.g. in case of the ACM filter each element contains a corresponding value of st

rob1L:

if any of the arguments initSr, predSr, corrSr is not NULL: a list of length t+1 with the recursively produced values of rob1 — e.g. in case of the ACM filter each element contains a corresponding value of st

KalmanFilter(Y, a, S, F, Q, Z, V) is a wrapper to recursiveFilter(Y, a, S, F, Q, Z, V).
rLS.AO.Filter(Y, a, S, F, Q, Z, V, b, norm=EuclideanNorm) and (synonymously) rLSFilter(Y, a, S, F, Q, Z, V, b, norm=EuclideanNorm) are wrappers to recursiveFilter(Y, a, S, F, Q, Z, V, initSc=.cKinitstep, predSc=.cKpredstep, corrSc=.cKcorrstep, initSr=.cKinitstep, predSr=.cKpredstep, corrSr=.rLScorrstep, b=b, norm=norm).
rLS.IO.Filter(Y, a, S, F, Q, Z, V, b, norm=EuclideanNorm) is a wrapper to recursiveFilter(Y, a, S, F, Q, Z, V, initSc=.cKinitstep, predSc=.cKpredstep, corrSc=.cKcorrstep, initSr=.cKinitstep, predSr=.cKpredstep, corrSr=.rLS.IO.corrstep, b=b, norm=norm).
ACMfilter(Y, a, S, F, Q, Z, V, s0, psi, apsi, bpsi, cpsi, flag) is a wrapper to recursiveFilter(Y, a, S, F, Q, Z, V, initSc=.cKinitstep, predSc=.cKpredstep, corrSc=.cKcorrstep, initSr=.cKinitstep, predSr=.ACMpredstep, corrSr=.ACMcorrstep, s0, apsi, bpsi, cpsi, flag).
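For the ACM filter arguments psi, apsi, bpsi and cpsi, the three-part redescending Hampel ψ function can be sketched as follows; the name hampelPsi is hypothetical, and the defaults mirror apsi = bpsi = 2.5, cpsi = 5.0 (with apsi = bpsi the middle plateau is empty):

```r
# Hampel's three-part redescending psi function; hampelPsi is a
# hypothetical name, with tuning constants a <= b <= c corresponding
# to apsi, bpsi, cpsi (defaults 2.5, 2.5, 5.0).
hampelPsi <- function(t, a = 2.5, b = 2.5, c = 5.0) {
  at <- abs(t)
  ifelse(at <= a, t,                                  # linear part
    ifelse(at <= b, a * sign(t),                      # plateau
      ifelse(at <= c, a * sign(t) * (c - at) / (c - b),  # redescending part
        0)))                                          # rejection beyond c
}
```

With flag = "weights", the filter uses ψ(t)/t as a weight, which is 1 on the linear part and decays to 0 beyond c.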

Author(s)

Peter Ruckdeschel Peter.Ruckdeschel@itwm.fraunhofer.de,
Bernhard Spangl bernhard.spangl@boku.ac.at

References

Martin, R.D. (1979): Approximate Conditional-mean Type Smoothers and Interpolators.
Martin, R.D. (1981): Robust Methods for Time Series.
Martin, R.D. and Thomson, D.J. (1982): Robust-Resistant Spectrum Estimation.
Ruckdeschel, P. (2001): Ansätze zur Robustifizierung des Kalman-Filters. Bayreuther Mathematische Schriften, Vol. 64.

See Also

internalrLS, internalKalman, calibrateRLS, utilitiesrobKalman, internalACM, internalpsi

Examples

require(robKalman)

a0   <- c(1, 0)
SS0  <- matrix(0, 2, 2)
F0   <- matrix(c(.7, 0.5, 0.2, 0), 2, 2)
Q0   <- matrix(c(2, 0.5, 0.5, 1), 2, 2)
TT   <- 100

Z0   <- matrix(c(1, -0.5), 1, 2)
V0i  <- 1
m0c  <- -30
V0c  <- 0.1
ract <- 0.1

X  <- simulateState( a = a0, S = SS0, F = F0, Qi = Q0, tt = TT)
Y  <- simulateObs(X = X, Z = Z0, Vi = V0i, mc = m0c, Vc = V0c, r = ract)
SS <- limitS(S = SS0, F = F0, Q = Q0, Z = Z0, V = V0i)


### calibration of the clipping height b
# by efficiency in the ideal model
# efficiency  =  0.9
(B1 <- rLScalibrateB(eff = 0.9, S = SS, Z = Z0, V = V0i))
# by contamination radius
# r  =  0.1
(B2 <- rLScalibrateB(r = 0.1, S = SS, Z = Z0, V = V0i))

# IO-filter
# by efficiency in the ideal model
# efficiency  =  0.9
(B1.IO <- rLScalibrateB(eff = 0.9, S = SS, Z = Z0, V = V0i, IO = TRUE))
# by contamination radius
# r  =  0.1
(B2.IO <- rLScalibrateB(r = 0.1, S = SS, Z = Z0, V = V0i, IO = TRUE))

erg <- KalmanFilter(Y, a = a0, S = SS0, F = F0, Q = Q0, Z = Z0, V = V0i)

rerg1 <- rLSFilter(Y, a = a0, S = SS0, F = F0, Q = Q0, Z = Z0, V = V0i, b = B1$b)
rerg2 <- rLSFilter(Y, a = a0, S = SS0, F = F0, Q = Q0, Z = Z0, V = V0i, b = B2$b)

rerg1.IO <- rLS.IO.Filter(Y, a = a0, S = SS0, F = F0, Q = Q0, Z = Z0, V = V0i, b = B1.IO$b)
rerg2.IO <- rLS.IO.Filter(Y, a = a0, S = SS0, F = F0, Q = Q0, Z = Z0, V = V0i, b = B2.IO$b)

mean((X[,1,] - rerg1$Xf)^2) ### empirical MSE of the filters considered
mean((X[,1,] - rerg1$Xrf)^2)
mean((X[,1,] - rerg2$Xrf)^2)
### not surprisingly, the IO-robust filter is not a good idea in an AO situation
mean((X[,1,] - rerg1.IO$Xrf)^2)
mean((X[,1,] - rerg2.IO$Xrf)^2)