kfLGSSM | R Documentation
Under a Linear Gaussian State Space Model (LGSSM) specification, the function runs an (exact) Kalman forward (marginal) filter, computes the prediction, marginal and joint smoothing densities (marginal smoothing is sometimes referred to as the Rauch-Tung-Striebel, or RTS, smoother), and computes the (log-)likelihood.
kfLGSSM(
yObs,
uReg = NULL,
wReg = NULL,
A = NULL,
B = NULL,
C = NULL,
D = NULL,
Q = NULL,
R = NULL,
initX = NULL,
initP = NULL,
initU = NULL,
nSimMF = 1,
nSimPD = 1,
nSimMS = 1,
nSimJS = 1,
computeMFD = TRUE,
computePDD = FALSE,
computeMSD = FALSE,
computeJSD = FALSE,
computeLLH = FALSE
)
yObs: A matrix or vector of measurements (observations), one row (or element) per time period t = 1, ..., T.

uReg: Matrix (vector) of regressors for the latent state process, one row per time period.

wReg: Matrix (vector) of regressors for the measurement process, one row per time period.

A: Parameter (or system) matrix of the state transition.

B: Parameter (or system) matrix of the state-process regressors.

C: Parameter (or system) matrix of the measurement equation.

D: Parameter (or system) matrix of the measurement-process regressors.

Q: Error VCM of the state process.

R: Error VCM of the measurement process.

initX: Initial value for the state process (think of it as x_0).

initP: VCM initialization for the state process; if not specified, the prior covariance matrix under the stationary distribution of the latent states is used (see Details).

initU: Initial values of the regressors of the state process specification; required whenever initX is missing.

nSimMF: Number of forward filtering simulation runs; defaults to 1.

nSimPD: Number of prediction simulation runs; defaults to 1.

nSimMS: Number of marginal smoothing simulation runs; defaults to 1.

nSimJS: Number of joint smoothing (backward simulation) runs; defaults to 1.

computeMFD: Logical; if TRUE, the marginal filtering densities are computed.

computePDD: Logical; if TRUE, the prediction densities are computed.

computeMSD: Logical; if TRUE, the marginal smoothing densities are computed.

computeJSD: Logical; if TRUE, the joint smoothing density is computed.

computeLLH: Logical; if TRUE, the log-likelihood is computed.
The amount of computation can be controlled via the computeXXX flags (e.g. run only a forward filter and the log-likelihood computation, skipping prediction and the smoothers). If required, the nSimXX-type arguments set the number of simulations from the above densities; the defaults are set to 1, but if set to 0, no simulation is performed and only the means and variances of the Gaussian distributions of the computeXXX quantities are returned.
The specification of the LGSSM is of the following form:

x_t = Ax_{t-1} + Bu_t + v_t
y_t = Cx_t + Dw_t + \varepsilon_t\;,

with serially uncorrelated state innovations v_t\sim\mathcal{N}(0, Q) and measurement errors \varepsilon_t\sim\mathcal{N}(0, R), for t=1,\ldots,T. The argument names match the previous equations. If an argument is missing, the corresponding component is dropped (i.e. set to zero), and forward filtering, prediction and backward smoothing are run under this specification as usual.
There can be an initial state (vector) value passed via initX, as well as an initial "prediction" VCM matrix at period t=0 via initP, for initialization of the algorithm. If these are not provided, the stationary prior is used as a default initializer, having the following form:

x_0\sim\mathcal{N}\left(\left(I-A\right)^{-1}Bu_0,\left(I-A\right)^{-1}Q\left[\left(I-A\right)^{-1}\right]^{\top}\right)\;,

where I is the identity matrix and u_0 is initU. In the univariate case (length(initX) = 1), the prior reduces to

x_0\sim\mathcal{N}\left(\frac{B\cdot u_0}{1-A}, \frac{Q}{1-A^2}\right)\;,

where B\cdot u_0 is the "dot" or scalar product if the corresponding number of u-type regressors is greater than 1.

Note that the above specification requires an initial regressor (vector) value initU if initX is missing, but initU can be dropped if initX is provided.
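As a small sketch, the stationary prior initializer from the formula above can be computed as follows (illustrative R code, not part of the package; the helper name stationary_prior is hypothetical, and conformable matrix dimensions force the ordering (I-A)^{-1} B u_0 for the mean):

```r
# Hypothetical helper (not part of the package): stationary prior
# mean (I - A)^{-1} B u_0 and VCM (I - A)^{-1} Q [(I - A)^{-1}]'
stationary_prior <- function(A, B, Q, u0) {
  n  <- nrow(A)
  IA <- solve(diag(n) - A)          # (I - A)^{-1}
  list(mean = IA %*% B %*% u0,      # prior mean of x_0
       var  = IA %*% Q %*% t(IA))   # prior VCM of x_0
}
```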
For t=0, compute \hat{x}_{0|0} and \hat{P}_{0|0} according to:

\hat{x}_{0|0} = \left(I-A\right)^{-1}Bu_0
\hat{P}_{0|0} = \left(I-A\right)^{-1}Q\left[\left(I-A\right)^{-1}\right]^{\top}

For t=1,\ldots,T, compute K_t and L_t:

K_t = \hat{P}_{t|t-1} C^{\top}
L_t = \left[C\hat{P}_{t|t-1} C^{\top} + R\right]^{-1}

with \hat{x}_{t|t-1} and \hat{P}_{t|t-1} as specified below under prediction.

For marginal filtering:

\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_tL_t\left(y_t - C\hat{x}_{t|t-1}-Dw_t\right)
\hat{P}_{t|t} = \hat{P}_{t|t-1} - K_t L_t K_t^{\top}

For prediction:

\hat{x}_{t|t-1} = A\hat{x}_{t-1|t-1} + Bu_{t-1}
\hat{P}_{t|t-1} = A\hat{P}_{t-1|t-1} A^{\top} + Q

These recursions are implemented in kfMFPD.
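The filtering and prediction recursions above can be sketched in R for the univariate case (a minimal illustration, not the package's kfMFPD; the regressor terms Bu_t and Dw_t are dropped, and the function name kalman_forward is hypothetical):

```r
# Minimal univariate sketch of the forward filter/prediction recursions
# (illustrative only; regressor components are dropped, i.e. set to zero).
kalman_forward <- function(y, A, C, Q, R, x0, P0) {
  nT <- length(y)
  xp <- numeric(nT); Pp <- numeric(nT)  # predicted x_{t|t-1}, P_{t|t-1}
  xf <- numeric(nT); Pf <- numeric(nT)  # filtered  x_{t|t},   P_{t|t}
  x <- x0; P <- P0
  for (t in seq_len(nT)) {
    # prediction step
    xp[t] <- A * x
    Pp[t] <- A * P * A + Q
    # gain terms K_t and L_t
    K <- Pp[t] * C
    L <- 1 / (C * Pp[t] * C + R)
    # marginal filtering update
    xf[t] <- xp[t] + K * L * (y[t] - C * xp[t])
    Pf[t] <- Pp[t] - K * L * K
    x <- xf[t]; P <- Pf[t]
  }
  list(xf = xf, Pf = Pf, xp = xp, Pp = Pp)
}
```

In the multivariate case the scalar products become matrix products and 1/( ) becomes solve( ).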
To obtain the marginal smoothing densities p\left(x_t|y_{1:T}\right) for all t=1,\ldots,T, a complete Kalman forward filtering and prediction pass is needed. The marginal smoothing density for period T equals the marginal filtering density for the same period, i.e. after the Kalman filtering and prediction pass, \hat{x}_{T|T} and \hat{P}_{T|T} for p\left(x_T|y_{1:T}\right) are already given. To compute the other marginals, the following recursion is run backwards in time.

For t=T-1,\ldots,1, compute:

J_t=\hat{P}_{t|t}A^{\top}\hat{P}_{t+1|t}^{-1}
\hat{x}_{t|T}=\hat{x}_{t|t} + J_t \left(\hat{x}_{t+1|T}-\hat{x}_{t+1|t}\right)
\hat{P}_{t|T}=\hat{P}_{t|t} + J_t\left(\hat{P}_{t+1|T} - \hat{P}_{t+1|t}\right)J_t^{\top}

Each marginal smoothing density is then given via p\left(x_t|y_{1:T}\right)=\mathcal{N}\left(x_t|\hat{x}_{t|T},\hat{P}_{t|T}\right).

These recursions are implemented in kfMSD.
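The backward recursion above can be sketched in R for the univariate case, given the arrays from a forward filtering/prediction pass (illustrative only, not the package's kfMSD; the function name rts_smoother is hypothetical):

```r
# Minimal univariate RTS backward sweep (illustrative only; assumes T >= 2).
# xf, Pf: filtered means/variances; xp, Pp: predicted means/variances.
rts_smoother <- function(xf, Pf, xp, Pp, A) {
  nT <- length(xf)
  xs <- numeric(nT); Ps <- numeric(nT)
  xs[nT] <- xf[nT]; Ps[nT] <- Pf[nT]   # smoother starts at the last filter step
  for (t in (nT - 1):1) {
    J <- Pf[t] * A / Pp[t + 1]         # smoother gain J_t
    xs[t] <- xf[t] + J * (xs[t + 1] - xp[t + 1])
    Ps[t] <- Pf[t] + J^2 * (Ps[t + 1] - Pp[t + 1])
  }
  list(xs = xs, Ps = Ps)
}
```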
Backward simulation is a strategy for generating realizations of a whole trajectory \tilde{x}_{0:T} from the joint smoothing density p\left(x_{0:T}|y_{1:T}\right):

Draw nSimJS samples \tilde{x}_T\sim p\left(x_T|y_{1:T}\right)=\mathcal{N}\left(x_T|\hat{x}_{T|T},\hat{P}_{T|T}\right), i.e. from the last iteration of the forward filter.

Draw nSimJS samples backwards in time for t=T-1,\ldots,0:

\tilde{x}_t\sim p\left(x_t|\tilde{x}_{t+1},y_{1:t}\right)=\mathcal{N}\left(x_t|\hat{\mu}_{t},L_t\right)

with means and variances obtained via the following recursions:

J_t=\hat{P}_{t|t}A^{\top}\left(A\hat{P}_{t|t}A^{\top} + Q\right)^{-1}
\mu_t=\hat{x}_{t|t} + J_t \left(\tilde{x}_{t+1} - Bu_t - A\hat{x}_{t|t}\right)
L_t=\hat{P}_{t|t} - J_t A\hat{P}_{t|t}

After a complete backward sweep, the nSimJS backward trajectories \tilde{x}_{0:T} are (by construction) realizations from the above joint smoothing density. These recursions are implemented in kfJSD.
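A single backward-simulation sweep can be sketched in R for the univariate case (illustrative only, not the package's kfJSD; the function name backward_sim is hypothetical and the regressor term Bu_t is dropped):

```r
# Minimal univariate backward-simulation sweep (illustrative only;
# regressor term dropped, assumes T >= 2).
# xf, Pf: filtered means/variances from a forward pass.
backward_sim <- function(xf, Pf, A, Q) {
  nT <- length(xf)
  draw <- numeric(nT)
  # initialise from the last filtering density N(xf[T], Pf[T])
  draw[nT] <- rnorm(1, mean = xf[nT], sd = sqrt(Pf[nT]))
  for (t in (nT - 1):1) {
    J  <- Pf[t] * A / (A * Pf[t] * A + Q)   # gain J_t
    mu <- xf[t] + J * (draw[t + 1] - A * xf[t])
    L  <- Pf[t] - J * A * Pf[t]             # conditional variance L_t
    draw[t] <- rnorm(1, mean = mu, sd = sqrt(L))
  }
  draw
}
```

Calling this nSimJS times yields nSimJS trajectories from the joint smoothing density.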
The data likelihood (when logarithmized, sometimes simply referred to as the log-likelihood), with \theta=\left(A, B, C, D, Q, R\right), is defined as

p\left(y_{1:T}|\theta\right)=\prod_{t=1}^{T} \int p\left(y_t|\theta,x_t\right)p\left(x_t|y_{1:(t-1)}\right)dx_t=\prod_{t=1}^{T}\mathcal{N}\left(y_t|C\hat{x}_{t|t-1}+Dw_t, C\hat{P}_{t|t-1}C^{\top} + R\right)

with the logarithmic version given as

\log p\left(y_{1:T}\right) = \sum_{t=1}^T\log\mathcal{N}\left(y_t|C\hat{x}_{t|t-1}+Dw_t, C\hat{P}_{t|t-1}C^{\top} + R\right)
= -\frac{Tn_y}{2}\log\left(2\pi\right) + \sum_{t=1}^T\left(-\frac{1}{2} \log\det\lbrace\Lambda_t\rbrace - \frac{1}{2} \left(y_t-C\hat{x}_{t|t-1}-Dw_t\right)^{\top}\Lambda_t^{-1}\left(y_t-C\hat{x}_{t|t-1}-Dw_t\right)\right)

with \Lambda_t = C\hat{P}_{t|t-1}C^{\top} + R. These computations are implemented in kfLLH.
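The prediction-error decomposition above can be sketched in R for the univariate case (illustrative only, not the package's kfLLH; the function name kalman_loglik is hypothetical and regressor terms are dropped):

```r
# Minimal univariate log-likelihood via the prediction-error decomposition
# (illustrative only; regressor components dropped).
kalman_loglik <- function(y, A, C, Q, R, x0, P0) {
  llh <- 0
  x <- x0; P <- P0
  for (t in seq_along(y)) {
    xp  <- A * x; Pp <- A * P * A + Q    # prediction
    Lam <- C * Pp * C + R                # innovation variance Lambda_t
    # accumulate log N(y_t | C x_{t|t-1}, Lambda_t)
    llh <- llh + dnorm(y[t], mean = C * xp, sd = sqrt(Lam), log = TRUE)
    # filtering update before the next step
    K <- Pp * C / Lam
    x <- xp + K * (y[t] - C * xp)
    P <- Pp - K * C * Pp
  }
  llh
}
```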
A named list of 5:

kfMFDout: if computeMFD=TRUE, the output produced by kfMFPD with PDSTORE=FALSE (or NULL if computeMFD=FALSE)

kfMPDout: if computePDD=TRUE, the output produced by kfMFPD with PDSTORE=TRUE (or NULL if computePDD=FALSE)

kfMSDout: if computeMSD=TRUE, the output produced by kfMSD (or NULL if computeMSD=FALSE)

kfJSDout: if computeJSD=TRUE, the output produced by kfJSD (or NULL if computeJSD=FALSE)

kfLLHout: if computeLLH=TRUE, the output produced by kfLLH (or NULL if computeLLH=FALSE)
If simulated data are required, the function dataGenLGSSM can be used to simulate from an LGSSM. To use real data that comes with this package, call data(IBMdata); see IBMdata.