Solved – Square root algorithm (Kalman Filter)

Tags: covariance, kalman-filter, matrix-decomposition, r

I am interested in implementing a Kalman filtering and smoothing procedure in R without relying on existing (and excellent) packages such as dlm. In doing so I run (not too surprisingly) into numerical problems when computing the covariances of the filtering densities (and therefore also all connected covariances). To make sure I do not confuse notation: I am working on implementing the following Kalman filtering equations, based on the linear state space model with state equation
$$a_t = c_t + T_t a_{t-1}+\eta_t $$
and the measurement equation
$$y_t = Z_ta_t + \varepsilon_t$$
where $\eta_t \sim N(0,Q_t)$ and $\varepsilon_t\sim N(0,H_t)$.

The Kalman filter is based on the

Time update (prediction) equations:
$$a_{t|t-1} = T_t a_{t-1} + c_t \\
P_{t|t-1} = T_t P_{t-1} T_t ' + Q_t \\
F_t = Z_tP_{t|t-1}Z_t ' + H_t$$
and the measurement update equations:
$$a_t = a_{t|t-1}+P_{t|t-1}Z_t ' F_t^{-1}(y_t - Z_ta_{t|t-1}) \\ P_t = P_{t|t-1} - P_{t|t-1}Z_t' F_t^{-1}Z_tP_{t|t-1}$$
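For reference, one recursion of these equations can be sketched as follows (in Python/NumPy for concreteness; the question targets R, but the algebra translates directly, and all function and variable names here are mine):

```python
import numpy as np

def kf_step(a_prev, P_prev, y, T, c, Q, Z, H):
    """One Kalman recursion in the notation above (illustrative naming)."""
    # Time update (prediction)
    a_pred = T @ a_prev + c
    P_pred = T @ P_prev @ T.T + Q
    # Innovation covariance
    F = Z @ P_pred @ Z.T + H
    # Measurement update
    K = P_pred @ Z.T @ np.linalg.inv(F)   # Kalman gain
    a = a_pred + K @ (y - Z @ a_pred)
    P = P_pred - K @ Z @ P_pred           # this form is the numerically fragile one
    return a, P
```

The last line is exactly the covariance update whose instability motivates the question.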

Computing these quantities in, for instance, R is not a problem at all; however, the evaluation of $P_t$ suffers from numerical instability. I would therefore like to implement the square root covariance filter (see Anderson and Moore, Chapter 6, or (this is what I would like to understand) this tutorial, p. 3).
The essential idea behind the algorithm is to propagate a factor $S_t$ with $P_t = S_t S_t'$, as such a product is always symmetric and nonnegative definite.
In the paper, it is shown that one can replace the time update equation by constructing an orthogonal matrix $G$ and an upper triangular matrix $M$ such that
$$\begin{pmatrix}M \\0 \end{pmatrix} = G \begin{pmatrix}S_{t-1}'T' \\(Q^{1/2})'\end{pmatrix}.$$
In this case, $M$ is a possible choice for $S_{t|t-1}$.

How can I find these matrices $M$ and $G$?

Best Answer

My first guess would be that the problems you run into do not require the use of a square-root formulation, but simply another formulation of the covariance update step. The usual problem everybody runs into is that the covariance matrices become asymmetric or indefinite. This is easily fixed by, e.g., the Joseph stabilized covariance update, see, e.g., Dan Simon, Optimal State Estimation, p. 129:

$$P_t = (I - K_tZ_t)P_{t|t-1}(I - K_tZ_t)' + K_tH_tK_t', \qquad K_t = P_{t|t-1}Z_t'F_t^{-1}$$
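A minimal sketch of the Joseph stabilized measurement update in Python/NumPy (function and variable names are mine):

```python
import numpy as np

def joseph_update(a_pred, P_pred, y, Z, H):
    # Joseph stabilized measurement update: algebraically identical to
    # P = P_pred - K Z P_pred, but symmetric and nonnegative definite
    # by construction, even under round-off.
    F = Z @ P_pred @ Z.T + H
    K = P_pred @ Z.T @ np.linalg.inv(F)
    a = a_pred + K @ (y - Z @ a_pred)
    I_KZ = np.eye(len(a_pred)) - K @ Z
    P = I_KZ @ P_pred @ I_KZ.T + K @ H @ K.T
    return a, P
```

The extra matrix products cost a little more than the short form, which is the usual price of the added robustness.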

If you do not want to use the Joseph stabilized update for whatever reason, a classical hack for ensuring at least symmetry is to do cov_plus = (cov_plus + cov_plus')/2 after the covariance update. This alone already fixes many problems.
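A tiny illustration of that hack in Python/NumPy (the matrix here is made up):

```python
import numpy as np

# Averaging P with its transpose removes round-off asymmetry.
P = np.array([[2.0, 0.3 + 1e-13],
              [0.3, 1.0]])
P = (P + P.T) / 2   # now exactly symmetric
```

Note this only enforces symmetry, not positive definiteness, which is why the Joseph form is the more thorough fix.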

I can't remember where I read this, but as I understand it, square-root implementations nowadays are really only useful when you have a limited-precision environment, i.e., some low-precision microcontroller or something like that.
