Log-Likelihood Parameter Estimation for the Linear Gaussian Kalman Filter

algorithms, kalman filter, likelihood, maximum likelihood

I have written some code that can do Kalman filtering (using a number of different Kalman-type filters, e.g. the Information Filter) for Linear Gaussian State Space Analysis for an n-dimensional state vector. The filters work great and I am getting some nice output. However, parameter estimation via the log-likelihood is confusing me. I am not a statistician but a physicist, so please be kind.

Let us consider the linear Gaussian State Space model

$$y_t = \mathbf{Z}_{t}\alpha_{t} + \epsilon_{t},$$
$$\alpha_{t + 1} = \mathbf{T}_{t}\alpha_{t} + \mathbf{R}_{t}\eta_{t},$$

where $y_{t}$ is our observation vector and $\alpha_{t}$ our state vector at time step $t$. The quantities in bold are the transformation matrices of the state space model, which are set according to the characteristics of the system under consideration. We also have

$$\epsilon_{t} \sim NID(0, \mathbf{H}_{t}),$$
$$\eta_{t} \sim NID(0, \mathbf{Q}_{t}),$$
$$\alpha_{1} \sim NID(a_{1}, \mathbf{P}_{1}).$$

where $t = 1,\ldots, n$. Now, I have derived and implemented the recursion for the Kalman filter for this generic state space model. By guessing the initial parameters and the variance matrices $\mathbf{H}_{1}$ and $\mathbf{Q}_{1}$, I can produce plots like

[Figure: Kalman Filter]

where the points are the Nile River water levels for Jan over 100 years, the line is the Kalman-estimated state, and the dashed lines are the 90% confidence intervals.

Now, for this 1D data set the matrices $\mathbf{H}_{t}$ and $\mathbf{Q}_{t}$ are just the scalars $\sigma^{2}_{\epsilon}$ and $\sigma^{2}_{\eta}$ respectively. So now I want to get the correct values of these scalars using the output from the Kalman filter and the log-likelihood function

$$\log L(Y_{n}) = -\frac{np}{2}\log(2\pi) - \frac{1}{2}\sum^{n}_{t = 1}\left(\log|\mathbf{F}_{t}| + v^{\mathsf{T}}_{t}\mathbf{F}_{t}^{-1}v_{t}\right),$$

where $v_{t}$ is the one-step prediction error (the innovation), $\mathbf{F}_{t}$ is its variance, and $p$ is the dimension of the observation vector (so $p = 1$ here). Now, here's where I am confused. From the Kalman filter, I have all the information I need to evaluate $\log L$, but this seems to get me no closer to being able to find the maximum likelihood estimates of $\sigma^{2}_{\epsilon}$ and $\sigma^{2}_{\eta}$. My question is: how can I calculate the maximum likelihood estimates of $\sigma^{2}_{\epsilon}$ and $\sigma^{2}_{\eta}$ using the log-likelihood approach and the equation above? An algorithmic breakdown would be like a cold beer to me right now…

Thanks for your time.


Note. For the 1D case $\mathbf{H}_{t} = \sigma^{2}_{\epsilon}$ and $\mathbf{Q}_{t} = \sigma^{2}_{\eta}$. This is the univariate local level model.
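
For reference (this is just the standard filtering recursion for the local level model, restated here rather than anything new): writing $a_{t}$ and $P_{t}$ for the predicted state mean and variance, one pass of the filter computes

$$v_{t} = y_{t} - a_{t}, \qquad F_{t} = P_{t} + \sigma^{2}_{\epsilon}, \qquad K_{t} = P_{t}/F_{t},$$
$$a_{t+1} = a_{t} + K_{t}v_{t}, \qquad P_{t+1} = P_{t}(1 - K_{t}) + \sigma^{2}_{\eta},$$

so a single run of the filter for a given pair $(\sigma^{2}_{\epsilon}, \sigma^{2}_{\eta})$ produces every $v_{t}$ and $F_{t}$ that appears in the log-likelihood above.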

Best Answer

When you run the Kalman filter as you have, with given values of $\sigma_\epsilon^2$ and $\sigma^2_\eta$, you get a sequence of innovations $v_t$ and their covariances $\mathbf{F}_t$, hence you can calculate the value of $\log L(Y_n)$ using the formula you give.

In other words, you can regard the Kalman filter as a way to compute an implicit function of $\sigma_\epsilon^2$ and $\sigma^2_\eta$. The only thing you then need to do is package this computation into a function or subroutine and hand that function to an optimization routine, like optim in R. That function should accept $\sigma_\epsilon^2$ and $\sigma^2_\eta$ as inputs and return $\log L(Y_n)$ (or its negative, since most optimizers, optim included, minimize by default).
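
To make this concrete, here is a minimal sketch in R for the univariate local level model. The function name, the log-variance parameterisation, the diffuse-style initialisation with a large $P_1$, and the use of the built-in Nile data are my own illustrative choices, not something prescribed above:

```r
# Negative log-likelihood of the local level model, computed by running the
# Kalman filter with the variances supplied in `theta`.
negLogLik <- function(theta, y, a1 = 0, P1 = 1e7) {
  # theta holds log-variances so the optimizer searches an unconstrained scale
  sigma2_eps <- exp(theta[1])        # observation noise variance
  sigma2_eta <- exp(theta[2])        # state disturbance variance

  n  <- length(y)
  a  <- a1                           # predicted state mean a_t
  P  <- P1                           # predicted state variance P_t
  ll <- -n / 2 * log(2 * pi)         # the -(np/2) log(2*pi) term with p = 1

  for (t in seq_len(n)) {
    v  <- y[t] - a                   # innovation v_t
    Fv <- P + sigma2_eps             # innovation variance F_t
    K  <- P / Fv                     # Kalman gain K_t
    ll <- ll - 0.5 * (log(Fv) + v^2 / Fv)

    a <- a + K * v                   # a_{t+1}
    P <- P * (1 - K) + sigma2_eta    # P_{t+1}
  }
  -ll                                # optim() minimizes, so return the negative
}

# Hand the function to a general-purpose optimizer.
y   <- as.numeric(Nile)                              # annual Nile data as an example
fit <- optim(par = c(log(var(y)), log(var(y))),      # crude starting values
             fn = negLogLik, y = y)

exp(fit$par)   # maximum likelihood estimates of sigma^2_epsilon and sigma^2_eta
-fit$value     # the maximized log-likelihood
```

The log-variance trick is just one convenient way of keeping the optimizer away from negative variances; you could instead pass the variances directly and use a bounded method such as method = "L-BFGS-B" with a small positive lower bound.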

Some packages in R (e.g. dlm) do this for you (see for instance function dlmMLE).
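
For instance, a sketch along these lines (the build function, its name, and the starting values are illustrative):

```r
library(dlm)

# Local level model: dV is the observation noise variance, dW the state
# disturbance variance; theta holds their logs for unconstrained optimization.
buildLocalLevel <- function(theta) {
  dlmModPoly(order = 1, dV = exp(theta[1]), dW = exp(theta[2]))
}

fit <- dlmMLE(Nile, parm = c(0, 0), build = buildLocalLevel)  # wraps optim internally
exp(fit$par)   # maximum likelihood estimates of the two variances
```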
