[Math] Inverting a covariance matrix in a numerically stable way

linear-algebra, na.numerical-analysis

Given an $n\times n$ covariance matrix $C$ where $n$ is around $250$, I need to calculate $x\cdot C^{-1}\cdot x^t$ for many vectors $x \in \mathbb{R}^n$ (the problem comes from approximating noise by an $n$-dimensional Gaussian distribution).

What is the best way, in the sense of numerical stability, to solve these systems?

One option is the Cholesky decomposition, because it is numerically quite stable and fast. Is the higher computational cost of the singular value decomposition worth it? Or is there another, better possibility?

Best Answer

Cholesky sounds like a good option for the following reason: once you've done the Cholesky factorization to get $C=LL^T$, where $L$ is triangular, $x^TC^{-1}x = \|L^{-1}x\|^2$, and $L^{-1}x$ is easy to compute because it is the solution of a triangular system. The downsides are that even if $C$ is sparse, $L$ is probably dense, and that you do the same amount of work for every $C$ and $x$, while other methods may let you exploit special structure and get good approximations to the solution with less work.

For those reasons, you might also consider Krylov subspace methods for computing $C^{-1}x$, like conjugate gradients (since $C$ is symmetric and positive definite), especially if $C$ is sparse. $n=250$ isn't terribly large, but it is still large enough that Krylov subspace methods could pay off if $C$ is sufficiently sparse. (There might actually be special methods for computing $x^TC^{-1}x$ itself, as opposed to $C^{-1}x$, but I don't know of any.)
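To make the Cholesky route concrete, here is a minimal sketch in Python (assuming NumPy/SciPy; the function name `quad_form` and the random test matrix are purely illustrative): factor $C$ once, then each $x^TC^{-1}x$ costs only one triangular solve.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def quad_form(C, xs):
    """Return x^T C^{-1} x for each row x of xs, using C = L L^T."""
    L = cholesky(C, lower=True)                  # one O(n^3) factorization of C
    ys = solve_triangular(L, xs.T, lower=True)   # solve L y = x for each x, O(n^2) per vector
    return np.sum(ys * ys, axis=0)               # x^T C^{-1} x = ||L^{-1} x||^2

# Illustrative usage with a random symmetric positive definite matrix:
rng = np.random.default_rng(0)
A = rng.standard_normal((250, 250))
C = A @ A.T + 250 * np.eye(250)                  # stand-in for the covariance matrix
xs = rng.standard_normal((1000, 250))            # many vectors x, one per row
values = quad_form(C, xs)
```

The point of the design is that the expensive $O(n^3)$ factorization is amortized over all the vectors, and $C^{-1}$ is never formed explicitly, which is both cheaper and better conditioned than inverting.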

Edit: Since you care about stability, let me address that: Cholesky is pretty stable, as you note. Conjugate gradients is notoriously *un*stable, but it tends to work anyway, apparently.
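If $C$ were sparse enough for conjugate gradients to be attractive, the computation might look like the following sketch (assuming SciPy's `scipy.sparse.linalg.cg`; the function name `quad_form_cg` is illustrative), which approximates $C^{-1}x$ iteratively and never factors $C$:

```python
import numpy as np
from scipy.sparse.linalg import cg

def quad_form_cg(C, x):
    """Approximate x^T C^{-1} x by iteratively solving C y = x with CG."""
    y, info = cg(C, x)               # requires C symmetric positive definite
    if info != 0:
        raise RuntimeError(f"CG did not converge (info={info})")
    return float(x @ y)              # x^T C^{-1} x is approximately x^T y
```

With a good preconditioner (the `M` argument to `cg`), the number of iterations per vector can drop substantially, which is where the savings over a dense factorization would come from.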