[Math] approximating diagonal of inverse sum of low rank and diagonal matrices

Tags: approximation, linear algebra, matrices, numerical linear algebra, numerical methods

I was wondering if there is any theorem or algorithm for approximating the diagonal elements of the inverse of the sum of a low-rank symmetric positive semi-definite matrix and a non-negative diagonal matrix.

Let me be more specific. Let's say I have $\mathbf{M} = \Lambda + diag(\mathbf{q})$, where $\Lambda$ is a low-rank symmetric positive semi-definite matrix and $diag(\mathbf{q})$ is a diagonal matrix whose diagonal elements are given by the non-negative vector $\mathbf{q}$. I would like to approximate $(\mathbf{M}^{-1})_{ii}$, the diagonal elements of the inverse of $\mathbf{M}$.

Q: Is there any way to approximate it efficiently? I can set up a very large linear system and, by solving it, recover the diagonal elements of $\mathbf{M}^{-1}$, but that is the trivial approach!

Isn't this a common problem for people who need to construct preconditioners for large linear systems? I could not find anything. My problem is not preconditioning, though; I am actually interested in the diagonal elements of $\mathbf{M}^{-1}$ themselves.

Thanks,

Best Answer

The general approach is to compute an expression for the inverse using the Woodbury identity and then read off the diagonal entries of that expression.

The answer can be obtained in three steps. First, apply the Sherman-Morrison-Woodbury update (http://en.wikipedia.org/wiki/Sherman%E2%80%93Morrison_formula, http://en.wikipedia.org/wiki/Woodbury_matrix_identity) to the matrix $M$. I will assume that $M = D + UU^T$, where $D = diag(\mathbf{q})$ and $U$ is an $n \times k$ factor of the low-rank part. Applying the identity gives

$ M^{-1} = D^{-1} - D^{-1}U (I + U^TD^{-1}U)^{-1}U^TD^{-1} $
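For concreteness, here is a minimal NumPy sketch checking this identity against a direct inverse; the sizes and random data are my own illustration (not from the post), and I take $\mathbf{q}$ strictly positive so that $D^{-1}$ exists.

```python
import numpy as np

# Illustrative sizes only: n = 6, rank k = 2.
rng = np.random.default_rng(0)
n, k = 6, 2
U = rng.standard_normal((n, k))          # low-rank factor, Lambda = U U^T
q = rng.uniform(0.5, 2.0, size=n)        # strictly positive so D^{-1} exists
D_inv = np.diag(1.0 / q)

M = np.diag(q) + U @ U.T

# Woodbury: M^{-1} = D^{-1} - D^{-1} U (I + U^T D^{-1} U)^{-1} U^T D^{-1}
C = np.eye(k) + U.T @ D_inv @ U          # small k-by-k matrix
M_inv = D_inv - D_inv @ U @ np.linalg.solve(C, U.T @ D_inv)

print(np.allclose(M_inv, np.linalg.inv(M)))   # True
```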

Second, compute the Cholesky decomposition of $ F = (I + U^TD^{-1}U)^{-1} = RR^T $, where $R$ is a lower triangular matrix. Then define $V = D^{-1}UR$, so that

$M^{-1} = D^{-1} - VV^{T}$

Finally, $diag(M^{-1})$ is obtained as the diagonal of the inverse of a diagonal matrix minus the diagonal of a low-rank matrix. Using MATLAB-like notation,

$[M^{-1}]_{ii} = [D^{-1}]_{ii} - V(i,:)V(i,:)^T$

The resulting computation costs ${\cal O}(k^2n + k^3)$, where $k$ is the rank of the low-rank part and $n$ is the size of the matrix.
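Putting the three steps together, here is a small self-contained NumPy sketch; the function name and the random test problem are my own illustration, and it again assumes $\mathbf{q} > 0$ elementwise so that $D^{-1}$ exists. The full $n \times n$ inverse is never formed, only the $k \times k$ inner matrix and the $n \times k$ factor $V$.

```python
import numpy as np

def diag_inv_low_rank_plus_diag(q, U):
    """Compute diag((diag(q) + U U^T)^{-1}) via the three steps above:
    Woodbury, Cholesky of the small k-by-k matrix, then row-wise
    squared norms of V = D^{-1} U R.  Cost: O(k^2 n + k^3).
    Assumes q > 0 elementwise so that D = diag(q) is invertible."""
    d_inv = 1.0 / q                                              # diagonal of D^{-1}
    F = np.linalg.inv(np.eye(U.shape[1]) + (U.T * d_inv) @ U)    # (I + U^T D^{-1} U)^{-1}
    R = np.linalg.cholesky(F)                                    # F = R R^T, R lower triangular
    V = (d_inv[:, None] * U) @ R                                 # V = D^{-1} U R, n-by-k
    return d_inv - np.sum(V * V, axis=1)                         # [D^{-1}]_{ii} - V(i,:) V(i,:)^T

# Check against an explicit inverse on a small random problem (illustrative sizes).
rng = np.random.default_rng(1)
n, k = 500, 5
U = rng.standard_normal((n, k))
q = rng.uniform(0.5, 2.0, size=n)
exact = np.diag(np.linalg.inv(np.diag(q) + U @ U.T))
print(np.allclose(diag_inv_low_rank_plus_diag(q, U), exact))     # True
```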

I have assumed here that all the inverses exist. However, this method might not be the most numerically stable one. For stability issues with the update, see

E. L. Yip, "A note on the stability of solving a rank-p modification of a linear system by the Sherman-Morrison-Woodbury formula," SIAM Journal on Scientific and Statistical Computing, 1986.
