[Math] Largest eigenvalue of a symmetric positive definite matrix with rank-one updates

linear-algebra, matrices

I have an $n \times n$ symmetric positive definite matrix $A$ which I will repeatedly update using two consecutive rank-one updates, i.e. a symmetric rank-two update of the form

$$A' = A + e_j u^T + u e_j^T,$$

where $\{e_i: 1 \leq i \leq n\}$ is the standard basis.

I also compute the updates to $A^{-1}$ using the Sherman–Morrison formula. Due to the nature of the updates, the matrix $A'$ is guaranteed to be non-singular and positive definite.
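For concreteness, the inverse update can be done with two applications of Sherman–Morrison, one per rank-one term. A minimal numpy sketch (function names are my own):

```python
import numpy as np

def sherman_morrison(Ainv, v, w):
    """Given Ainv = inv(A), return inv(A + v w^T) via Sherman-Morrison."""
    Av = Ainv @ v
    wA = w @ Ainv
    return Ainv - np.outer(Av, wA) / (1.0 + w @ Av)

def update_inverse(Ainv, j, u):
    """Inverse of A' = A + e_j u^T + u e_j^T, as two rank-one updates."""
    n = Ainv.shape[0]
    e_j = np.zeros(n)
    e_j[j] = 1.0
    Ainv = sherman_morrison(Ainv, e_j, u)  # accounts for e_j u^T
    Ainv = sherman_morrison(Ainv, u, e_j)  # accounts for u e_j^T
    return Ainv
```

Each call is $O(n^2)$; note the intermediate matrix $A + e_j u^T$ need not be symmetric, but Sherman–Morrison only requires it to be invertible.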

I would like to keep track of the largest and smallest eigenvalues of the matrix. Since I maintain the inverse, a method for computing just the largest (or just the smallest) eigenvalue would suffice.

I know I can calculate the eigendecomposition of $A$ and update it in $O(n^2)$, but I was wondering whether there is a more efficient method, seeing as I only care about one particular eigenvalue (and not at all about the eigenvectors).
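Since only the extreme eigenvalue is needed, one cheap option (not from the question, just an assumption worth stating) is power iteration: each step is a single $O(n^2)$ matvec, and applying it to $A^{-1}$ gives the reciprocal of the smallest eigenvalue of $A$. A sketch:

```python
import numpy as np

def power_iteration(matvec, v0, tol=1e-10, maxit=500):
    """Estimate the dominant eigenvalue of a symmetric positive definite
    operator by power iteration with a Rayleigh-quotient estimate."""
    v = v0 / np.linalg.norm(v0)
    lam = 0.0
    for _ in range(maxit):
        w = matvec(v)
        lam_new = v @ w                  # Rayleigh quotient v^T A v
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            break
        lam = lam_new
    return lam_new, v

# Largest eigenvalue of A:    power_iteration(lambda x: A @ x, v0)
# Smallest eigenvalue of A:   1 / power_iteration(lambda x: Ainv @ x, v0)[0]
```

After each rank-two update, warm-starting with the previous eigenvector typically means only a handful of iterations are needed, since $A'$ is a small perturbation of $A$. The caveat is that convergence slows when the two largest eigenvalues are close.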

A lower bound on the eigenvalue might also be helpful, but it would have to be tight; Gershgorin discs seem too loose.

Finally, if I do have to go via the eigendecomposition route, any pointers to what algorithms are used in practice for computational efficiency and numerical stability?

Best Answer

You could use randomized SVD to get the dominant eigenvectors of $A'$ and $(A')^{-1}$ by applying them to a handful of random test vectors. It's a probabilistic method, but there are rigorous bounds on the failure probability in terms of the number of test vectors, and you don't have to use very many test vectors before the probability of failure becomes absurdly small, such as $10^{-10}$.
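A minimal sketch of the idea for the symmetric case, assuming randomized subspace iteration with a Rayleigh–Ritz extraction (parameter choices `k` and `n_iter` here are illustrative, not prescribed by the answer):

```python
import numpy as np

def randomized_eig_max(A, k=8, n_iter=4, rng=None):
    """Estimate the largest eigenvalue of a symmetric PD matrix A from a
    block of k random test vectors (randomized subspace iteration)."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    Q = rng.standard_normal((n, k))          # random test vectors
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(A @ Q)           # orthonormalized power steps
    T = Q.T @ A @ Q                          # small k x k Rayleigh-Ritz problem
    return np.linalg.eigvalsh(T).max()
```

Applying the same routine to $(A')^{-1}$ (which you already maintain) estimates $1/\lambda_{\min}$. Each step costs a few $O(n^2 k)$ matvecs rather than a full eigendecomposition, and the Rayleigh–Ritz value never overshoots the true largest eigenvalue.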

You can just keep the same random test vectors from step to step, and then you don't have to reapply the original matrix $A$ in subsequent steps.