[Math] Finding the smallest eigenvalues of a large, but structured, matrix

na.numerical-analysis, numerical-linear-algebra

I'm trying to find the eigenvector corresponding to the second smallest eigenvalue of a large $(4,000,000 \times 4,000,000)$ matrix $M$. $M$ is a Laplacian matrix with the following structure: $M = D - AA'$, where $D$ is a diagonal matrix and $A$ is a large sparse matrix. $A$ has dimensions $4,000,000 \times 10,000,000$ but only about $40,000,000$ non-zero entries, so I can perform matrix-vector multiplication rapidly: $Mv = Dv - A(A'v)$.
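For concreteness, an operator like this is easy to express in SciPy without ever forming $M$. The sketch below is hypothetical (tiny random stand-ins for $D$ and $A$, with $D$ chosen so that $M$ has Laplacian-style zero row sums), but the matvec is exactly $Dv - A(A'v)$:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator

# Tiny stand-ins for the real data (the true A is 4,000,000 x 10,000,000).
A = sp.random(400, 1000, density=0.01, format="csr", random_state=0)
# Taking D = diag(row sums of AA') makes M = D - AA' Laplacian-like,
# with the all-ones vector in its null space.
d = np.asarray((A @ A.T).sum(axis=1)).ravel()

def matvec(v):
    # M v = D v - A (A' v), never forming M or AA' explicitly
    return d * v - A @ (A.T @ v)

n = A.shape[0]
M = LinearOperator((n, n), matvec=matvec, dtype=float)
```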

Currently I'm using SciPy (which calls ARPACK) to find the smallest eigenvalues and corresponding eigenvectors of $M$. As far as I can tell, the implementation is a variant of the Lanczos method. Unfortunately, it fails to converge fairly often, and even when it does converge, it takes many iterations because I need fairly high accuracy. Any suggestions?
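For reference, the call being described is presumably something like the following; `which="SA"` asks ARPACK directly for the smallest algebraic eigenvalues, which is exactly the regime where Lanczos converges slowly:

```python
from scipy.sparse.linalg import eigsh

# Direct call: ask ARPACK for the two smallest eigenvalues of M
# (the LinearOperator sketched above). This is the slow/fragile mode.
vals, vecs = eigsh(M, k=2, which="SA", tol=1e-10)
lam2, v2 = vals[1], vecs[:, 1]  # second smallest eigenvalue, its eigenvector
```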

The Lanczos method is much more reliable at finding the largest eigenvalues and corresponding eigenvectors, as I understand it. So I was thinking I could transform the problem, perhaps by finding the largest eigenvalues of $M$'s left inverse instead. But even if I somehow manage to find a good approximation of $M$'s left inverse (most methods don't converge), its largest eigenvalue is infinite, because the smallest eigenvalue of $M$, a Laplacian matrix, is $0$. So it seems there would still be convergence issues: as I understand it, I can't solve for the second largest eigenvalue without also solving for the largest. Is there an easier way?
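To spell out where the inverse idea breaks down in symbols: the transformation rests on the spectral mapping

$$Mv_i = \lambda_i v_i \quad\Longrightarrow\quad M^{-1}v_i = \frac{1}{\lambda_i}\,v_i \qquad (\lambda_i \neq 0),$$

so the second smallest eigenvalue $\lambda_2$ of $M$ would become the second largest eigenvalue $1/\lambda_2$ of the inverse; but since $\lambda_1 = 0$, $M$ is singular and the would-be largest eigenvalue $1/\lambda_1$ is undefined.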

Best Answer

Well, if you add $cI$ to your matrix for some reasonable value of $c$, it becomes nonsingular (indeed positive definite, since a Laplacian is positive semidefinite). As for the inverse: since your matrix is sparse, you can carry out each step of inverse iteration by solving the shifted linear system with the conjugate gradient method, which will be very fast. (There are fancier Krylov-subspace methods, but you need not go there unless conjugate gradient fails.)
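One way to realize this suggestion in SciPy, sketched under the same assumptions as the operator above: `eigsh` has a shift-invert mode that accepts a user-supplied `OPinv`, and each inner solve can use conjugate gradient because $M + cI$ is symmetric positive definite. (The shift enters `eigsh` as $\sigma = -c$, since `OPinv` is expected to apply $(M - \sigma I)^{-1}$.)

```python
from scipy.sparse.linalg import LinearOperator, cg, eigsh

c = 1e-3  # any reasonable positive shift makes M + cI nonsingular

n = M.shape[0]
M_shifted = LinearOperator((n, n), matvec=lambda v: M.matvec(v) + c * v,
                           dtype=float)

def apply_inverse(v):
    # Solve (M + cI) x = v by conjugate gradient; valid since M + cI is SPD.
    # (Older SciPy versions spell the tolerance argument `tol`, not `rtol`.)
    x, info = cg(M_shifted, v, rtol=1e-10)
    if info != 0:
        raise RuntimeError("inner CG solve failed to converge")
    return x

OPinv = LinearOperator((n, n), matvec=apply_inverse, dtype=float)

# Shift-invert: the eigenvalues of M nearest sigma = -c (i.e. the smallest
# ones) become the largest eigenvalues of (M - sigma I)^{-1}, the regime
# where Lanczos converges quickly. eigsh back-transforms, so `vals` holds
# eigenvalues of M itself.
vals, vecs = eigsh(M, k=2, sigma=-c, which="LM", OPinv=OPinv)
lam2, v2 = vals[1], vecs[:, 1]
```

Note that the zero eigenvalue maps to the finite value $1/c$ under this transformation, so the singularity that worried you in the left-inverse formulation disappears.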
