Minimize the Rayleigh quotient to find the smallest eigenvalue during Lanczos iteration

eigenvalues-eigenvectors, numerical-linear-algebra, numerical-optimization, optimization

I am interested in finding the largest and smallest eigenvalues of a massive symmetric matrix in order to calculate its condition number. I read that the Lanczos iteration can be used for that purpose. After reading Wikipedia as well as the lecture notes from UTexas on Lanczos iteration, I am still unclear on how one solves the subproblem of computing $\lambda_{\min}=\min_{z \in \mathcal{K}} r(z)$, where $r$ is the Rayleigh quotient and $\mathcal{K}$ is the Krylov subspace.

Can someone please walk me through how that subproblem is solved? In particular, does one use gradient descent, or is there a more efficient way? How can the solution of that minimization subproblem at the $k^{\text{th}}$ Lanczos iteration be reused at the $(k+1)^{\text{th}}$ iteration? Thanks so much!

Best Answer

Suppose $Q$ is the orthonormal Krylov basis and $T$ is the corresponding tridiagonal matrix. Then $Q^TQ=I$ and $Q^TAQ = T$.

Any vector in the Krylov subspace can be written $Qc$, where $c$ gives the coefficients for a linear combination of the columns of $Q$. Thus, $$ \min_{x\in\mathcal{K}} \frac{x^TAx}{x^Tx} = \min_{c\in\mathbb{R}^k} \frac{(Qc)^TA(Qc)}{(Qc)^T(Qc)} = \min_{c\in\mathbb{R}^k} \frac{c^T(Q^TAQ)c}{c^T(Q^TQ)c} = \min_{c\in\mathbb{R}^k} \frac{c^TTc}{c^Tc}. $$ But the rightmost expression is simply the smallest eigenvalue of $T$. So no gradient descent is needed: one just computes the eigenvalues of the small tridiagonal matrix $T$ (the Ritz values), which is cheap since $T$ is $k\times k$ and tridiagonal. The same approach, with $\min$ replaced by $\max$, gives the largest eigenvalue.
