[Math] How to get eigenvectors from eigenvalues

eigenvalues-eigenvectors, linear-algebra

I'm not seeing where the eigenvectors are expressed in terms of eigenvalues in the recent paper "Eigenvectors from Eigenvalues" by the neutrino physicists and Terence Tao:

https://arxiv.org/pdf/1908.03795.pdf

I'm guessing that, given how fundamentally groundbreaking this paper is, some people on here have read it. Please let me know where in the paper the eigenvectors are given in terms of eigenvalues.

Edit: To be more specific, I saw the norm-squared part, but how do you narrow that down to the actual value of each element of each eigenvector? For real entries you can just apply the transformation to each of the at most $2^n$ sign combinations, where $n$ is the dimension of the vector (unless the space is infinite-dimensional, as can be the case in particle physics). But what about when the values whose norm squared is being taken are complex?
Also, if you can address infinite dimensions, that would be appreciated. That is, is there a way to get the elements without testing each combination?
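For concreteness, here is a minimal sketch (Python with NumPy, purely illustrative) of the brute-force sign check described above for the real case: given only the magnitudes of the entries of an eigenvector, test each of the $2^n$ sign patterns against $Av = \lambda v$. The specific $3 \times 3$ matrix, and using `numpy.linalg.eigh` to produce the magnitudes, are assumptions made just for the example.

```python
import numpy as np
from itertools import product

# Illustrative real symmetric matrix with distinct eigenvalues (an assumption, not from the paper)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
eigvals, eigvecs = np.linalg.eigh(A)

i = 0
lam = eigvals[i]
mags = np.abs(eigvecs[:, i])          # the magnitudes a norm-squared identity would give

# Brute force over all 2^n sign patterns and keep those satisfying A v = lam v
for signs in product([1.0, -1.0], repeat=len(mags)):
    v = np.array(signs) * mags
    if np.allclose(A @ v, lam * v):
        print(signs, v)               # the true sign pattern and its overall negation both pass
```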

Best Answer

By formula $(7)$ in that paper, they prove that $$ \operatorname{adj}(\lambda_i I_n - A) = \prod_{k\neq i} (\lambda_i - \lambda_k) \operatorname{proj}_{v_i}, $$ where $\operatorname{proj}_{v_i}$ is the orthogonal projection from $\mathbb{C}^n$ onto the complex line $\mathbb{C}v_i$. Assuming $\lambda_i$ is different from $\lambda_k$ for every $k \neq i$, which is the case for instance when all the eigenvalues are distinct, this implies that the image of the left-hand side is $\mathbb{C}v_i$. So taking any non-zero column of the left-hand side and normalizing it gives you the eigenvector $v_i$, the latter being determined up to a phase factor.
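If it helps, here is a minimal numerical sketch of that recipe, assuming a small real symmetric matrix with distinct eigenvalues; the matrix, the cofactor-based adjugate helper, and the comparison against `numpy.linalg.eigh` are all illustrative assumptions, not anything prescribed by the paper.

```python
import numpy as np

def adjugate(M):
    """Adjugate via cofactors (fine for small n; works even when M is singular)."""
    n = M.shape[0]
    C = np.zeros_like(M)
    for r in range(n):
        for c in range(n):
            minor = np.delete(np.delete(M, r, axis=0), c, axis=1)
            C[r, c] = (-1) ** (r + c) * np.linalg.det(minor)
    return C.T  # the adjugate is the transpose of the cofactor matrix

# Illustrative real symmetric matrix with distinct eigenvalues
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
n = A.shape[0]
eigvals, eigvecs = np.linalg.eigh(A)      # reference eigenpairs, used only for checking

i = 0
adjM = adjugate(eigvals[i] * np.eye(n) - A)

# Every nonzero column of adj(lambda_i I - A) is a multiple of v_i; pick the largest one.
col = adjM[:, np.argmax(np.linalg.norm(adjM, axis=0))]
v = col / np.linalg.norm(col)

# v agrees with numpy's eigenvector up to a phase (here, a sign)
print(np.allclose(abs(np.dot(v, eigvecs[:, i])), 1.0))   # True
```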

I must admit that, while this does recover the eigenvector $v_i$ (up to a phase factor), it involves not only knowing the eigenvalue $\lambda_i$, but also computing the adjugate on the left-hand side.

This is one way to answer your question. Another way is that one does recover the modulus squared of any element of each eigenvector by knowing not only the eigenvalues of $A$, but also the eigenvalues of each matrix $A_i$, for $1 \leq i \leq n$, where $A_i$ is the $n-1$ by $n-1$ matrix obtained from $A$ by deleting the $i$-th row and $i$-th column from it. Lemma $2$ in that paper does just that, and is (I suppose) the reason for the title of that article.
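To make Lemma $2$ concrete, here is a minimal numerical check of the identity it states, $$ |v_{i,j}|^2 \prod_{k \neq i} \big(\lambda_i(A) - \lambda_k(A)\big) = \prod_{k=1}^{n-1} \big(\lambda_i(A) - \lambda_k(A_j)\big), $$ with $A_j$ the minor obtained by deleting the $j$-th row and column. The specific $3 \times 3$ matrix is an illustrative assumption.

```python
import numpy as np

# Illustrative real symmetric matrix (an assumption; any Hermitian A works)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
n = A.shape[0]
eigvals, eigvecs = np.linalg.eigh(A)

i, j = 0, 1   # i-th eigenvector, j-th component
lhs = abs(eigvecs[j, i]) ** 2 * np.prod([eigvals[i] - eigvals[k] for k in range(n) if k != i])

A_j = np.delete(np.delete(A, j, axis=0), j, axis=1)   # delete the j-th row and column
mu = np.linalg.eigvalsh(A_j)                          # eigenvalues of the minor
rhs = np.prod(eigvals[i] - mu)

print(np.isclose(lhs, rhs))   # True: the two products agree, as Lemma 2 asserts
```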
