How do you derive the eigenvalue and eigenvector of a linear combination of outer products

eigenvalues-eigenvectors, linear-algebra

I'm reading a paper that claims it is easy to verify that the eigenvector-eigenvalue pairs of
\begin{align*}
p_1\beta_1\beta_1^\top+p_2\beta_2\beta_2^\top,
\end{align*}

where $p_1$ and $p_2$ are two unequal, non-zero constants that sum to one, are
\begin{align*}
(v_1,\lambda_1)&=\left(\frac{\sqrt{\frac{1-\Delta_2}{1+\Delta_2}}\beta_1-\beta_2}{\sqrt{\frac{1+\Delta_2}{2}}+\sqrt{\frac{(1-\Delta_2)(1-\Delta_1)}{2(1+\Delta_2)}}},\frac{1+\sqrt{1-4(1-\langle\beta_1,\beta_2\rangle^2)p_1p_2}}{2}\right)\\
(v_2,\lambda_2)&=\left(\frac{\beta_2+\sqrt{\frac{1+\Delta_2}{1-\Delta_1}}\beta_1}{\sqrt{\frac{1-\Delta_2}{2}}+\frac{1+\Delta_2}{2}\sqrt{\frac{2}{1-\Delta_1}}},\frac{1-\sqrt{1-4(1-\langle\beta_1,\beta_2\rangle^2)p_1p_2}}{2}\right),
\end{align*}

where
\begin{align*}
\Delta_1&=\frac{(\lambda_1-\lambda_2)^2+p_1^2-p_2^2}{2(\lambda_2-\lambda_1)p_1} \\
\Delta_2&=\frac{(\lambda_2-\lambda_1)^2+p_2^2-p_1^2}{2(\lambda_1-\lambda_2)p_2}.
\end{align*}

I've been trying to verify the above statement but am unable to do so. Can anyone guide me as to how I should go about doing this? Thanks.

My attempt: I tried to show that
\begin{align*}
\lambda_1v_1&=(p_1\beta_1\beta_1^\top+p_2\beta_2\beta_2^\top)v_1 \\
\lambda_2v_2&=(p_1\beta_1\beta_1^\top+p_2\beta_2\beta_2^\top)v_2,
\end{align*}

by substituting the expressions for $v_1$ and $v_2$ into the right-hand sides and hoping to simplify down to the left-hand sides. However, the expressions only became more complex, and I see no way to simplify them.

Best Answer

You can assume without loss of generality that $\beta_1$ and $\beta_2$ have unit norm (by absorbing their squared norms into the coefficients $p_1$ and $p_2$).
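As a quick numeric sanity check of this normalization step (a sketch with made-up vectors and weights; the variable names are mine, not from the answer), absorbing the squared norms into the weights leaves the matrix unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
b1, b2 = rng.normal(size=3), rng.normal(size=3)  # arbitrary non-unit vectors
p1, p2 = 0.3, 0.7

# Original matrix.
M = p1 * np.outer(b1, b1) + p2 * np.outer(b2, b2)

# Absorb the squared norms into the weights, leaving unit vectors.
q1, q2 = p1 * (b1 @ b1), p2 * (b2 @ b2)
u1, u2 = b1 / np.linalg.norm(b1), b2 / np.linalg.norm(b2)
M_unit = q1 * np.outer(u1, u1) + q2 * np.outer(u2, u2)

assert np.allclose(M, M_unit)
```

Note that the rescaled weights `q1`, `q2` no longer sum to one in general; the point is only that the eigenproblem is untouched by the rescaling.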

Look for an eigenvalue-eigenvector relationship of the form:

$$(p_1\beta_1\beta_1^\top+p_2\beta_2\beta_2^\top)(a_1\beta_1+a_2\beta_2)=\lambda_1(a_1\beta_1+a_2\beta_2)$$

and denoting by $s$ the dot product of $\beta_1$ and $\beta_2$, we get, by expanding and decomposing onto the basis $\{\beta_1,\beta_2\}$:

$$\begin{cases}p_1a_1+p_1sa_2=\lambda_1a_1\\ p_2sa_1+p_2a_2=\lambda_1a_2\end{cases}$$

which means that $\lambda_1$ can be found as (one of the two) eigenvalue(s) of the following matrix:

$$\begin{pmatrix}p_1&p_1s\\ p_2s&p_2\end{pmatrix}$$

and that $(a_1,a_2)$ will be an eigenvector associated with this eigenvalue.

(The second eigenvalue and eigenvector are obtained in the same way.)
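This reduction is easy to check numerically (again a sketch with arbitrary data, not code from the paper): the eigenvalues of the $2\times2$ matrix coincide with the two non-zero eigenvalues of the full matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
b1, b2 = rng.normal(size=4), rng.normal(size=4)
b1, b2 = b1 / np.linalg.norm(b1), b2 / np.linalg.norm(b2)  # unit norm WLOG
p1, p2 = 0.4, 0.6
s = b1 @ b2  # dot product, as in the answer

# Full n x n matrix and reduced 2 x 2 matrix.
M = p1 * np.outer(b1, b1) + p2 * np.outer(b2, b2)
A = np.array([[p1, p1 * s],
              [p2 * s, p2]])

# M is symmetric: eigvalsh returns real eigenvalues in ascending order.
# The rank of M is 2, so its two non-zero eigenvalues are the largest two.
ev_full = np.linalg.eigvalsh(M)[-2:]
ev_red = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(ev_full, ev_red)
```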

It remains to do the last calculations, whose results are very simple:

$$\begin{cases}\lambda_{1}=\dfrac12\left(p_1+p_2 + \sqrt{\Delta}\right)\\ \lambda_{2}=\dfrac12\left(p_1+p_2 - \sqrt{\Delta}\right)\end{cases} \ \text{with} \ \Delta:=(p_1-p_2)^2 + 4p_1p_2s^2$$

(labels chosen so that $\lambda_1\ge\lambda_2$, matching the indexing in the question), with respective eigenvectors:

$$V_1=\binom{p_1 - p_2 +\sqrt{\Delta}}{2sp_2}$$

$$V_2=\binom{p_1 - p_2 -\sqrt{\Delta}}{2sp_2}$$
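A quick way to gain confidence in these closed forms (a sketch with arbitrary values; the `plus`/`minus` naming is mine, to avoid committing to a labeling) is to check $AV=\lambda V$ directly:

```python
import numpy as np

p1, p2, s = 0.4, 0.6, 0.25
A = np.array([[p1, p1 * s],
              [p2 * s, p2]])

# Closed forms from the answer.
delta = (p1 - p2) ** 2 + 4 * p1 * p2 * s ** 2
lam_plus = 0.5 * (p1 + p2 + np.sqrt(delta))
lam_minus = 0.5 * (p1 + p2 - np.sqrt(delta))
v_plus = np.array([p1 - p2 + np.sqrt(delta), 2 * s * p2])
v_minus = np.array([p1 - p2 - np.sqrt(delta), 2 * s * p2])

# Each vector satisfies A v = lam v for the matching sign choice.
assert np.allclose(A @ v_plus, lam_plus * v_plus)
assert np.allclose(A @ v_minus, lam_minus * v_minus)
```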

Why all the complicated formulas in the article? A plausible reason is that they are raw results taken from a computer algebra system.