Relationship between the eigenvectors and eigenvalues of a non-symmetric projection matrix $D$ and the matrix $DH$ where $H$ is arbitrary.

eigenvalues-eigenvectors, linear-algebra, matrices, projection-matrices

I have an engineering background and have recently become very interested in extending my knowledge of eigenvectors and linear algebra. I have found myself stuck on the following problem.

Suppose I have a non-symmetric $n \times n$ projection matrix $C = A(B^TA)^{-1}B^T$, where I take $A$ and $B$ to be $n \times 1$ column vectors.
Then the matrix $D = I - C$ is also a projection, with eigenvalues $\{0, 1, \ldots, 1\}$ (the eigenvalue $1$ has multiplicity $n - 1$). I have become interested in the relationship between the eigenvectors of $D$ and those of the matrix $DH$, where $H$ is an arbitrary $n \times n$ matrix.

I can see that $B$ is in the left nullspace of both $D$ and $DH$, so $DH$ has eigenvalues $\{0, \lambda_1, \ldots, \lambda_{n-1}\}$, where the $\lambda_i$ may also be zero.

If $DH$ has a nonzero eigenvalue $\lambda$ and I premultiply $DHv = \lambda v$ by $D$, then using $D^2 = D$ I can show that
$$\begin{align}
DDHv &= D\lambda v \\
DHv &= \lambda Dv \\
\frac{DHv}{\lambda} &= Dv \\
\frac{\lambda v}{\lambda} &= Dv \\
v &= Dv
\end{align}$$
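This identity is easy to spot-check numerically. Below is a small NumPy sketch (the vectors $A$, $B$ and the matrix $H$ are just random example data, not anything from the problem itself): every eigenvector of $DH$ with nonzero eigenvalue should satisfy $Dv = v$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Random example data standing in for A, B, and H.
A = rng.standard_normal((n, 1))
B = rng.standard_normal((n, 1))
H = rng.standard_normal((n, n))

C = A @ np.linalg.inv(B.T @ A) @ B.T  # oblique rank-one projection
D = np.eye(n) - C                     # complementary projection, D^2 = D

# Each eigenvector v of DH with nonzero eigenvalue satisfies Dv = v.
vals, vecs = np.linalg.eig(D @ H)
for lam, v in zip(vals, vecs.T):      # columns of vecs are eigenvectors
    if abs(lam) > 1e-8:
        assert np.allclose(D @ v, v)
```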

So if I were to consider the matrix $DH + D$ when every $\lambda_i \neq 0$, it would have eigenvalues $\{0, \lambda_1 + 1, \ldots, \lambda_{n-1} + 1\}$. However, I believe (I have tested examples in MATLAB) that even if some $\lambda_i = 0$ (where $\lambda_i$ is not the always-present $0$ eigenvalue associated with the left nullspace), the associated $v_i$ is still an eigenvector of $D$ with eigenvalue $1$, so that the eigenvalues of $DH + D$ are $\{0, \lambda_1 + 1, \ldots, \lambda_{n-1} + 1\}$ in general. Any help with a better way to prove this would be greatly appreciated; I would really like to fully understand this before I move on!
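The eigenvalue claim can also be checked in NumPy rather than MATLAB. The sketch below uses random example data, for which every $\lambda_i$ is generically nonzero; it verifies that each value in $\{0\} \cup \{\lambda_i + 1\}$ appears among the eigenvalues of $DH + D$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, 1))
B = rng.standard_normal((n, 1))
H = rng.standard_normal((n, n))

C = A @ np.linalg.inv(B.T @ A) @ B.T
D = np.eye(n) - C

lam = np.linalg.eigvals(D @ H)
nonzero = lam[np.abs(lam) > 1e-8]           # drop the structural zero
expected = np.concatenate(([0.0], nonzero + 1))

actual = np.linalg.eigvals(D @ H + D)
# Each expected eigenvalue should match one of DH + D up to roundoff.
for e in expected:
    assert np.min(np.abs(actual - e)) < 1e-6
```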

Best Answer

Before answering the question (see the EDIT section), here are two things worth noting, given that you are interested in solidifying your understanding of projection matrices. First, the eigenvalues of any projection matrix $P$ are either $0$ or $1$. This fact can be proven using minimal polynomials (by observing that $P^2 - P = 0$). It also has an intuitive explanation: the only eigenvectors of a projection onto a linear subspace $V$ are the vectors in $V$ (which are mapped to themselves) and the vectors in the kernel of the projection (which are annihilated). No other vector can be an eigenvector, since the result of applying $P$ is always a vector in $V$.

The second thing to note is that as $A$ and $B$ are simply column vectors, your projection matrix $C$ is rank one (and can be understood as a projection onto a one-dimensional linear subspace). As such, $C$'s eigenvalue of $0$ has multiplicity $n - 1$, while the eigenvalue $1$ has multiplicity $1$. It then follows that $I - C$ has the multiplicities of its eigenvalues flipped: $n - 1$ eigenvectors with eigenvalue one, and one with eigenvalue zero.
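Both observations are easy to confirm numerically; here is a NumPy sketch with random example vectors $A$ and $B$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, 1))
B = rng.standard_normal((n, 1))

C = A @ np.linalg.inv(B.T @ A) @ B.T
D = np.eye(n) - C

assert np.allclose(C @ C, C)          # C is idempotent (a projection)
assert np.linalg.matrix_rank(C) == 1  # C is rank one

# Eigenvalue multiplicities: C has {0 x (n-1), 1}; D = I - C flips them.
assert np.allclose(np.sort(np.linalg.eigvals(C).real), [0] * (n - 1) + [1])
assert np.allclose(np.sort(np.linalg.eigvals(D).real), [0] + [1] * (n - 1))
```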

EDIT: I think I now understand your question. The following statement is true, and I believe it should answer your question:

Suppose the eigenvalues of $DH$ are $\{0, 0, \cdots, \lambda_{k + 1}, \cdots, \lambda_n\}$, where the $\lambda_i$ are all nonzero and the zero eigenvalue has multiplicity $k$. Then the eigenvalues of $DH + D$ are $\{0, 1, 1, \cdots, \lambda_{k + 1} + 1, \cdots, \lambda_n + 1\}$, where $1$ appears as an eigenvalue with multiplicity $k - 1$. Furthermore, there exist $k - 1$ linearly independent vectors $\{v_1, \cdots, v_{k-1}\}$ that are simultaneously $0$-eigenvectors of $DH$, $1$-eigenvectors of $D$, and consequently $1$-eigenvectors of $DH + D$.

Indeed this statement is true; it can be proven by dimension counting: for any two linear subspaces $V$ and $W$, we have $$\dim(V) + \dim(W) = \dim(V + W) + \dim(V \cap W)$$ If we take $V$ to be the column space of $D$ (which is known to have dimension $n - 1$, see above) and $W$ to be the null space of $DH$ (which by assumption has dimension $k$), then it follows that $$\dim(V \cap W) = \dim(V) + \dim(W) - \dim(V + W) \geq k - 1$$ since $V + W$ has dimension at most $n$, while $\dim(V) = n - 1$. Hence, there exist at least $k - 1$ linearly independent vectors in $V \cap W$. But the vectors $v \in V \cap W$ are exactly those satisfying $Dv = v$ (because any vector in the column space of $D$ is a $1$-eigenvector of $D$, why?) and $DHv = 0$. As you've already observed, $DH + D$ must have at least one zero eigenvalue, and its remaining eigenvalues are $\{\lambda_{k+1} + 1, \cdots, \lambda_n + 1\}$ (where $\lambda_i + 1 \neq 1$ for every $\lambda_i$), so this actually forces the number of independent $1$-eigenvectors to be exactly $k - 1$, and the statement is proven. $\square$
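The dimension count above can be replayed numerically. In the sketch below, $H$ is deliberately built with a $k$-dimensional null space (a hypothetical construction chosen just so that $DH$ generically has a zero eigenvalue of multiplicity $k$); the intersection dimension and the multiplicity of the eigenvalue $1$ of $DH + D$ then both come out to $k - 1$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 6, 3

A = rng.standard_normal((n, 1))
B = rng.standard_normal((n, 1))
C = A @ np.linalg.inv(B.T @ A) @ B.T
D = np.eye(n) - C

# Give H a k-dimensional null space so that DH generically has a zero
# eigenvalue of multiplicity k (hypothetical example construction).
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
H = rng.standard_normal((n, n)) @ (np.eye(n) - Q @ Q.T)

def basis_null(M, tol=1e-10):
    """Orthonormal basis of the null space of M, via the SVD."""
    _, s, Vt = np.linalg.svd(M)
    return Vt[np.sum(s > tol):].T

def basis_col(M, tol=1e-10):
    """Orthonormal basis of the column space of M, via the SVD."""
    U, s, _ = np.linalg.svd(M)
    return U[:, :np.sum(s > tol)]

V = basis_col(D)        # dimension n - 1
W = basis_null(D @ H)   # dimension k
assert V.shape[1] == n - 1 and W.shape[1] == k

# dim(V + W) = rank([V | W]); the dimension formula then gives dim(V ∩ W).
dim_cap = V.shape[1] + W.shape[1] - np.linalg.matrix_rank(np.hstack([V, W]))
assert dim_cap == k - 1

# Consequently 1 is an eigenvalue of DH + D with multiplicity k - 1.
mu = np.linalg.eigvals(D @ H + D)
assert np.sum(np.abs(mu - 1) < 1e-6) == k - 1
```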