How to show that 0 is not an eigenvalue of the Hamiltonian matrix

block-matrices, control-theory, eigenvalues-eigenvectors, linear-algebra, matrix-calculus

I have the following Hamiltonian matrix

$$
\begin{align}
H = \begin{pmatrix}A&-BR^{-1}B^{\intercal}\\-C^{\intercal}C&-A^{\intercal}\end{pmatrix}
\end{align}
$$

that admits eigenvalues in pairs, i.e. if $\lambda$ is an eigenvalue, then so is $-\lambda$. Here $H$ is a block matrix of dimension $H\in\mathbb{R}^{2n\times2n}$ (each block is in $\mathbb{R}^{n\times n}$). In this case $R$ is positive definite, and the matrices

$$
\begin{align}
\Gamma = \begin{bmatrix}B&AB&A^{2}B&A^{3}B&\dots&A^{n-1}B\end{bmatrix}\quad \text{and}\quad \Omega =\begin{bmatrix}C\\CA\\CA^{2}\\CA^{3}\\\vdots\\CA^{n-1}\end{bmatrix}
\end{align}
$$

are of full rank. (In control theory, or systems theory, we say that $(A,B)$ is reachable and $(C,A)$ is observable when $\Gamma$ and $\Omega$ have full rank, respectively; just a side note.) Since my knowledge of matrix algebra and linear algebra is weak, I have yet to come up with a fruitful approach to this problem. I have tried assuming, for contradiction, that $0$ is an eigenvalue, which implies that

$$
\begin{align}
Hx = 0
\end{align}
$$

where $x$ is the corresponding eigenvector, but that is about as far as it takes me. Can I somehow use the fact that the eigenvalues come in pairs? Is it the case that $H^{\intercal}H$ shares eigenvalues with $H$, or are its eigenvalues rather the squared singular values of $H$?
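For concreteness, here is a minimal NumPy sketch (the dimensions and the randomly generated system below are only an illustration; a random system is reachable and observable with probability one) that checks the $\pm\lambda$ pairing and compares the eigenvalues of $H^{\intercal}H$ with the squared singular values of $H$:

```python
import numpy as np

# A randomly generated system; a random (A, B, C) is reachable and observable
# with probability one. The dimensions below are arbitrary illustrations.
rng = np.random.default_rng(0)
n, m, p = 3, 2, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
R = np.eye(m)  # positive definite

# The reachability matrix Gamma and observability matrix Omega have full rank n.
Gamma = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
Omega = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print(np.linalg.matrix_rank(Gamma) == n, np.linalg.matrix_rank(Omega) == n)

H = np.block([[A, -B @ np.linalg.inv(R) @ B.T],
              [-C.T @ C, -A.T]])
eigH = np.linalg.eigvals(H)

# The spectrum is symmetric: for every eigenvalue lambda, -lambda also appears.
print(all(np.min(np.abs(eigH + lam)) < 1e-8 for lam in eigH))

# H^T H does not share eigenvalues with H in general; its eigenvalues are the
# squared singular values of H.
sv = np.linalg.svd(H, compute_uv=False)
print(np.allclose(np.sort(np.linalg.eigvalsh(H.T @ H)), np.sort(sv**2)))
```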

Edit:

Proof that the eigenvalues come in pairs:

$$
\begin{align*}
\lambda\begin{bmatrix}v\\w\end{bmatrix} &= \begin{bmatrix}A&-BR^{-1}B^{\intercal}\\-C^{\intercal}C&-A^{\intercal}\end{bmatrix}\begin{bmatrix}v\\w\end{bmatrix} = \begin{bmatrix}Av-BR^{-1}B^{\intercal}w\\-C^{\intercal}Cv-A^{\intercal}w\end{bmatrix}
\\ & \implies \left\{\begin{matrix}Av-BR^{-1}B^{\intercal}w=\lambda v\\-C^{\intercal}Cv-A^{\intercal}w=\lambda w \end{matrix}\right.
\end{align*}
$$

We investigate what happens when we right-multiply $H^{\intercal}$ by the vector $\begin{bmatrix}w&-v\end{bmatrix}^{\intercal}$:

$$
\begin{align*}
H^{\intercal}\begin{bmatrix}w\\-v\end{bmatrix}&=
\begin{bmatrix}A^{\intercal}&-C^{\intercal}C\\-BR^{-1}B^{\intercal}&-A\end{bmatrix}\begin{bmatrix}w\\-v\end{bmatrix} = \begin{bmatrix}A^{\intercal}w + C^{\intercal}Cv\\-BR^{-1}B^{\intercal}w +Av\end{bmatrix}
\\ & = \begin{bmatrix}-\lambda w\\\lambda v\end{bmatrix}= -\lambda \begin{bmatrix} w\\- v\end{bmatrix}
\end{align*}
$$

Since $H$ and $H^{\intercal}$ share eigenvalues (they have the same characteristic polynomial), and $\begin{bmatrix}w\\-v\end{bmatrix}\neq 0$ whenever $\begin{bmatrix}v\\w\end{bmatrix}\neq 0$, this shows that $-\lambda$ is also an eigenvalue of $H$, which is what we wanted.
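A quick numerical check of this identity, again with a randomly generated system of the same block structure (only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))
R = np.eye(2)

H = np.block([[A, -B @ np.linalg.inv(R) @ B.T],
              [-C.T @ C, -A.T]])

# Take any eigenpair (lambda, [v; w]) of H ...
lams, V = np.linalg.eig(H)
lam, x = lams[0], V[:, 0]
v, w = x[:n], x[n:]

# ... then [w; -v] is an eigenvector of H^T with eigenvalue -lambda.
y = np.concatenate([w, -v])
print(np.allclose(H.T @ y, -lam * y))
```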

Edit:

The Hamiltonian matrix is associated with the algebraic Riccati equation:

$$
\begin{align*}
A^{\intercal}P + PA - PBR^{-1}B^{\intercal}P + C^{\intercal}C = 0.
\end{align*}
$$

Here $P\in\mathbb{R}^{n\times n}$ is a symmetric positive definite solution that can be shown to exist and be unique when $\Gamma$ and $\Omega$ are of full rank. Note that $C^{\intercal}C \geq 0$ (symmetric positive semidefinite).
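As a sketch, SciPy's `solve_continuous_are` can compute such a $P$ for a randomly generated system (generically reachable and observable) and confirm that it is symmetric, positive definite, and satisfies the equation; the dimensions below are arbitrary:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(2)
n, m, p = 3, 2, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
R = np.eye(m)

# Solve A^T P + P A - P B R^{-1} B^T P + C^T C = 0 (scipy's CARE with Q = C^T C).
P = solve_continuous_are(A, B, C.T @ C, R)

residual = A.T @ P + P @ A - P @ B @ np.linalg.inv(R) @ B.T @ P + C.T @ C
print(np.allclose(residual, 0))              # P solves the CARE
print(np.allclose(P, P.T))                   # P is symmetric
print(np.all(np.linalg.eigvalsh(P) > 0))     # P is positive definite
```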

Best Answer

Suppose that $P$ is positive definite and solves the CARE. It is known that the feedback control law $u = -R^{-1}B^TPx$ minimizes the cost function $$ \int_0^\infty (x^TC^TCx + u^TRu)\,dt, $$ and that this minimal cost is finite. Define $K = A - BR^{-1}B^TP$. The closed-loop system is then governed by $$ \frac {dx}{dt} = Kx. $$ Because the cost integral converges for every initial state $x(0)$, and $(C,A)$ is observable, it must be the case that $\lim_{t \to \infty} x(t) = 0$ for any initial state. This can only happen if $K$ is stable, that is, if all eigenvalues of $K$ have negative real part.
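A small numerical illustration of this stability claim, assuming a randomly generated system and using SciPy's CARE solver and matrix exponential:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_are

rng = np.random.default_rng(3)
n, m, p = 3, 2, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
R = np.eye(m)

P = solve_continuous_are(A, B, C.T @ C, R)
K = A - B @ np.linalg.inv(R) @ B.T @ P

# The closed-loop eigenvalues lie in the open left half-plane ...
print(np.all(np.linalg.eigvals(K).real < 0))

# ... so every trajectory x(t) = exp(Kt) x(0) decays to zero.
x0 = rng.standard_normal(n)
print([float(np.linalg.norm(expm(K * t) @ x0)) for t in (0.0, 5.0, 20.0)])
```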

On the other hand, we find that $$ PK = -PBR^{-1}B^TP + PA = -C^TC - A^TP \implies\\ H \pmatrix{I\\P} = \pmatrix{A & -BR^{-1}B^T\\-C^TC & -A^T} \pmatrix{I \\ P} = \pmatrix{K\\PK} = \pmatrix{I\\P} K. $$ Let $D$ denote the Jordan form of $K$, and let $X$ be such that $K = XDX^{-1}$. We have $$ H \pmatrix{I\\P} = \pmatrix{I\\P} XDX^{-1} \implies H \pmatrix{X\\PX} = \pmatrix{X\\PX} D. $$ In other words, each eigenvalue of $K$ is also an eigenvalue of $H$, and the associated (generalized) eigenvectors are the columns of $\pmatrix{X\\PX}$. (This matrix has full column rank because its top block $X$ is invertible.)
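The invariant-subspace identity and the resulting spectrum inclusion can likewise be checked numerically; the random system below is only an illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(4)
n, m, p = 3, 2, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
R = np.eye(m)

P = solve_continuous_are(A, B, C.T @ C, R)
K = A - B @ np.linalg.inv(R) @ B.T @ P
H = np.block([[A, -B @ np.linalg.inv(R) @ B.T],
              [-C.T @ C, -A.T]])

# The invariant-subspace identity H [I; P] = [I; P] K.
IP = np.vstack([np.eye(n), P])
print(np.allclose(H @ IP, IP @ K))

# Consequently every eigenvalue of K appears among the eigenvalues of H.
eigH = np.linalg.eigvals(H)
print(all(np.min(np.abs(eigH - mu)) < 1e-8 for mu in np.linalg.eigvals(K)))
```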

Thus, $H$ has $n$ eigenvalues (counted with multiplicity) with negative real part. Because the eigenvalues of $H$ come in $\pm$ pairs, $H$ has $n$ further eigenvalues with positive real part. Since $H$ has only $2n$ eigenvalues in total, every eigenvalue of $H$ has either negative or positive real part, so none of them can have zero real part.

In particular, we conclude that none of the eigenvalues of $H$ can be $0$.
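Finally, a short sketch (again with a random, generically reachable and observable system) confirming the $n/n$ split of the spectrum and that $0$ is not an eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, p = 3, 2, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
R = np.eye(m)

H = np.block([[A, -B @ np.linalg.inv(R) @ B.T],
              [-C.T @ C, -A.T]])
eigH = np.linalg.eigvals(H)

# n eigenvalues in each open half-plane, none on the imaginary axis ...
print(int(np.sum(eigH.real < 0)), int(np.sum(eigH.real > 0)))   # expect n, n
print(np.min(np.abs(eigH.real)) > 1e-10)
# ... and in particular 0 is not an eigenvalue, so H is nonsingular.
print(np.min(np.abs(eigH)) > 1e-10)
```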
