Eigenvalues 1 and -1 – Why for This Matrix?

eigenvalues, eigenvector, lie-algebras, linear-algebra, matrices

This is a subject I have been working on for a very long time now, but I still have not managed to fully understand the interesting properties of this matrix $\mathbf{A}$.

First, let's define two matrices:

  • $\mathbf{N}$ is the following matrix:
    \begin{equation}
    \mathbf{N}=\begin{bmatrix} \mathbf{I}_n & \mathbf{0}_n \\ \mathbf{0}_n & \mathbf{P}^{-1}\begin{bmatrix}1 & && \\ & \ddots && \\ & & 1& \\ &&& -1 \end{bmatrix}\mathbf{P} \end{bmatrix}\in\mathbb{R}^{2n\times2n}
    \end{equation}
    where $\mathbf{P}\in\mathbb{R}^{n\times n}$ is any invertible matrix.

  • with $\omega_i>0$ and $t>0$, $\mathbf{S}(t)$ is the following block matrix (each block is an $n\times n$ diagonal matrix):
    \begin{equation}
    \mathbf{S}(t)=\begin{bmatrix} \begin{bmatrix} \cos(\omega_1t) & \\& \ddots & \\ & & \cos(\omega_n t) \end{bmatrix} & \begin{bmatrix} \dfrac{\sin(\omega_1t)}{\omega_1} & \\& \ddots & \\ & & \dfrac{\sin(\omega_nt)}{\omega_n} \end{bmatrix} \\
    \begin{bmatrix} -\omega_1 \sin(\omega_1t) & \\& \ddots & \\ & & -\omega_n\sin(\omega_n t) \end{bmatrix} & \begin{bmatrix} \cos(\omega_1t) & \\& \ddots & \\ & & \cos(\omega_n t) \end{bmatrix}\end{bmatrix}\in\mathbb{R}^{2n\times2n}
    \end{equation}

The eigenvalues of $\mathbf N$ are of course $1$ (multiplicity $2n-1$) and $-1$ (multiplicity $1$). The eigenvalues of $\mathbf{S}(t)$, which is a matrix exponential, are the $n$ complex-conjugate pairs $(\exp(i\omega_jt),\overline{\exp(i\omega_jt)})$.
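
For concreteness, here is a small NumPy sketch that builds $\mathbf N$ and $\mathbf S(t)$ and checks these spectra numerically (the values of $n$, $\omega_j$, $\mathbf P$ and $t$ are arbitrary choices of mine, not taken from the Mathematica file mentioned below):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
omega = rng.uniform(0.5, 2.0, size=n)        # omega_i > 0
P = rng.standard_normal((n, n))              # generic invertible P
t = 1.3

# N = blkdiag(I_n, P^{-1} diag(1, ..., 1, -1) P)
D = np.diag(np.r_[np.ones(n - 1), -1.0])
N = np.block([[np.eye(n),         np.zeros((n, n))],
              [np.zeros((n, n)),  np.linalg.solve(P, D @ P)]])

# S(t): four n x n diagonal blocks
c, s = np.cos(omega * t), np.sin(omega * t)
S = np.block([[np.diag(c),          np.diag(s / omega)],
              [np.diag(-omega * s), np.diag(c)]])

print(np.sort(np.linalg.eigvals(N).real))     # 2n-1 eigenvalues at 1, one at -1
print(np.sort_complex(np.linalg.eigvals(S)))  # the pairs exp(+-i omega_j t)
```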

Now we can define, $\forall t>0$, $\mathbf{A}(t)=\mathbf N\mathbf S(t)$. We know that the product of the eigenvalues of $\mathbf{A}(t)$, i.e. its determinant, is the product of those of $\mathbf{N}$ and $\mathbf S(t)$, i.e. $-1$.
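
Spelling this determinant out (the four blocks of $\mathbf S(t)$ are diagonal, hence commute, so the block-determinant formula applies):
\begin{equation}
\det\mathbf A(t)=\det\mathbf N\cdot\det\mathbf S(t)
=(-1)\cdot\prod_{j=1}^{n}\Bigl(\cos^2(\omega_j t)+\frac{\sin(\omega_j t)}{\omega_j}\,\omega_j\sin(\omega_j t)\Bigr)=-1.
\end{equation}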

I observe an interesting property but cannot prove it or explain where it stems from (a numerical check is sketched right after this list):

  • $1$ and $-1$ are eigenvalues of $\mathbf{A}(t)$ ($\forall t$);
  • $1$ and $-1$ are $\color{red}{\text{not}}$ eigenvalues of $\mathbf{A}(t_2)\mathbf{A}(t)$ ($\forall t,t_2$, except maybe for specific values of $\mathbf P$ and $\omega_k$);
  • $1$ and $-1$ are eigenvalues of $\mathbf{A}(t_3)\mathbf{A}(t_2)\mathbf{A}(t)$ ($\forall t,t_2,t_3$);
  • $1$ and $-1$ are $\color{red}{\text{not}}$ eigenvalues of $\mathbf{A}(t_4)\mathbf{A}(t_3)\mathbf{A}(t_2)\mathbf{A}(t)$ ($\forall t,t_2,t_3,t_4$, except maybe for specific values of $\mathbf P$ and $\omega_k$);
  • $\dots$
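
Numerically the pattern looks like this (again an illustrative sketch with random $\omega_j$, $\mathbf P$ and $t_i$; this is not the original Mathematica code):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(1)
n = 3
omega = rng.uniform(0.5, 2.0, size=n)
P = rng.standard_normal((n, n))

D = np.diag(np.r_[np.ones(n - 1), -1.0])
N = np.block([[np.eye(n),        np.zeros((n, n))],
              [np.zeros((n, n)), np.linalg.solve(P, D @ P)]])

def A(t):
    """A(t) = N S(t), with S(t) as defined in the question."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    S = np.block([[np.diag(c),          np.diag(s / omega)],
                  [np.diag(-omega * s), np.diag(c)]])
    return N @ S

for m in range(1, 6):
    M = reduce(np.matmul, [A(t) for t in rng.uniform(0.1, 5.0, size=m)])
    ev = np.linalg.eigvals(M)
    print(m, np.min(np.abs(ev - 1)), np.min(np.abs(ev + 1)))
# the distances to +1 and -1 are ~1e-15 for odd m and O(1) for even m
```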

I managed to prove $1$ and $-1$ are eigenvalues of $\mathbf{A}(t)$ by considering $\mathbf{S}(t)\pm\operatorname{diag}(1,\dots,1,-1,\dots,1)$, calculating its kernel, and building the appropriate vectors (without having to calculate them explicitly).
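
One way to phrase that reduction (my reformulation of the step described above, not necessarily the exact computation of the original proof): since $\mathbf N^2=\mathbf I_{2n}$ and $\det\mathbf N=-1$,
\begin{equation}
\det\bigl(\mathbf A(t)\mp\mathbf I_{2n}\bigr)=\det\bigl(\mathbf N(\mathbf S(t)\mp\mathbf N)\bigr)=-\det\bigl(\mathbf S(t)\mp\mathbf N\bigr),
\end{equation}
so $\pm1$ is an eigenvalue of $\mathbf A(t)$ exactly when $\mathbf S(t)\mp\mathbf N$ has a nontrivial kernel.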

Also, I understand that the product of the eigenvalues of $\mathbf{A}(t_2)\mathbf{A}(t)$ is 1, while that of $\mathbf{A}(t_3)\mathbf{A}(t_2)\mathbf{A}(t)$ is -1, but that does not prove anything.
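
The same determinant argument for an arbitrary number of factors gives only the sign of the product of all $2n$ eigenvalues,
\begin{equation}
\det\Bigl(\prod_{i=1}^{m}\mathbf A(t_i)\Bigr)=\prod_{i=1}^{m}\det\mathbf N\,\det\mathbf S(t_i)=(-1)^m,
\end{equation}
which is consistent with the observation but says nothing about whether $+1$ and $-1$ themselves appear in the spectrum.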


Questions

1) Any suggestion to prove the observation above would be very welcome: why, apparently, are $1$ and $-1$ eigenvalues of $\prod_{i=1}^m \mathbf A(t_i)$ if and only if $m$ is odd?

2) Also, I have the impression that there exists a powerful mathematical framework for studying these matrices, but I cannot figure out which one, as I am not a mathematician. Lie algebras, because $\mathbf S(t)$ is a matrix exponential? Galois groups, because the eigenvalues come in complex-conjugate pairs? Zariski topology, because @loup blanc mentioned it (see the end of the answer)?


A simple Mathematica file to reproduce the results is available here. Just play with the arguments of calculateEigenvals to change the dimension $n$ and/or the exponent $m$ (to prove: $1,-1$ are eigenvalues iff $m$ is odd).

Note that I have already asked this question on math.SX but have not received any answer.

Best Answer

First, one should conjugate all matrices by $$ \begin{pmatrix} \operatorname{diag}(\omega_1,\dots,\omega_n) & 0 \\ 0 & I_n \end{pmatrix} $$ as this converts $S(t)$ to a rotation matrix while leaving the reflection $N$ unchanged.
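
Spelled out, with $\Omega=\operatorname{diag}(\omega_1,\dots,\omega_n)$, $C=\operatorname{diag}(\cos\omega_jt)$ and $\Sigma=\operatorname{diag}(\sin\omega_jt)$ (my notation, not in the original answer), the conjugation reads
$$ \begin{pmatrix} \Omega & 0 \\ 0 & I_n \end{pmatrix} \begin{pmatrix} C & \Sigma\Omega^{-1} \\ -\Omega\Sigma & C \end{pmatrix} \begin{pmatrix} \Omega^{-1} & 0 \\ 0 & I_n \end{pmatrix} = \begin{pmatrix} C & \Sigma \\ -\Sigma & C \end{pmatrix}, $$
because diagonal matrices commute, while the block-diagonal $N$ is visibly unchanged by it.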

The matrix $P^{-1} \operatorname{diag}(1,\dots,1,-1) P$ has a line as its -1 eigenspace and a hyperplane as its +1 eigenspace. By a limiting argument we may take $P$ to be in general position, so that the -1 eigenspace is not contained in any coordinate hyperplane and the +1 eigenspace does not contain any coordinate line. Then after applying a further conjugation by a diagonal matrix in the lower $n \times n$ block, we can arrange for the -1 and +1 eigenspaces to be orthogonal, without affecting $A(t)$.

After all these conjugations, $S(t)$ is now orthogonal and orientation preserving, while $N$ is orthogonal and orientation reversing. It is then clear that the product of any odd number of the $A(t)$ is an orientation-reversing orthogonal matrix. Such matrices have spectrum on the unit circle, symmetric with respect to conjugation, and multiplying to $-1$, hence must have an odd number of eigenvalues at $+1$ and also at $-1$. (If instead one multiplies an even number of the $A(t)$ together, one obtains an even number of eigenvalues at $-1$ and at $+1$, but generically one expects that number to be zero for both.)
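
A quick way to see this mechanism numerically (a sketch under the simplifying assumption that $P$ is orthogonal, so that $N$ is already an orthogonal reflection and the extra conjugations described above are not needed; variable names are mine):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(2)
n = 3
omega = rng.uniform(0.5, 2.0, size=n)
P, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthogonal P (simplifying assumption)
W = np.block([[np.diag(omega),   np.zeros((n, n))],
              [np.zeros((n, n)), np.eye(n)]])      # the conjugating matrix from the answer

D = np.diag(np.r_[np.ones(n - 1), -1.0])
N = np.block([[np.eye(n),        np.zeros((n, n))],
              [np.zeros((n, n)), P.T @ D @ P]])    # orthogonal reflection, det = -1

def A_conj(t):
    """W A(t) W^{-1}: an orthogonal matrix with det = -1 (reflection times rotation)."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    S = np.block([[np.diag(c),          np.diag(s / omega)],
                  [np.diag(-omega * s), np.diag(c)]])
    return W @ (N @ S) @ np.linalg.inv(W)

for m in (1, 2, 3, 4):
    M = reduce(np.matmul, [A_conj(t) for t in rng.uniform(0.1, 5.0, size=m)])
    orth_err = np.linalg.norm(M.T @ M - np.eye(2 * n))   # ~1e-15: M is orthogonal
    ev = np.linalg.eigvals(M)
    print(m, round(np.linalg.det(M)), orth_err,
          np.min(np.abs(ev - 1)), np.min(np.abs(ev + 1)))
# det = (-1)^m, and the eigenvalue distances to +1 and -1 vanish only for odd m
```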
