Linear Span of Special Orthogonal Matrices – Comprehensive Analysis

Tags: matrices, matrix-analysis, matrix-theory, orthogonal-matrices

(Disclaimer: I know very well that $SO(N)$ has a Lie algebra of dimension $N(N-1)/2$, etc. That is absolutely not the point of my question.)

To make my problem more understandable, I start with the example of $SO(2)$. All $SO(2)$ matrices $M$ can be written as ($\theta\in [0,2\pi[$)
$$
M=\begin{pmatrix}\cos\theta & \sin\theta\\ -\sin\theta&\cos\theta\end{pmatrix}.
$$
Using the basis of $2\times2$ real matrices
$\sigma_0=\begin{pmatrix}1 & 0\\0&1\end{pmatrix}$, $\sigma_1=\begin{pmatrix}0 & 1\\1&0\end{pmatrix}$,
$\sigma_2=\begin{pmatrix}0 & 1\\-1&0\end{pmatrix}$,
$\sigma_3=\begin{pmatrix}1 & 0\\0&-1\end{pmatrix}$,
one finds that

$$M=\cos\theta\;\sigma_0+\sin\theta\;\sigma_2.$$
Clearly, $M$ does not have components along $\sigma_1$ and $\sigma_3$, so the dimension of the smallest linear subspace of $\mathrm{M}_2(\mathbb{R})$ that contains $SO(2)$ is $2$.
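A quick numerical sanity check of this claim (my own sketch, assuming NumPy is available): sample rotation matrices for random angles, flatten each into a vector of $\mathbb{R}^4$, and compute the rank of the stacked samples, which is exactly the dimension of the linear span of $SO(2)$ inside $\mathrm{M}_2(\mathbb{R})$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each row is a rotation matrix (cos t, sin t; -sin t, cos t) flattened
# into R^4; the rank of the stack is the dimension of span(SO(2)).
thetas = rng.uniform(0.0, 2.0 * np.pi, size=50)
samples = np.array([
    [np.cos(t), np.sin(t), -np.sin(t), np.cos(t)] for t in thetas
])
print(np.linalg.matrix_rank(samples))  # 2
```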

How to articulate this reasoning (for the cases $N>2$ in particular) is not completely clear to me. I guess we can say that the components along $\sigma_0$ and $\sigma_2$ are independent because $\cos\theta$ and $\sin\theta$ are linearly independent functions (in a functional-analysis sense).

Assuming that made sense, we can try to increase $N$. For example, an $SO(3)$ matrix can be written as
$$
M=\left(\begin{matrix}
\cos\varphi\cos\psi - \cos\theta\sin\varphi\sin\psi & -\cos\varphi\sin\psi - \cos\theta\sin\varphi\cos\psi & \sin\varphi\sin\theta\\
\sin\varphi\cos\psi + \cos\theta\cos\varphi\sin\psi & -\sin\varphi\sin\psi + \cos\theta\cos\varphi\cos\psi & -\cos\varphi\sin\theta\\
\sin\psi\sin\theta & \cos\psi\sin\theta & \cos\theta
\end{matrix}\right)\,
$$
with $(\varphi,\psi)\in [0,2\pi[^2$ and $\theta\in [0,\pi[$. Now, if I look at the matrix elements one by one, they are all independent in a functional sense.$^*$ Does that mean that the "dimension of the matrix space" that $SO(3)$ matrices live in is $9$? Is there a way to generalize this to arbitrary $N$?
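The same numerical sanity check works here (again my own sketch, assuming NumPy): build the matrix above for random Euler angles, flatten each sample into $\mathbb{R}^9$, and compute the rank of the stack.

```python
import numpy as np

def euler_matrix(phi, psi, theta):
    """The SO(3) matrix in the Euler-angle parametrization above."""
    cf, sf = np.cos(phi), np.sin(phi)
    cp, sp = np.cos(psi), np.sin(psi)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [cf * cp - ct * sf * sp, -cf * sp - ct * sf * cp,  sf * st],
        [sf * cp + ct * cf * sp, -sf * sp + ct * cf * cp, -cf * st],
        [sp * st,                 cp * st,                 ct],
    ])

rng = np.random.default_rng(1)
samples = []
for _ in range(100):
    phi, psi = rng.uniform(0.0, 2.0 * np.pi, size=2)
    theta = rng.uniform(0.0, np.pi)
    samples.append(euler_matrix(phi, psi, theta).ravel())

# Rank of the flattened samples = dimension of span(SO(3)) in M_3(R).
print(np.linalg.matrix_rank(np.array(samples)))  # 9
```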

In the end, does any of what I wrote above make any sense?


$^*$ It is slightly more subtle than that for the $\theta$ dependence because, in the end, I am interested in integrals over the Haar measure, which means one should look at $x=\cos\theta\in[-1,1]$. But $x$ and $\sqrt{1-x^2}$ are orthogonal, so all should be fine.

Best Answer

Elementary proof. The linear space $E$ spanned by $SO_n$ is the orthogonal complement of the set of matrices $M$ such that $\langle M,Q\rangle:={\rm Tr}(MQ)=0$ for every $Q\in SO_n$; we show that the only such $M$ is $0_n$ when $n\ge3$.

Let $M=SR$ be a polar decomposition, where $S\in Sym_n^+$ and $R\in O_n$. This decomposition is unique with $S\in SPD_n$ when $M$ is non-singular; in general it exists but might be non-unique.

If $R\in SO_n$, then $RQ$ runs over all of $SO_n$ as $Q$ does, so ${\rm Tr}(SQ)=0$ for every $Q\in SO_n$. Choosing $Q=I_n$, we get ${\rm Tr}\,S=0$, which implies $S=0_n$ because $S$ is positive semi-definite.

If on the contrary $R\in O_n^-$, then ${\rm Tr}(SQ)=0$ for every $Q\in O_n^-$. Diagonalize $S$ in an orthonormal basis; the diagonal matrix $D$ of eigenvalues satisfies ${\rm Tr}(DQ)=0$ for every $Q\in O_n^-$. Choosing $Q$ to be the reflection with respect to the hyperplane $x_j=0$, we obtain ${\rm Tr}\,D=2d_j$ for every $j$. Summing over $j$ gives $n\,{\rm Tr}\,D=2\,{\rm Tr}\,D$, whence ${\rm Tr}\,D=0$ and thus $d_j=0$ for every $j$ when $n\ge3$. This yields $M=0_n$, hence $E$ is the full space $M_n$.

When $n=2$, this argument gives only $d_1=d_2$, and we recover that $M$ is a non-negative multiple of a matrix in $O_2^-$.
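Both conclusions can be checked numerically (a sketch of mine, not part of the answer, assuming NumPy): sample matrices from $SO_n$ via a sign-fixed QR decomposition of a Gaussian matrix, flatten them into $\mathbb{R}^{n^2}$, and compute the rank of the stack. The rank should be $2$ for $n=2$ and the full $n^2$ for every $n\ge3$.

```python
import numpy as np

def random_special_orthogonal(n, rng):
    """Random SO(n) matrix: QR of a Gaussian matrix, with signs fixed
    so the distribution is uniform, then one column flipped if needed
    to force determinant +1."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    q *= np.sign(np.diag(r))      # scale columns to make QR unique
    if np.linalg.det(q) < 0:      # land in SO(n), not just O(n)
        q[:, 0] = -q[:, 0]
    return q

rng = np.random.default_rng(2)
for n in range(2, 6):
    samples = np.array([
        random_special_orthogonal(n, rng).ravel()
        for _ in range(3 * n * n)
    ])
    print(n, np.linalg.matrix_rank(samples))
# n = 2 gives rank 2; every n >= 3 gives the full n^2
```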
