Rotation of axes in arbitrary dimension

change-of-basis, linear-algebra, rotations

Given two orthonormal bases $(a_{1}, a_{2})$, $(b_{1}, b_{2})$ of $\mathbb{R}^{2}$, we can express one basis in terms of the other via a rotation angle $\theta$, i.e., we can write

$$\begin{bmatrix}
b_{1}\\
b_{2}
\end{bmatrix}
=
\begin{bmatrix}
\cos \theta & \sin \theta\\
-\sin \theta & \cos \theta
\end{bmatrix}
\begin{bmatrix}
a_{1}\\
a_{2}
\end{bmatrix}.
$$

I am wondering whether it is possible to generalize this formula to arbitrary dimension. Namely, suppose that we have two orthonormal bases $(a_{1}, \dotsc, a_{n})$, $(b_{1}, \dotsc, b_{n})$ of $\mathbb{R}^{n}$. Is it then possible to express one basis in terms of the other and a set of rotation angles? If so, how?

EDIT: My first sentence is wrong unless the two bases have the same orientation.
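
For concreteness, the $2 \times 2$ relation above can be checked numerically. A minimal sketch using NumPy (the angle $\theta = 0.7$ and the standard basis are arbitrary choices for illustration; any orthonormal pair works):

```python
import numpy as np

theta = 0.7
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # an orthonormal pair

# Each b_i is the combination of the a_j given by the rotation matrix rows.
b1 = np.cos(theta) * a1 + np.sin(theta) * a2
b2 = -np.sin(theta) * a1 + np.cos(theta) * a2

# (b1, b2) is again an orthonormal basis.
assert np.isclose(b1 @ b1, 1.0) and np.isclose(b2 @ b2, 1.0)
assert np.isclose(b1 @ b2, 0.0)
```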

Best Answer

The best generalization of a rotation in higher dimensions is an orthogonal matrix $A$. Note that such a matrix may include a reflection, which swaps the chirality of the basis (right-handed to left-handed and vice versa); this is easily detected by the sign of the determinant (see below). Such a matrix really should be called an orthonormal matrix, but the naming is historical. An orthogonal matrix preserves a non-degenerate symmetric bilinear form (equivalently, by the polarization identity, a non-degenerate quadratic form) on $\mathbb{R}^n$. In other words, it preserves distances and hence angles between vectors: $$ \| Av \| = \| v \| \quad\text{and}\quad \langle Av, Aw \rangle = \langle v, w \rangle $$ for all $v, w \in \mathbb{R}^n$.

These matrices form a group called the orthogonal group $\operatorname{O}(n) = \operatorname{O}(n, \mathbb{R})$, which is a subgroup of the general linear group $\operatorname{GL}(n) = \operatorname{GL}(n, \mathbb{R})$, consisting of all invertible matrices.

If we assume that the form is the standard dot product of vectors in $\mathbb{R}^n$, then the condition that characterizes orthogonal matrices is: $$ A^\top A = I, $$ where $A^\top$ is the matrix transpose and $I$ is the $n \times n$ identity matrix. Equivalently, this means that for orthogonal matrices, $$ A^{-1} = A^\top. $$ Also, as a consequence $(\det A)^2 = 1$, so $\det A = \pm 1$.
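
As a quick numerical sanity check of these identities (a sketch assuming NumPy; the QR decomposition of a random Gaussian matrix is just a convenient way to produce a random orthogonal matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
# QR decomposition of a random matrix yields an orthogonal factor A.
A, _ = np.linalg.qr(rng.standard_normal((4, 4)))

# A^T A = I characterizes orthogonality.
assert np.allclose(A.T @ A, np.eye(4))
# Consequently A^{-1} = A^T ...
assert np.allclose(np.linalg.inv(A), A.T)
# ... and det A = ±1.
assert np.isclose(abs(np.linalg.det(A)), 1.0)
```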


If $A = (a_1, \dots, a_n)$ and $B = (b_1, \dots, b_n)$ are two orthonormal bases, then the matrices with these vectors as columns (also denoted $A$ and $B$) are orthogonal. This also means that, e.g., $A$ is the change-of-basis matrix from coordinates with respect to the $A$-basis to coordinates with respect to the standard basis (the $I$-basis). Hence, the matrix product $B^{-1} A = B^\top A$ converts $A$-coordinates to $B$-coordinates, and is also a member of $\operatorname{O}(n)$.
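
This change of coordinates can be verified directly. A sketch (assuming NumPy; the bases are random orthogonal matrices generated via QR for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# Two random orthonormal bases, stored as columns of orthogonal matrices.
A, _ = np.linalg.qr(rng.standard_normal((n, n)))
B, _ = np.linalg.qr(rng.standard_normal((n, n)))

x = rng.standard_normal(n)   # coordinates of a vector w.r.t. the A-basis
v = A @ x                    # the same vector in standard coordinates
y = B.T @ A @ x              # its coordinates w.r.t. the B-basis

assert np.allclose(B @ y, v)            # reconstructing v from B-coordinates
C = B.T @ A
assert np.allclose(C.T @ C, np.eye(n))  # C is itself in O(n)
```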

What does such a matrix look like? A composition of rotations and possibly a reflection (a reflection occurs when $\det A = -1$, meaning that $A$ is orientation-reversing). Rotations happen in $2$-dimensional subspaces, so define $k \in \mathbb{N}$ by $$ k = \biggl\lfloor \frac{n}{2} \biggr\rfloor, $$ i.e. $n = 2k$ if $n$ is even and $n = 2k + 1$ if $n$ is odd. There is always an orthonormal basis of $\mathbb{R}^n$ such that your given orthogonal matrix is similar to an orthogonal matrix with at most $k$ many $2 \times 2$ rotation blocks (such as in your question statement) and the remaining diagonal entries $\pm 1$, fixing vectors or reflecting them to their negatives. If you insist, you can take a pair of $+1$ or $-1$ diagonal entries and consider it to be a $2 \times 2$ rotation block as well, with rotation angle $0$ or $\pi$, respectively. This is a canonical form for elements of $\operatorname{O}(n)$. In the special case $n = 3$ with an orientation-preserving change of basis, we recover Euler's rotation theorem: every such map is a rotation about a fixed axis.
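
The rotation angles of these $2 \times 2$ blocks can be read off from the eigenvalues: an orthogonal matrix has eigenvalues on the unit circle, with each rotation block contributing a conjugate pair $e^{\pm i\theta}$ and each diagonal $\pm 1$ contributing a real eigenvalue. A sketch (assuming NumPy; the matrix is again a random element of $\operatorname{O}(n)$):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
C, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a random element of O(n)

eigvals = np.linalg.eigvals(C)
# All eigenvalues lie on the unit circle.
assert np.allclose(np.abs(eigvals), 1.0)

# One angle per conjugate pair e^{±iθ}, i.e. per 2x2 rotation block;
# there are at most k = floor(n/2) of them.
angles = np.angle(eigvals[eigvals.imag > 1e-9])
assert len(angles) <= n // 2
```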

Putting this all together: if $C = B^\top A$ is the change-of-basis matrix, then we can find $P \in \operatorname{O}(n)$ such that $P^\top C P = (P^\top B^\top)(A P) = (B')^\top A'$ is in this canonical form, consisting of a composition of at most $k$ rotations and zero or one reflection.
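
Such a $P$ can be computed in practice with the real Schur decomposition: since $C$ is orthogonal (hence normal), its real Schur form is block diagonal, with $2 \times 2$ rotation blocks and $\pm 1$ diagonal entries, which is exactly the canonical form above. A sketch assuming SciPy's `scipy.linalg.schur`:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(2)
n = 5
A, _ = np.linalg.qr(rng.standard_normal((n, n)))
B, _ = np.linalg.qr(rng.standard_normal((n, n)))
C = B.T @ A  # change-of-basis matrix, an element of O(n)

# Real Schur decomposition: C = P T P^T with P orthogonal.
T, P = schur(C, output='real')
assert np.allclose(P @ T @ P.T, C)
assert np.allclose(P.T @ P, np.eye(n))

# Because C is normal, T is block diagonal (up to rounding error):
# everything above the first superdiagonal vanishes.
assert np.allclose(np.triu(T, 2), 0.0)
```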
