It's clear geometrically that if you have two vectors in $\mathbb{R}^3$, a rotation will preserve their lengths and the angle between them. But how do you visualize that a rotation preserves orientation? What is the geometric explanation for why the change of basis matrix, from one orthonormal basis to another, has determinant $1$?
[Math] Visualizing why rotations preserve orientation
linear-algebra, rotations, visualization
Related Solutions
Others have raised some good points, and a definite answer really depends on what kind of linear transformation we want to call a rotation or a reflection.
For me a reflection (maybe I should call it a simple reflection?) is a reflection with respect to a subspace of codimension 1. So in $\mathbf{R}^n$ you get these by fixing a subspace $H$ of dimension $n-1$. The reflection $s_H$ w.r.t. $H$ keeps the vectors of $H$ fixed (pointwise) and multiplies a vector perpendicular to $H$ by $-1$. If $\vec{n}\perp H$, $\vec{n}\neq0$, then $s_H$ is given by the formula
$$\vec{x}\mapsto\vec{x}-2\,\frac{\langle \vec{x},\vec{n}\rangle}{\|\vec{n}\|^2}\,\vec{n}.$$
The reflection $s_H$ has eigenvalue $1$ with multiplicity $n-1$ and eigenvalue $-1$ with multiplicity $1$, with respective eigenspaces $H$ and $\mathbf{R}\vec{n}$. Thus its determinant is $-1$. Therefore it geometrically reverses orientation (or handedness, if you prefer that term), and it is not a rigid body motion, in the sense that to apply this transformation to a rigid 3D body you would need to break the body into atoms (caveat: I don't know if this is the standard definition of a rigid body motion?). It does preserve lengths and angles between vectors.
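If it helps to see this concretely, here is a minimal numpy sketch (the helper name `reflection_matrix` is my own) that builds $s_H$ from a normal vector $\vec{n}$ and confirms the eigenvalues and the determinant $-1$:

```python
import numpy as np

def reflection_matrix(n_vec):
    """s_H: x -> x - 2 <x, n> / ||n||^2 * n, i.e. reflection across the
    hyperplane H orthogonal to n_vec, as the matrix I - 2 n n^T / <n, n>."""
    n_vec = np.asarray(n_vec, dtype=float)
    return np.eye(len(n_vec)) - 2.0 * np.outer(n_vec, n_vec) / (n_vec @ n_vec)

S = reflection_matrix([1.0, 2.0, 2.0])               # a reflection in R^3
print(np.round(np.linalg.det(S), 10))                # -1.0
print(np.round(np.sort(np.linalg.eigvalsh(S)), 10))  # [-1.  1.  1.]
```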
Rotations (by which I, too, simply mean orthogonal transformations with $\det=1$) have more variation. If $A$ is a rotation matrix, then Adam's calculation proving that lengths are preserved tells us that the eigenvalues must have absolute value $1$ (his calculation goes through for complex vectors and the Hermitian inner product). Therefore the complex eigenvalues lie on the unit circle and come in complex conjugate pairs. If $\lambda=e^{i\varphi}$ is a non-real eigenvalue, and $\vec{v}$ is a corresponding eigenvector (in $\mathbf{C}^n$), then the vector $\vec{v}^*$ obtained by componentwise complex conjugation is an eigenvector of $A$ belonging to the eigenvalue $\lambda^*=e^{-i\varphi}$. Consider the set $V_1$ of vectors of the form $z\vec{v}+z^*\vec{v}^*$. By the eigenvalue property this set is stable under $A$: $$A(z\vec{v}+z^*\vec{v}^*)=(\lambda z)\vec{v}+(\lambda z)^*\vec{v}^*.$$ Its elements are also fixed by componentwise complex conjugation, so $V_1\subseteq\mathbf{R}^n$. It is obviously a 2-dimensional subspace, in other words a plane. It is easy to guess and not difficult to prove that the restriction of the transformation $A$ to the subspace $V_1$ is a rotation by the angle $\varphi_1=\pm\varphi$. Note that we cannot determine the sign of the rotation (clockwise/counterclockwise), because we don't have a preferred handedness on the subspace $V_1$.
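These eigenvalue claims are easy to check numerically. A sketch using a quasi-random rotation obtained from a QR factorization (not a uniform sampler, just a convenient way to get an orthogonal matrix with $\det=1$):

```python
import numpy as np

# Orthonormalize a random matrix, then flip one column if needed so det = +1.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]

eig = np.linalg.eigvals(Q)
print(np.round(np.abs(eig), 10))          # all 1.0: eigenvalues on the unit circle
print(np.round(np.sort_complex(eig), 4))  # non-real eigenvalues in conjugate pairs
```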
The preservation of angles (see Adam's answer) shows that $A$ then maps the $(n-2)$-dimensional subspace $V_1^\perp$ to itself as well. Furthermore, the determinant of $A$ restricted to $V_1$ is equal to one, so the same holds for $V_1^\perp$. Thus we can apply induction and keep splitting off 2-dimensional summands $V_i,i=2,3,\ldots,$ such that on each summand $A$ acts as a rotation by some angle $\varphi_i$ (usually distinct from the preceding ones). We can keep doing this until only real eigenvalues remain, and end with the situation: $$ \mathbf{R}^n=V_1\oplus V_2\oplus\cdots\oplus V_m \oplus U, $$ where the 2D-subspaces $V_i$ are orthogonal to each other, $A$ rotates a vector in $V_i$ by the angle $\varphi_i$, and $A$ restricted to $U$ has only real eigenvalues.
Computing the determinant then shows that the multiplicity of $-1$ as an eigenvalue of $A$ restricted to $U$ is always even (the rotation blocks each have determinant one, so $A$ restricted to $U$ must also have determinant one). As a consequence we can also split that eigenspace into a sum of 2-dimensional planes, on each of which $A$ acts as rotation by 180 degrees (i.e. multiplication by $-1$). After that there remains the eigenspace belonging to the eigenvalue $+1$. The multiplicity of that eigenvalue is congruent to $n$ modulo $2$, so if $n$ is odd, then $\lambda=+1$ will necessarily be an eigenvalue. This is the ultimate reason why a rotation in 3D-space must have an axis, i.e. an eigenspace belonging to the eigenvalue $+1$.
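This splitting $\mathbf{R}^n=V_1\oplus\cdots\oplus V_m\oplus U$ is exactly what the real Schur decomposition computes. A scipy sketch: for an orthogonal input, the quasi-triangular Schur factor comes out block diagonal (up to rounding), with $2\times2$ rotation blocks for the $V_i$ and $\pm1$ diagonal entries for the action on $U$:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]

# Real Schur form: Q = Z T Z^T with Z orthogonal.  Since Q is orthogonal,
# T is orthogonal and quasi-triangular, hence block diagonal.
T, Z = schur(Q, output='real')
print(np.round(T, 4))
```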
From this we see:
- As Henning pointed out, we can continuously bring any rotation back to the identity mapping simply by scaling all the rotation angles $\varphi_i,i=1,\ldots,m,$ continuously to zero (see the sketch after this list). The same can be done on those summands of $U$ where $A$ acts as rotation by 180 degrees.
- If we want to define rotation in such a way that the set of rotations contains the elementary rotations described by Henning, and we also insist that the set of rotations is closed under composition, then the set must consist of all orthogonal transformations with $\det=1$. As a corollary, rotations preserve handedness. This point is moot if we define a rotation by simply requiring the matrix $A$ to be orthogonal with $\det=1$, but it does show the equivalence of two alternative definitions.
- If $A$ is an orthogonal matrix with $\det=-1$, then composing $A$ with a reflection with respect to any subspace $H$ of codimension one gives a rotation in the sense of this (admittedly semi-private) definition of a rotation.
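To make the first point concrete, here is a minimal numpy/scipy sketch of the continuous path back to the identity. It assumes a generic rotation (no eigenvalue exactly $-1$), so that the principal matrix logarithm is a real skew-symmetric matrix; scaling that logarithm scales every angle $\varphi_i$ simultaneously:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]

# Principal logarithm of a generic rotation: real and skew-symmetric.
K = logm(Q).real
for t in np.linspace(0.0, 1.0, 5):
    Qt = expm(t * K)  # rotation with angles t * phi_i; t=0 gives the identity
    print(f"t={t:.2f}  orthogonal={np.allclose(Qt.T @ Qt, np.eye(4))}"
          f"  det={np.linalg.det(Qt):.4f}")
```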
This is not a full answer in the sense that I can't give you an 'authoritative' definition of an $n$D-rotation. That is to some extent a matter of taste, and some might want to include only the simple rotations from Henning's answer, which "move" points of a 2D-subspace and keep its orthogonal complement pointwise fixed. Hopefully I managed to paint a coherent picture, though.
I too am unfamiliar with the term "orientation matrix". My guess is that this term is used to mean very different (unrelated) things in different contexts.
Also, as Gerry mentions, this is not a rotation matrix. To see this: A rotation matrix is a type of orthogonal matrix. A matrix $A$ is orthogonal if $A^TA=I_n$ (i.e. its transpose is its inverse). This implies that the rows and columns of the matrix form an orthonormal basis for $\mathbb{R}^n$. This in turn implies that each row and column is a unit vector and clearly $[1\;1\;1]^T$ is not a unit vector (its length is $\sqrt{3}$). So it's definitely not a rotation matrix.
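For what it's worth, here is a minimal numpy sketch of that test; the helper name `is_rotation` and the entries of `M` are my own, made up for illustration (any matrix with a column $[1\;1\;1]^T$ fails the orthogonality check):

```python
import numpy as np

def is_rotation(A, tol=1e-10):
    """Orthogonal (A^T A = I) with determinant +1."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return (np.allclose(A.T @ A, np.eye(n), atol=tol)
            and np.isclose(np.linalg.det(A), 1.0, atol=tol))

M = np.array([[1.0, 1.0, 1.0],   # first column is [1, 1, 1]^T
              [1.0, 2.0, 4.0],
              [1.0, 3.0, 9.0]])
print(is_rotation(M))            # False
print(np.linalg.norm(M[:, 0]))   # 1.732... = sqrt(3), not a unit vector
```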
Now what is the deal with this matrix? It's not obvious from the definition (to me anyway) what's going on, but with a slight change in perspective the purpose becomes clear.
Consider $A=(x_A,y_A)$, $B=(x_B,y_B)$, and $C=(x_C,y_C)$. Then $\vec{BA}=A-B=(x_A-x_B,y_A-y_B)$ and $\vec{BC}=C-B=(x_C-x_B,y_C-y_B)$. If we form the cross product $\vec{BA}\times\vec{BC}$ (viewing these plane vectors as vectors in $3$-space with zero third component), then the cross product is $\vec{0}$ if $A$, $B$, $C$ are collinear. The cross product points "upward" if the points are arranged clockwise and downward if the points are arranged counter-clockwise. So if we dot the cross product with the vector $\vec{k}=[0\;0\;1]^T$, we'll get a positive answer if clockwise and a negative one if counter-clockwise.
Now keep in mind that dotting the cross product with $\vec{k}$ is the same as computing the determinant of the $3\times 3$ matrix $$ \begin{bmatrix} 0 & 0 & 1 \\ x_A-x_B & y_A-y_B & 0 \\ x_C-x_B & y_C-y_B & 0 \end{bmatrix}$$
How does this connect back to the so-called orientation matrix? Let's work with its transpose (since the matrix and its transpose have the same determinant this doesn't matter): $$\begin{bmatrix} 1 & 1 & 1 \\ x_A & x_B & x_C \\ y_A & y_B & y_C \end{bmatrix} \sim \begin{bmatrix} 0 & 1 & 1 \\ x_A-x_B & x_B & x_C \\ y_A-y_B & y_B & y_C \end{bmatrix} \sim \begin{bmatrix} 0 & 1 & 0 \\ x_A-x_B & x_B & x_C-x_B \\ y_A-y_B & y_B & y_C-y_B \end{bmatrix} \sim$$ $$\begin{bmatrix} 0 & 1 & 0 \\ x_A-x_B & 0 & x_C-x_B \\ y_A-y_B & y_B & y_C-y_B \end{bmatrix} \sim \begin{bmatrix} 0 & 1 & 0 \\ x_A-x_B & 0 & x_C-x_B \\ y_A-y_B & 0 & y_C-y_B \end{bmatrix}$$
So far I've just added multiples of rows and columns to other rows and columns (this does not change the determinant). Now I need to swap columns (this changes the sign of the determinant): $$\begin{bmatrix} 0 & 0 & 1 \\ x_A-x_B & x_C-x_B & 0 \\ y_A-y_B & y_C-y_B & 0 \end{bmatrix}$$
Finally we can replace the submatrix (the $2 \times 2$ matrix in the lower left-hand corner) with its transpose. This doesn't change the determinant.
In the end we have that if the determinant of the orientation matrix is positive, then the determinant of my original matrix (the one computing $\vec{k}$ dotted with the cross product) is negative, and vice versa.
Thus the points are arranged in a clockwise fashion if the determinant of the orientation matrix is negative and counter-clockwise if it's positive.
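To summarize the whole computation in runnable form, here is a small numpy sketch (the helper name `orientation` is mine):

```python
import numpy as np

def orientation(A, B, C):
    """Sign of det [[1,1,1],[xA,xB,xC],[yA,yB,yC]]:
    +1 -> counter-clockwise, -1 -> clockwise, 0 -> collinear."""
    M = np.array([[1.0,  1.0,  1.0],
                  [A[0], B[0], C[0]],
                  [A[1], B[1], C[1]]])
    return np.sign(np.linalg.det(M))

print(orientation((0, 0), (1, 0), (0, 1)))  # 1.0  (counter-clockwise)
print(orientation((0, 0), (0, 1), (1, 0)))  # -1.0 (clockwise)
print(orientation((0, 0), (1, 1), (2, 2)))  # 0.0  (collinear)
```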
There's probably a more efficient route to the answer but this is what came to mind first. I hope it helps! :)
Related Questions
- [Math] calculating the orientation of an object
- [Math] What exactly does a rotation preserve
- [Math] Prove the orthogonal matrix with determinant 1 is a rotation
- [Math] geometric view of similar vs congruent matrices
- When do Linear Transformations NOT preserve angles between vectors? Doesn’t the SVD tell us all linear transformations preserve angles
Best Answer
The way that I picture orientation (for a 2d object) is like this. Say you're given two noncollinear arrow vectors. Translate those two vectors so that their tails are touching, like this: [figure: two vectors $A$ and $B$ drawn with their tails meeting at the origin]
Now consider the parallelogram that can be formed by those two vectors. Place a particle on the boundary of this parallelogram and constrain it to stay on the boundary and to traverse the boundary at some constant (and nonzero) angular rate. So if it starts at the origin (where the tails meet), it only has two choices: start moving along $A$ or start moving along $B$. These represent the two different orientations that this planar object can potentially have.
Now consider: if you were to rotate this parallelogram to some other position in $3$-space, would that affect the direction the particle moves? (If the particle were moving in what we'll call the "$A$" direction to begin with, would it suddenly start moving in the "$B$" direction once we rotate the entire parallelogram?)
That's how I visualize it.