To answer your second question first: an orthogonal matrix $O$ satisfies $O^TO=I$, so $\det(O^TO)=(\det O)^2=1$, and hence $\det O = \pm 1$. The determinant of a matrix tells you by what factor the (signed) volume of a parallelepiped is multiplied when you apply the matrix to its edges; therefore hitting a volume in $\mathbb{R}^n$ with an orthogonal matrix either leaves the volume unchanged (so it is a rotation) or multiplies it by $-1$ (so it is a reflection).
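For concreteness, here is a quick numerical sketch of that determinant fact (the example and variable names are mine, using numpy; a random orthogonal matrix is obtained as the $Q$ factor of a QR decomposition):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random orthogonal matrix: the Q factor of a QR decomposition is orthogonal.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

print(np.allclose(Q.T @ Q, np.eye(4)))   # True: Q^T Q = I
print(np.linalg.det(Q))                  # +1.0 or -1.0 (up to rounding)

# The determinant scales signed volumes: the unit cube is mapped to a
# parallelepiped of volume |det Q| = 1.
print(abs(np.linalg.det(Q)))             # 1.0
```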
To answer your first question: the action of a matrix $A$ can be neatly expressed via its singular value decomposition, $A=U\Lambda V^T$, where $U$, $V$ are orthogonal matrices and $\Lambda$ is a matrix with non-negative values along the diagonal (n.b. this makes sense even if $A$ is not square!). The values on the diagonal of $\Lambda$ are called the singular values of $A$, and if $A$ is square and symmetric they are the absolute values of its eigenvalues.
The way to think about this is that the action of $A$ is first to rotate/reflect to a new basis, then scale along the directions of your new (intermediate) basis, before a final rotation/reflection.
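To make the decomposition concrete, here is a small numpy sketch (the $2\times 3$ matrix is an arbitrary example of mine) that computes the SVD of a non-square matrix and checks each piece:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])            # a 2x3 example, so A is not square

U, s, Vt = np.linalg.svd(A)                # A = U @ diag(s) @ Vt, with Vt = V^T

# U and V are orthogonal, the singular values are non-negative.
print(np.allclose(U.T @ U, np.eye(2)))     # True
print(np.allclose(Vt @ Vt.T, np.eye(3)))   # True
print(s)                                   # non-negative, in decreasing order

# Reconstruct A: rotate/reflect (V^T), scale by s, rotate/reflect again (U).
Lambda = np.zeros_like(A)
np.fill_diagonal(Lambda, s)
print(np.allclose(U @ Lambda @ Vt, A))     # True
```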
With this in mind, notice that $A^T=V\Lambda^T U^T$, so the action of $A^T$ is to perform the inverse of the final rotation, then scale the new shape along the canonical unit directions, and then apply the inverse of the original rotation.
Furthermore, when $A$ is symmetric, the spectral theorem gives $A=QDQ^T$ with $Q$ orthogonal and $D$ real diagonal (note that $U=V$ in the SVD need not hold when $A$ has negative eigenvalues, since $\Lambda$ must be non-negative); therefore the action of a symmetric matrix can be regarded as a rotation to a new basis, then scaling in this new basis (possibly by negative factors), and finally rotating back to the first basis.
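As a sanity check (a numpy sketch; the symmetric example matrix is mine), the eigendecomposition of a symmetric matrix already has this "rotate, scale, rotate back" form, and the singular values are the absolute values of the eigenvalues:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, -3.0]])               # symmetric, with one negative eigenvalue

eigvals, Q = np.linalg.eigh(S)            # S = Q @ diag(eigvals) @ Q.T, Q orthogonal
print(np.allclose(Q @ np.diag(eigvals) @ Q.T, S))   # True: rotate, scale, rotate back

singular_values = np.linalg.svd(S, compute_uv=False)
print(np.allclose(np.sort(np.abs(eigvals)), np.sort(singular_values)))  # True
```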
Others have raised some good points, and a definite answer really depends on what kind of linear transformation we want to call a rotation or a reflection.
For me a reflection (maybe I should call it a simple reflection?) is a reflection with respect to a subspace of codimension 1. So in $\mathbf{R}^n$ you get these by fixing a subspace $H$ of dimension $n-1$. The reflection $s_H$ w.r.t. $H$ keeps the vectors of $H$ fixed (pointwise) and multiplies a vector perpendicular to $H$ by $-1$. If $\vec{n}\perp H$, $\vec{n}\neq0$, then $s_H$ is given by the formula
$$\vec{x}\mapsto\vec{x}-2\,\frac{\langle \vec{x},\vec{n}\rangle}{\|\vec{n}\|^2}\,\vec{n}.$$
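Here is a direct translation of this formula into numpy (a sketch; the function name `reflect` and the example vectors are mine). It also confirms the properties claimed below: the matrix of $s_H$ is orthogonal with determinant $-1$, flips $\vec{n}$, and fixes vectors in $H$:

```python
import numpy as np

def reflect(n):
    """Matrix of the reflection s_H across the hyperplane H with normal vector n."""
    n = np.asarray(n, dtype=float)
    return np.eye(len(n)) - 2.0 * np.outer(n, n) / np.dot(n, n)

n = np.array([1.0, 2.0, 2.0])
R = reflect(n)

print(np.allclose(R.T @ R, np.eye(3)))      # True: orthogonal
print(np.linalg.det(R))                     # -1.0
print(np.allclose(R @ n, -n))               # True: the normal is multiplied by -1
h = np.array([2.0, -1.0, 0.0])              # h is orthogonal to n, so it lies in H
print(np.dot(h, n), np.allclose(R @ h, h))  # 0.0  True: vectors in H are fixed
```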
The reflection $s_H$ has eigenvalue $1$ with multiplicity $n-1$ and eigenvalue $-1$ with multiplicity $1$ with respective eigenspaces $H$ and $\mathbf{R}\vec{n}$. Thus its determinant is $-1$. Therefore geometrically it reverses orientation (or handedness, if you prefer that term), and is not a rigid body motion in the sense that in order to apply that transformation to a rigid 3D body, you need to break it into atoms (caveat: I don't know if this is the standard definition of a rigid body motion?). It does preserve lengths and angles between vectors.
Rotations (by which I, too, mean simply orthogonal transformations with $\det=1$) have more variation. If $A$ is a rotation matrix, then Adam's calculation proving that lengths are preserved tells us that the eigenvalues must have absolute value $=1$ (his calculation goes through for complex vectors and the Hermitian inner product). Therefore the complex eigenvalues are on the unit circle and come in complex conjugate pairs. If $\lambda=e^{i\varphi}$ is a non-real eigenvalue, and $\vec{v}$ is a corresponding eigenvector (in $\mathbf{C}^n$), then the vector $\vec{v}^*$ gotten by componentwise complex conjugation is an eigenvector of $A$ belonging to eigenvalue $\lambda^*=e^{-i\varphi}$.
Consider the set $V_1$ of vectors of the form $z\vec{v}+z^*\vec{v}^*$. By the eigenvalue property this set is stable under $A$:
$$A(z\vec{v}+z^*\vec{v}^*)=(\lambda z)\vec{v}+(\lambda z)^*\vec{v}^*.$$ Its elements are also fixed by componentwise complex conjugation, so $V_1\subseteq\mathbf{R}^n$. It is obviously a 2-dimensional subspace, in other words a plane. It is easy to guess and not difficult to prove that the restriction of the transformation $A$ onto the subspace $V_1$ is a rotation by the angle $\varphi_1=\pm\varphi$. Note that we cannot determine the sign of the rotation (clockwise/ccw), because we don't have a preferred handedness on the subspace $V_1$.
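To see this concretely, here is a numpy sketch (the example rotation about the $z$-axis is mine): take a non-real eigenvalue of a 3D rotation, split a corresponding eigenvector into real and imaginary parts to get the invariant plane $V_1$, and read the rotation angle off the eigenvalue.

```python
import numpy as np

phi = 0.7
A = np.array([[np.cos(phi), -np.sin(phi), 0.0],
              [np.sin(phi),  np.cos(phi), 0.0],
              [0.0,          0.0,         1.0]])   # rotation by phi about the z-axis

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(np.abs(eigvals.imag))         # pick a non-real eigenvalue
lam, v = eigvals[k], eigvecs[:, k]

# V_1 is spanned by the real and imaginary parts of v; on it A acts as a
# rotation by +/- phi (the sign depends on which of the conjugate pair we picked).
p, q = v.real, v.imag
print(np.allclose(A @ p, np.cos(phi) * p - np.sin(phi) * q) or
      np.allclose(A @ p, np.cos(phi) * p + np.sin(phi) * q))   # True
print(np.isclose(abs(np.angle(lam)), phi))  # True: the angle is |arg(lambda)| = phi
```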
The preservation of angles (see Adam's answer) shows that $A$ then maps the $(n-2)$-dimensional subspace $V_1^\perp$ also to itself. Furthermore, the determinant of $A$ restricted to $V_1$ is equal to one, so the same holds for $V_1^\perp$. Thus we can apply induction and keep on splitting off 2-dimensional summands $V_i$, $i=2,3,\ldots$, such that on each summand $A$ acts as a rotation by some angle $\varphi_i$ (usually distinct from the preceding ones). We can keep doing this until only real eigenvalues remain, and end up with the situation:
$$
\mathbf{R}^n=V_1\oplus V_2\oplus\cdots\oplus V_m \oplus U,
$$
where the 2D-subspaces $V_i$ are orthogonal to each other, $A$ rotates a vector in $V_i$ by the angle $\varphi_i$, and $A$ restricted to $U$ has only real eigenvalues.
Computing the determinant then shows that the multiplicity of $-1$ as an eigenvalue of $A$ restricted to $U$ is always even. As a consequence we can also split that eigenspace into a sum of 2-dimensional planes, on which $A$ acts as rotation by 180 degrees (i.e., multiplication by $-1$). After that there remains the eigenspace belonging to eigenvalue $+1$. The multiplicity of that eigenvalue is congruent to $n$ modulo $2$, so if $n$ is odd, then $\lambda=+1$ will necessarily be an eigenvalue. This is the ultimate reason why a rotation in 3D-space must have an axis = eigenspace belonging to eigenvalue $+1$.
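A quick numerical check of this last claim (a numpy sketch; sampling a random rotation via QR is my choice here): a random $3\times 3$ rotation always has $+1$ among its eigenvalues, and the corresponding eigenvector is the axis.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random rotation: take the Q of a QR factorization and flip a column if det = -1.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                          # now det(Q) = +1, so Q is a rotation

eigvals, eigvecs = np.linalg.eig(Q)
k = np.argmin(np.abs(eigvals - 1.0))       # the eigenvalue closest to +1
print(np.isclose(eigvals[k], 1.0))         # True: +1 is always an eigenvalue in 3D
axis = eigvecs[:, k].real
print(np.allclose(Q @ axis, axis))         # True: the axis is fixed by the rotation
```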
From this we see:
- As Henning pointed out, we can continuously deform any rotation back to the identity mapping simply by scaling all the rotation angles $\varphi_i$, $i=1,\ldots,m$, continuously to zero. The same can be done on those summands of $U$ where $A$ acts as rotation by 180 degrees.
- If we want to define rotation in such a way that the set of rotations contains the elementary rotations described by Henning, and also insist that the set of rotations is closed under composition, then the set must consist of all orthogonal transformations with $\det=1$. As a corollary, rotations preserve handedness. This point is moot if we define a rotation simply by requiring the matrix $A$ to be orthogonal with $\det=1$, but it does show the equivalence of the two alternative definitions.
- If $A$ is an orthogonal matrix with $\det=-1$, then composing $A$ with a reflection w.r.t. any subspace $H$ of codimension one gives a rotation in the sense of this (admittedly semi-private) definition of a rotation; see the sketch below.
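A minimal numerical sketch of that last point (numpy; the random matrices are my example data): compose an orthogonal matrix of determinant $-1$ with a reflection across a hyperplane, and the result has determinant $+1$.

```python
import numpy as np

rng = np.random.default_rng(2)

# An orthogonal matrix with det = -1 (flip one column if necessary).
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
if np.linalg.det(Q) > 0:
    Q[:, 0] *= -1                              # force det(Q) = -1

# A reflection across an arbitrary hyperplane with normal n.
n = rng.standard_normal(4)
S = np.eye(4) - 2.0 * np.outer(n, n) / np.dot(n, n)

print(np.isclose(np.linalg.det(Q), -1.0))      # True
print(np.isclose(np.linalg.det(S @ Q), 1.0))   # True: the composition is a rotation
```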
This is not a full answer in the sense that I can't give you an 'authoritative' definition of an $n$D-rotation. That is to some extent a matter of taste, and some might want to include only the simple rotations from Henning's answer, which "move" points of a 2D-subspace and keep its orthogonal complement pointwise fixed. Hopefully I managed to paint a coherent picture, though.
Best Answer
Let $V$ be a finite-dimensional complex inner product space.
By the finite-dimensional spectral theorem, one has that $T \in L(V) := L(V,V)$ is normal if and only if there exists an orthonormal basis for $V$ consisting of eigenvectors of $T$. Geometrically, this means that $T$ acts on each individual coordinate axis (with respect to this orthonormal basis) by rescaling by a complex scaling factor (i.e., the eigenvalue for the eigenvector spanning that axis). In other words, $T$ is normal if and only if there exists some orthonormal coordinate system for $V$ such that $T$ fixes each coordinate axis, i.e., for each coordinate axis $\mathbb{C} v_k$, $T(\mathbb{C} v_k) \subseteq \mathbb{C} v_k$.
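One direction of this is easy to illustrate numerically (a numpy sketch; the orthonormal basis and eigenvalues are arbitrary choices of mine): build an operator from an orthonormal basis and complex eigenvalues, and check that it is normal and rescales each coordinate axis.

```python
import numpy as np

rng = np.random.default_rng(4)

# Build an operator from the data the spectral theorem describes:
# an orthonormal basis (columns of W) and arbitrary complex eigenvalues.
W, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
lam = np.array([2.0, 1j, -1.0 + 0.5j])
T = W @ np.diag(lam) @ W.conj().T

print(np.allclose(T @ T.conj().T, T.conj().T @ T))      # True: T is normal
# T rescales each coordinate axis C*w_k by the corresponding eigenvalue.
for k in range(3):
    print(np.allclose(T @ W[:, k], lam[k] * W[:, k]))   # True, True, True
```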
A $1$-parameter group of unitaries is a map $U : \mathbb{R} \to U(V)$, the group of unitaries in $L(V)$, such that $U(0) = I$ and $U(t_1+t_2) = U(t_1)U(t_2)$ for any $t_1$, $t_2$; geometrically, given a fixed orthonormal basis defining your initial orthonormal coordinate system, you can view $U$ as defining a time-dependent orthonormal coordinate system on $V$.
So, let $U : \mathbb{R} \to U(V)$ be a $1$-parameter group of unitaries. Observe that for $t \in \mathbb{R}$ and $h \neq 0$, $$ \frac{1}{h}(U(t+h)-U(t)) = \left(\frac{1}{h}\left(U(h)-I\right)\right)U(t), $$ so that $U$ is differentiable at $t$ if and only if it is differentiable at $0$. So, suppose that $U$ is differentiable at $0$. Then, using $U(h)^\ast = U(h)^{-1} = U(-h)$, $$ (U^\prime(0))^\ast = \left(\lim_{h \to 0}\frac{1}{h}\left(U(h)-I\right)\right)^\ast = \lim_{h\to 0}\frac{1}{h}\left(U(-h)-I\right) = -\lim_{k\to 0} \frac{1}{k} \left(U(k)-I\right) = -U^\prime(0), $$ so that $U(t)$ satisfies the ODE $$ U^\prime(t) = A U(t), $$ where $A := U^\prime(0)$ is skew-adjoint. Thus, from a geometric standpoint, skew-adjoint operators are precisely the infinitesimal generators of differentiable time-dependent orthonormal coordinate systems.
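Here is a small numerical illustration of this correspondence (a sketch using scipy's matrix exponential; the random generator is an arbitrary example of mine): exponentiating a skew-adjoint $A$ gives $U(t) = e^{tA}$, which is unitary and satisfies the group law.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)

# A skew-adjoint (anti-Hermitian) generator: A* = -A.
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = B - B.conj().T
print(np.allclose(A.conj().T, -A))                       # True

def U(t):
    """The 1-parameter group U(t) = e^{tA}."""
    return expm(t * A)

t1, t2 = 0.4, 1.1
print(np.allclose(U(t1).conj().T @ U(t1), np.eye(3)))    # True: each U(t) is unitary
print(np.allclose(U(t1 + t2), U(t1) @ U(t2)))            # True: U(t1+t2) = U(t1)U(t2)
print(np.allclose(U(0.0), np.eye(3)))                    # True: U(0) = I
```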
In fact, bearing in mind that $T$ is skew-adjoint if and only if $iT$ is self-adjoint (Hermitian), and that in mathematical physics and theoretical physics one generally uses $e^{iA}$ instead of $e^A$, this is essentially why self-adjoint operators (on infinite-dimensional Hilbert space!) become essential in quantum mechanics. This is because in quantum mechanics, the time evolution (in the so-called Heisenberg picture) is effected by a $1$-parameter group of unitaries whose infinitesimal generator is the all-important Hamiltonian of your quantum mechanical system.
Now, suppose that $V$ is a finite-dimensional real inner product space.
By the finite-dimensional spectral theorem one has that $T \in L(V)$ is symmetric (i.e., self-adjoint) if and only if there exists an orthonormal basis for $V$ consisting of eigenvectors. In other words, $T$ is symmetric if and only if there exists some orthonormal coordinate system for $V$ such that $T$ fixes each coordinate axis, i.e., for each coordinate axis $\mathbb{R}v_k$, $T(\mathbb{R}v_k) \subseteq \mathbb{R} v_k$. Observe that the notion of normal operator is no longer nearly as useful as in the complex case, since the finite-dimensional spectral theorem now characterizes precisely the symmetric operators. In particular, even though orthogonal operators are normal, examples of orthogonal operators with non-real complex eigenvalues are plentiful.
When it comes to skew-symmetric (i.e., skew-adjoint) operators, everything carries over exactly as before, except that we now find that they are precisely the infinitesimal generators of $1$-parameter groups of orthogonal operators, viz., time-dependent orthonormal coordinate systems. Note, however, that skew-symmetric and symmetric operators are now fundamentally different, since we can no longer multiply by $i$ to pass between skew-adjoint and self-adjoint operators, as we could in the complex case. In particular, symmetric operators necessarily have real eigenvalues, whilst examples of skew-symmetric operators with non-real complex eigenvalues are absolutely plentiful.
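The same sketch in the real case (numpy/scipy; the example generator and symmetric matrix are mine): a skew-symmetric matrix has purely imaginary eigenvalues and exponentiates to a rotation, while a symmetric matrix has real eigenvalues.

```python
import numpy as np
from scipy.linalg import expm

phi = 0.9
A = np.array([[0.0, -phi],
              [phi,  0.0]])                      # skew-symmetric: A^T = -A

print(np.linalg.eigvals(A))                      # +/- i*phi: non-real, purely imaginary

R = expm(A)                                      # e^A is the rotation by angle phi
expected = np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])
print(np.allclose(R, expected))                  # True
print(np.isclose(np.linalg.det(R), 1.0))         # True: orthogonal with det = +1

S = np.array([[2.0, 1.0],
              [1.0, 0.5]])                       # symmetric: eigenvalues are real
print(np.allclose(np.linalg.eigvals(S).imag, 0)) # True
```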