First, the difference in the eigenvectors. Let $(\lambda,v)$ be an eigenpair of $A$, i.e., $A v = \lambda v$ with $v \neq 0$, and let $\alpha \in \mathbb{C} \setminus \{0\}$. Then
$$A (\alpha v) = \alpha A v = \alpha \lambda v = \lambda (\alpha v).$$
So, $v$ is an eigenvector of $A$ if and only if $\alpha v$ is an eigenvector of $A$. Both are equally "good", unless you desire some additional properties. Note that this works for any $A$, not just $A = C$.
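For concreteness, here is a minimal numerical sketch (not part of the original answer) using NumPy; the matrix $A$ and the scalar $\alpha$ are arbitrary choices for illustration:

```python
import numpy as np

# Verify numerically that a nonzero multiple of an eigenvector is
# again an eigenvector for the same eigenvalue.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)
lam, v = eigvals[0], eigvecs[:, 0]

alpha = -3.5          # any nonzero scalar works
w = alpha * v
print(np.allclose(A @ w, lam * w))   # True: A(alpha v) = lambda (alpha v)
```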
Second, the significance of the left singular vectors is that they give the eigenvalue decomposition of $XX^T$ (in your notation: $X^T = X'$).
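A short sketch of this connection, assuming NumPy is available; `X` is an arbitrary random matrix:

```python
import numpy as np

# Sketch: the left singular vectors of X (columns of U) are eigenvectors
# of X X^T, with eigenvalues equal to the squared singular values.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 6))      # arbitrary example matrix

U, s, Vt = np.linalg.svd(X)
print(np.allclose(X @ X.T @ U, U * s**2))   # True: (X X^T) u_j = s_j^2 u_j
```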
Third, a real diagonal matrix is orthogonal if and only if each of its diagonal elements is either $1$ or $-1$. Let us prove this.
Let $D = \mathop{\rm diag}(d_1,\dots,d_n)$. Obviously, $D = D^T$, so
$$D^TD = \mathop{\rm diag}(d_1^2,\dots,d_n^2).$$
So, $D^TD = {\rm I}$ if and only if $d_k^2 = 1$ for all $k$.
For complex matrices (using the conjugate transpose $D^*$ in place of $D^T$, i.e., unitarity in place of orthogonality), the same argument gives $|d_k| = 1$ for all $k$.
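A quick numerical sanity check of both statements, with arbitrarily chosen diagonal entries:

```python
import numpy as np

# Real case: D is orthogonal exactly when every diagonal entry is +1 or -1.
D = np.diag([1.0, -1.0, 1.0])
print(np.allclose(D.T @ D, np.eye(3)))              # True

# Complex case: unimodular diagonal entries give a unitary matrix, D* D = I.
E = np.diag(np.exp(1j * np.array([0.3, 1.7, -2.0])))
print(np.allclose(E.conj().T @ E, np.eye(3)))       # True
```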
Eigenvalues and eigenvectors lead naturally to the spectral theorem, diagonalization of matrices, and the Jordan normal form. This is all indeed very elegant and nice, but there are several problems.
First, there is the practical one: how do you actually solve a large system of linear equations on a computer, and how do you actually diagonalize a large matrix (one that theory tells you is diagonalizable) on a computer? These are very difficult problems, since there is a huge gap between the theoretical results and actual computations. That gap is caused, of course, by rounding errors on a computer. Many books have been written on the subject, as it is, needless to say, of immense importance. Many matrix factorizations (e.g., LU and QR) are meant to address such issues, making computations more robust and more efficient.
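As an illustration only (a sketch using NumPy and SciPy, with an arbitrary random system), here is how LU and QR factorizations are typically used to solve $Ax = b$:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, qr, solve_triangular

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))      # arbitrary example system
b = rng.standard_normal(5)

# LU with partial pivoting: factor once, then solve cheaply.
lu, piv = lu_factor(A)
x_lu = lu_solve((lu, piv), b)

# QR: A = QR with Q orthogonal, so solving reduces to a triangular system.
Q, R = qr(A)
x_qr = solve_triangular(R, Q.T @ b)

print(np.allclose(x_lu, x_qr))       # True (up to rounding error)
print(np.allclose(A @ x_lu, b))      # True
```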
As for motivating the SVD, nothing could be easier. Consider the following questions regarding a linear transformation $T:V\to W$, where $V,W$ are inner product spaces (say over $\mathbb R$), not necessarily of the same dimension (so there is no point in speaking of eigenvalues, the Jordan form, or any of that):
1) What is the shape of the image under $T$ of the unit sphere in $V$?
2) If $T$ is not invertible, how do you invert it in the best way you can?
3) How do you solve $Tx=b$ when no solution exists?
4) How do you replace $T$ by another transformation of smaller rank (a useful question for data analysis and compression)?
The list goes on, but the answer to all of these questions is: SVD.
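To make this concrete, here is a hedged NumPy sketch touching each of the four questions; `T` is an arbitrary rank-deficient example matrix:

```python
import numpy as np

# Sketch of how the SVD answers questions 1)-4); T is an arbitrary
# non-square, rank-deficient example.
T = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])       # rank 1, maps R^3 -> R^2
U, s, Vt = np.linalg.svd(T)

# 1) The unit sphere maps to a (possibly degenerate) ellipsoid whose
#    semi-axes are the singular values along the columns of U.
print(s)  # approx [8.37, 0]: the image is a segment, a collapsed ellipse

# 2) & 3) The Moore-Penrose pseudoinverse gives the best possible
#    "inverse": x = pinv(T) @ b is the least-squares solution of T x = b.
b = np.array([1.0, 1.0])
x = np.linalg.pinv(T) @ b
print(np.linalg.norm(T @ x - b))  # smallest attainable residual

# 4) Truncating the SVD to the k largest singular values gives the
#    best rank-k approximation (Eckart-Young theorem).
k = 1
T_k = U[:, :k] * s[:k] @ Vt[:k, :]
print(np.allclose(T_k, T))  # True here, since rank(T) = 1
```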
Given a vector norm $|\cdot|$ on $\mathbb{C}^n$, one defines the induced matrix norm as the largest value that $|Ax|$ can attain for a vector $x$ of norm $1$. Equivalently, it is the largest value that $|Ax|/|x|$ attains over all non-zero vectors (which is quite clearly the same):
$$\|A\| = \max_{|x| = 1} |Ax| = \max_{x \neq 0} \frac{|Ax|}{|x|}.$$
Intuitively, this makes sense, since it measures by how much the matrix can change the size of a vector. Of course, one still needs to check that this actually gives a norm.
This norm then has the nice property that $|Ay| \le \|A\|\,|y|$ for a matrix $A$ and a vector $y$, and also that $\|AB\| \le \|A\|\,\|B\|$ for two matrices $A, B$ (submultiplicativity).
One can make this definition for any vector norm; here it is taken with respect to the $2$-norm.
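As a sketch (assuming NumPy, and using the standard fact that the induced $2$-norm equals the largest singular value of $A$), one can check these properties numerically on arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
y = rng.standard_normal(3)

# Exact induced 2-norm: the largest singular value of A.
norm_A = np.linalg.norm(A, 2)

# Empirical version of the definition: max of |Ax| over random unit x.
xs = rng.standard_normal((3, 10000))
xs /= np.linalg.norm(xs, axis=0)
print(np.max(np.linalg.norm(A @ xs, axis=0)) <= norm_A + 1e-12)     # True

# The two inequalities quoted above.
print(np.linalg.norm(A @ y) <= norm_A * np.linalg.norm(y) + 1e-12)  # True
print(np.linalg.norm(A @ B, 2) <= norm_A * np.linalg.norm(B, 2) + 1e-12)
```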
Another advantage is that this definition generalizes quite directly to linear operators on Banach spaces.