First, the difference in the eigenvectors. Let $(\lambda,v)$ be an eigenpair of $A$, i.e., $A v = \lambda v$, and let $\alpha \in \mathbb{C} \setminus \{0\}$. Then
$$A (\alpha v) = \alpha A v = \alpha \lambda v = \lambda (\alpha v).$$
So, $v$ is an eigenvector of $A$ if and only if $\alpha v$ is. Both are equally "good", unless you desire some additional property, such as unit norm. Note that this works for any $A$, not just $A = C$.
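For instance, a quick numerical check in R (the example matrix here is arbitrary):
A <- matrix(c(2, 1, 1, 2), nrow=2)  # an arbitrary symmetric example
eig <- eigen(A)
v <- eig$vectors[,1]                # eigen() returns unit-norm eigenvectors
lambda <- eig$values[1]
alpha <- -3.7                       # any nonzero scalar
max(abs(A %*% v - lambda * v))                      # ~ 0: v is an eigenvector
max(abs(A %*% (alpha * v) - lambda * (alpha * v)))  # ~ 0: so is alpha * v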
Second, the significance of the left singular vectors is in computing the eigenvalue decomposition of $XX^T$ (in your notation: $X^T = X'$).
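To illustrate (a small sketch: the left singular vectors of $X$ are eigenvectors of $XX^T$, and the squared singular values are its eigenvalues, so the two decompositions agree up to the sign of each column):
set.seed(1)
X <- matrix(rnorm(8), nrow=2)  # a small 2 x 4 example
s <- svd(X)                    # X = U D V^T
e <- eigen(X %*% t(X))         # X X^T = U D^2 U^T
max(abs(s$d^2 - e$values))           # ~ 0: eigenvalues are squared singular values
max(abs(abs(s$u) - abs(e$vectors)))  # ~ 0, up to the sign of each column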
Third, a real diagonal matrix is orthogonal if and only if each of its diagonal elements is either $1$ or $-1$. Let us prove this.
Let $D = \mathop{\rm diag}(d_1,\dots,d_n)$. Obviously, $D = D^T$, so
$$D^TD = \mathop{\rm diag}(d_1^2,\dots,d_n^2).$$
So, $D^TD = {\rm I}$ if and only if $d_k^2 = 1$ for all $k$.
For complex matrices (replacing orthogonal with unitary, i.e., using the conjugate transpose $Q^*$ instead of the transpose $Q^T$), the same argument gives $|d_k| = 1$ for all $k$.
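A quick numerical check of both cases (a sketch; R handles complex matrices directly):
D <- diag(c(1, -1, 1))                  # real diagonal with entries +/- 1
max(abs(t(D) %*% D - diag(3)))          # ~ 0: D is orthogonal
Dc <- diag(exp(1i * c(0.3, 1.2)))       # complex diagonal with |d_k| = 1
max(abs(Conj(t(Dc)) %*% Dc - diag(2)))  # ~ 0: Dc is unitary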
They both seem to work quite well. They give slightly different results for the estimated covariance matrices of the generated series, but that is to be expected: the two methods apply different square-root factors of $\boldsymbol{\Sigma}$ to the same finite sample, so the two estimates differ by ordinary sampling variation (with perhaps a little floating-point rounding on top).
Below is some R code that generates samples from $N(0, \boldsymbol{\Sigma})$ both ways.
n <- 10000000                   # number of samples
X <- cbind(rnorm(n), rnorm(n))  # n draws from N(0, I_2)
sigma <- matrix(c(0.666, -0.333, -0.333, 0.666), nrow=2)  # target covariance (symmetric)
spectral <- eigen(sigma)        # sigma = V diag(lambda) V^T
# spectral method: transform the samples by V diag(sqrt(lambda))
X.spectral <- t(spectral$vectors %*% diag(sqrt(spectral$values)) %*% t(X))
# Cholesky method: transform by the lower-triangular L with L L^T = sigma
# (chol() returns the upper-triangular factor, hence the inner transpose)
X.cholesky <- t(t(chol(sigma)) %*% t(X))
cov(X.spectral)                 # both sample covariances should be close to sigma
cov(X.cholesky)
So with my 10,000,000 samples, the covariance matrices are
> cov(X.spectral)
           [,1]       [,2]
[1,]  0.6660626 -0.3331138
[2,] -0.3331138  0.6658130
> cov(X.cholesky)
           [,1]       [,2]
[1,]  0.6660344 -0.3328923
[2,] -0.3328923  0.6656198
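That small discrepancy is consistent with sampling variation rather than a flaw in either method: both transforms reproduce $\boldsymbol{\Sigma}$ itself to machine precision. A quick check, reusing sigma and spectral from the code above:
B.spectral <- spectral$vectors %*% diag(sqrt(spectral$values))
max(abs(B.spectral %*% t(B.spectral) - sigma))  # ~ 0 (machine precision)
B.cholesky <- t(chol(sigma))
max(abs(B.cholesky %*% t(B.cholesky) - sigma))  # ~ 0 (machine precision)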
Best Answer
They are different in the sense that you can't get from $A^T\Sigma' A$ to $A \Sigma' A^T$ using the same matrix $A$ in both forms, but they are the same in the sense that if $B = A^T$, then $A^T\Sigma' A = B \Sigma' B^T$.
In general, if $\Sigma$ is a real symmetric matrix, then we can choose a matrix $A$ whose columns are eigenvectors of $\Sigma$, such that $A^{-1} = A^T$ (i.e., $AA^T = I$). Then, to diagonalize $\Sigma$, we can write $\Sigma = A \Sigma' A^T$, where $\Sigma'$ is diagonal.
Alternatively, if we take $B = A^T$, then we have $B\Sigma B^T = \Sigma'$ (swapping the roles of $\Sigma$ and $\Sigma'$), and so $\Sigma = B^T \Sigma' B$ instead. Here the matrix $B$ has eigenvectors of $\Sigma$ as rows.
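To make this concrete, here is a small numerical sketch in R, reusing the $\boldsymbol{\Sigma}$ from the code above:
sigma <- matrix(c(0.666, -0.333, -0.333, 0.666), nrow=2)
e <- eigen(sigma)
A <- e$vectors                       # eigenvectors of sigma as columns; A %*% t(A) = I
Sp <- diag(e$values)                 # Sigma', the diagonal matrix of eigenvalues
max(abs(A %*% Sp %*% t(A) - sigma))  # ~ 0: Sigma = A Sigma' A^T
B <- t(A)                            # eigenvectors as rows
max(abs(B %*% sigma %*% t(B) - Sp))  # ~ 0: B Sigma B^T = Sigma'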