Solved – Confused about Cholesky and eigen decomposition

cholesky decomposition, eigenvalues, matrix decomposition

I'm looking to generate correlated random variables. I have a symmetric, positive definite covariance matrix, so I know I can use the Cholesky decomposition. However, I keep being told that this only works for Gaussian random variables?! Is that true?

Furthermore, how does this compare to the eigendecomposition? For example, using the Cholesky decomposition we can write a random vector as:

$x = \bar{x} + Lz$

where $L$ is the Cholesky factor (a lower or upper triangular matrix, depending on convention) and $z$ is some vector of random variables. So one can sample the $z$'s and build up a pdf of $x$. Now we could also use the eigendecomposition and write $x$ as:

$x = \bar{x} + U\lambda^{1\over2}z$

where $\lambda$ is a diagonal matrix of eigenvalues and $U$ is a matrix whose columns are the corresponding eigenvectors. So we could also build a pdf of $x$ this way. But if we equate these $x$'s we find that $L = U\lambda^{1\over2}$, which can't be true, as $L$ is triangular and $U\lambda^{1\over2}$ is not?! So I'm really, really confused. To clarify, the questions are:
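For concreteness, here is a small numpy sketch of the two constructions I have in mind (the covariance matrix and mean are just made-up examples). The two factors come out as clearly different matrices, one triangular and one not, which is exactly what confuses me:

```python
import numpy as np

# A made-up symmetric, positive definite covariance matrix and mean.
Sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])
x_bar = np.array([10.0, -3.0])

# Cholesky factor: Sigma = L @ L.T, with L lower triangular.
L = np.linalg.cholesky(Sigma)

# Eigendecomposition: Sigma = U @ diag(lam) @ U.T.
lam, U = np.linalg.eigh(Sigma)
A = U @ np.diag(np.sqrt(lam))      # the U * lambda^(1/2) factor

print(L)                           # lower triangular
print(A)                           # dense, not triangular
print(np.allclose(L, A))           # False

# Either factor can be used to generate a correlated sample:
rng = np.random.default_rng(0)
z = rng.standard_normal(2)
x_chol  = x_bar + L @ z
x_eigen = x_bar + A @ z
```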

1) For the Cholesky decomposition, does the vector $z$ have to be Gaussian?
2) How does the eigendecomposition compare with the Cholesky decomposition? They are clearly different factorisation techniques, so I don't see how the $x$'s above can be equivalent.

Thanks, as always, guys.

Best Answer

1) Pretty much yes. The reason is that the $x_i$'s are going to end up being a linear combination of the $z_i$'s. That works out nicely for Gaussian deviates because any linear combination of Gaussian deviates is, itself, a Gaussian deviate. Unfortunately, this is not necessarily true of other distributions.
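As a quick illustration of that point (the numbers here are just a made-up example, not part of the derivation): if you feed the Cholesky factor uniform deviates rescaled to zero mean and unit variance, the covariance of $Lz$ still comes out equal to $\Sigma$, but the marginals are no longer uniform, because each $x_i$ is a weighted sum of the $z_j$'s.

```python
import numpy as np

rng = np.random.default_rng(1)

Sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])
L = np.linalg.cholesky(Sigma)

# Uniform deviates on [-sqrt(3), sqrt(3)]: zero mean, unit variance, not Gaussian.
n = 200_000
z = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(2, n))

x = L @ z

# The covariance still matches Sigma ...
print(np.cov(x))                        # close to [[4.0, 1.2], [1.2, 1.0]]

# ... but the marginals change shape.  A uniform has kurtosis 1.8; x[1] is a
# weighted sum of two independent uniforms, so its kurtosis drifts toward the
# Gaussian value of 3 and its density is no longer flat.
kurtosis = lambda v: np.mean((v - v.mean())**4) / np.var(v)**2
print(kurtosis(x[0]), kurtosis(x[1]))   # ~1.8 for x[0], noticeably larger for x[1]
```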

2) It's a little puzzling, I know, but they are equivalent. Let $\Sigma$ be your covariance matrix and suppose you have both the Cholesky factorization, $\Sigma = L L^T$, and the eigendecomposition, $\Sigma = U \lambda U^T$. The covariance of $L z$ is given by:
$$
\begin{aligned}
E[L z (L z)^T] &= E[L z z^T L^T] \\
&= L \, E[z z^T] \, L^T \\
&= L \, I \, L^T \\
&= L L^T \\
&= \Sigma
\end{aligned}
$$
Similarly, the covariance of $U \lambda^{\frac{1}{2}} z$ is given by:
$$
\begin{aligned}
E[U \lambda^{\frac{1}{2}} z (U \lambda^{\frac{1}{2}} z)^T] &= E[U \lambda^{\frac{1}{2}} z z^T \lambda^{\frac{1}{2}} U^T] \\
&= U \lambda^{\frac{1}{2}} \, E[z z^T] \, \lambda^{\frac{1}{2}} U^T \\
&= U \lambda^{\frac{1}{2}} \, I \, \lambda^{\frac{1}{2}} U^T \\
&= U \lambda^{\frac{1}{2}} \lambda^{\frac{1}{2}} U^T \\
&= U \lambda U^T \\
&= \Sigma
\end{aligned}
$$
For purposes of computation, I suggest you stick with the Cholesky factorization unless your covariance matrix is ill-conditioned/nearly singular/has a high condition number. Then it's probably best to switch to the eigendecomposition.
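If it helps to see the equivalence numerically, here is a short numpy sketch (the covariance matrix is just an illustrative example): the two factors are different matrices, yet samples generated from either one have sample covariance close to $\Sigma$.

```python
import numpy as np

rng = np.random.default_rng(2)

Sigma = np.array([[4.0, 1.2],
                  [1.2, 1.0]])
n = 500_000

# Cholesky factor.
L = np.linalg.cholesky(Sigma)

# Eigendecomposition factor U * lambda^(1/2).
lam, U = np.linalg.eigh(Sigma)
A = U @ np.diag(np.sqrt(lam))

z = rng.standard_normal((2, n))

x_chol  = L @ z
x_eigen = A @ z

# The factors are different matrices ...
print(np.allclose(L, A))        # False in general
# ... but both sample covariances estimate the same Sigma.
print(np.cov(x_chol))           # ~ Sigma
print(np.cov(x_eigen))          # ~ Sigma
```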