If you look at powers of diagonal matrices, $A^{-1/2}$ is simply the matrix whose diagonal entries are the reciprocals of the square roots of the original diagonal entries. Let's do an example with a diagonal matrix:
$$A=\begin{bmatrix}2 & 0 \\ 0 & 3\end{bmatrix}$$
$$A^{-1/2} = \begin{bmatrix} \frac{1}{\sqrt{2}} & 0 \\ 0 & \frac{1}{\sqrt{3}}\end{bmatrix} = \frac{1}{6}\begin{bmatrix}3 \sqrt{2} & 0 \\ 0 & 2 \sqrt{3}\end{bmatrix}$$
Clear?
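A quick sanity check in base R (a minimal sketch of the example above; A.inv.sqrt is just an illustrative name):
A <- diag(c(2, 3))                      # the diagonal matrix from the example
A.inv.sqrt <- diag(1 / sqrt(diag(A)))   # reciprocals of the square roots on the diagonal
A.inv.sqrt %*% A.inv.sqrt %*% A         # A^(-1/2) A^(-1/2) A = I, up to floating point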
They both seem to work quite well. They give slightly different results for the estimated covariance matrices of the generated series, but I wouldn't be surprised if that's just sampling variability: the two methods use different (but equally valid) square roots of $\boldsymbol{\Sigma}$, so they transform the same underlying draws differently, and the deviations are about the size of the Monte Carlo error you'd expect at this sample size.
Below is some R code which generates samples from $N(0, \boldsymbol{\Sigma})$.
n <- 10000000
X <- cbind(rnorm(n), rnorm(n))  # n iid draws from N(0, I)
sigma <- matrix(c(0.666, -0.333, -0.333, 0.666), nrow=2)  # target covariance (symmetric)
# Spectral method: S = V diag(sqrt(lambda)) satisfies S S' = V diag(lambda) V' = sigma
spectral <- eigen(sigma)
X.spectral <- t(spectral$vectors %*% diag(sqrt(spectral$values)) %*% t(X))
# Cholesky method: chol() returns upper-triangular R with sigma = R'R, so L = t(R) gives sigma = LL'
X.cholesky <- t(t(chol(sigma)) %*% t(X))
cov(X.spectral)
cov(X.cholesky)
So with my 10,000,000 samples, the estimated covariance matrices are:
> cov(X.spectral)
[,1] [,2]
[1,] 0.6660626 -0.3331138
[2,] -0.3331138 0.6658130
> cov(X.cholesky)
[,1] [,2]
[1,] 0.6660344 -0.3328923
[2,] -0.3328923 0.6656198
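One way to see that the remaining discrepancy comes from the finite sample rather than from either factorization (a sketch reusing the objects defined above):
S1 <- spectral$vectors %*% diag(sqrt(spectral$values))
S1 %*% t(S1) - sigma   # zero up to ~1e-16
L <- t(chol(sigma))
L %*% t(L) - sigma     # zero up to ~1e-16
Both factors reproduce $\boldsymbol{\Sigma}$ to machine precision, so the deviations in the estimated covariances above are sampling noise.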
Best Answer
You can obtain a square root of a matrix $M$ using the Cholesky decomposition, $M = LL'$. Then compute the inverse of $L$: since $M^{-1} = (LL')^{-1} = (L^{-1})'L^{-1}$, the matrix $L^{-1}$ serves as a square root of $M^{-1}$.
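A minimal sketch in base R, reusing the $\boldsymbol{\Sigma}$ from the simulation above (M, L, and B are illustrative names):
M <- matrix(c(0.666, -0.333, -0.333, 0.666), nrow=2)
L <- t(chol(M))   # chol() returns upper-triangular R with M = R'R, so L = t(R) is lower triangular
B <- solve(L)     # B = L^{-1}
t(B) %*% B        # equals solve(M), so B plays the role of M^(-1/2)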