# [Math] Gaussian Matrix Integral

definite-integrals, integration, linear-algebra, matrices, measure-theory

I need your help to solve this exercise:

Let $S$ be an $N\times N$ real symmetric matrix: $S=(s_{ij})$ with $s_{ij}=s_{ji}$.

1. When is $\langle s_{ij}s_{kl}\rangle\neq 0$?
2. What does $$\int \mathrm{Tr}(S^{2n})\;d\mu(s)$$ count?
3. Calculate the moments of the matrix trace given by $$\int \mathrm{Tr}(S^2)\, dS,\quad \int \mathrm{Tr}(S^4)\, dS,$$
where $S$ is a real symmetric matrix satisfying $S_{ij}=S_{ji}$.

We are supposed to use Wick's theorem to solve it.

Any idea on how to calculate this? Thank you.

Interesting question: you are really asking about the moments of the matrix trace for a Gaussian distribution. These moments also give you the resolvent for Wigner's semicircle distribution. What you want to solve is a Gaussian matrix integral, very common in probability/statistics/random matrix theory, $$\int \mathrm{Tr}(H^{2p}) \exp\big[{-\tfrac{1}{2} \mathrm{Tr}(H^{2}})\big]\, dH,\qquad p\in \mathbb{N},$$ which has a beautiful result given by the Catalan numbers! Note $$H_{ij}=H^\dagger_{ij},$$ or in words, $H$ is Hermitian, and $H\equiv S$ for your notation's sake. This also answers your question 2: only even moments contribute to the integral, which is why the exponent in the trace is $2p$; odd moments vanish because the Gaussian weight is invariant under $H\to -H$. I will calculate $$\langle \mathrm{Tr}(H^2)\rangle,\quad \langle \mathrm{Tr}(H^4)\rangle,\quad \langle \mathrm{Tr}(H^{2n})\rangle,$$ where $\langle \cdots\rangle$ denotes the (normalized) Gaussian average $$\langle \mathrm{Tr}(H^{2n})\rangle = \int dH \, \mathrm{Tr}({H^{2n}})\, e^{-\frac{1}{2}\mathrm{Tr}(H^2)}.$$ The integrand is invariant under unitary (or orthogonal) transformations, depending on whether $H$ is Hermitian or real symmetric, so the integral can be reduced to an integral over the eigenvalues $\lambda_1,\lambda_2,\dots,\lambda_N$ of $H$.
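As a sanity check (not part of the derivation), one can sample Hermitian matrices with the contraction rule $\langle H_{ij}H_{kl}\rangle=\delta_{il}\delta_{jk}$ and verify the low moments numerically. A minimal sketch; the matrix size and sample count below are arbitrary choices:

```python
import numpy as np

# Monte Carlo check of the first few trace moments for a Gaussian
# Hermitian ensemble with <H_ij H_kl> = delta_il delta_jk.
rng = np.random.default_rng(0)
N, samples = 8, 20000

def sample_H():
    # A has iid complex Gaussian entries; H = (A + A^dagger)/2 is Hermitian
    # with E|H_ij|^2 = 1 off-diagonal and E[H_ii^2] = 1 on the diagonal.
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    return (A + A.conj().T) / 2

tr2 = tr3 = tr4 = 0.0
for _ in range(samples):
    H = sample_H()
    H2 = H @ H
    tr2 += np.trace(H2).real
    tr3 += np.trace(H2 @ H).real
    tr4 += np.trace(H2 @ H2).real
tr2, tr3, tr4 = tr2 / samples, tr3 / samples, tr4 / samples

print(tr2, N**2)         # <Tr H^2> = N^2
print(tr3)               # odd moment, should be ~0
print(tr4, 2*N**3 + N)   # <Tr H^4> = 2N^3 + N
```

The expected values $N^2$ and $2N^3+N$ are derived below by Wick contractions.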

ALSO: there are many proofs of this question on the web for you (if you do not like or believe mine): https://www.math.ucdavis.edu/~mulase/texfiles/fukaya.pdf,

http://ocw.mit.edu/courses/mathematics/18-238-geometry-and-quantum-field-theory-fall-2002/lecture-notes/sec4.pdf

http://www.supelec.fr/d2ri/flexibleradio/cours/random-matrix-1.pdf (easier proof of matrix trace moments)

http://www.physik.uni-bielefeld.de/bibos/old-bibos-site/01-03-035.pdf

I will now derive the resolvent of the spectrum. Recall the resolvent is defined as \begin{equation} G(z)=\frac{1}{N}\mathrm{Tr}\frac{1}{z-H}, \end{equation} where $H = H^\top$. We can expand the argument of the trace in powers of $1/z$ via \begin{equation} \frac{1}{z-H}=\frac{1}{z}\cdot \frac{1}{1-\frac{H}{z}}, \end{equation} and using the geometric series \begin{equation} \sum_{k=0}^{\infty} z^k= \frac{1}{1-z} \ \ \forall \ |\mathrm{z}| < 1, \end{equation} we can express \begin{equation} \frac{1}{z}\cdot \frac{1}{1-\frac{H}{z}}=\frac{1}{z}\Big(1+\frac{H}{z} + \big(\frac{H}{z}\big)^2 + \cdots\Big)= \frac{1}{z}+\frac{H}{z^2} + \frac{H^2}{z^3} + \cdots. \end{equation} Putting this expression back into $G(z)$ we obtain \begin{equation} G(z)=\frac{1}{N\cdot z} \sum_{k=0}^{\infty} \mathrm{Tr}\big(\frac{H}{z}\big)^k. \end{equation} Taking the average of this quantity, \begin{equation} \langle G(z) \rangle = \frac{1}{N\cdot z} \sum_{k=0}^\infty \Big\langle \mathrm{Tr}\big(\frac{H}{z}\big)^k\Big\rangle =\frac{1}{N\cdot z} \sum_{k=0}^\infty z^{-k}\langle \mathrm{Tr}(H^k) \rangle. \end{equation} However, the spectral distribution is an even function, so the odd moments must vanish, and the average resolvent becomes \begin{equation} \langle G(z)\rangle = \frac{1}{N} \sum_{k=0}^{\infty} z^{-(2k+1)}\langle \mathrm{Tr}(H^{2k})\rangle. \end{equation} The average resolvent thus depends only on the moments of the distribution: once we can calculate the moments (which is what you ask), the average resolvent follows immediately.
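The $1/z$ expansion above is easy to sanity-check numerically for a single fixed real symmetric matrix; a minimal sketch (the test matrix, the value of $z$, and the truncation order are arbitrary choices):

```python
import numpy as np

# Compare the exact resolvent (1/N) Tr (z - H)^{-1} with its truncated
# 1/z moment series for a fixed real symmetric matrix. Convergence
# requires |z| larger than the spectral radius of H.
rng = np.random.default_rng(1)
N, z, terms = 6, 10.0, 60
A = rng.normal(size=(N, N))
H = (A + A.T) / 2

exact = np.trace(np.linalg.inv(z * np.eye(N) - H)) / N
series = sum(np.trace(np.linalg.matrix_power(H, k)) / z**(k + 1)
             for k in range(terms)) / N
print(exact, series)  # should agree to high precision
```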

I will now calculate the moments of the matrix trace weighted against the Gaussian probability distribution, \begin{equation} \langle \mathrm{Tr}(H^{2k})\rangle=\int \mathrm{Tr}(H^{2k})\cdot e^{-\frac{1}{2}\mathrm{Tr}(H^2)}\,\mathrm{d}H, \end{equation} where the integration is over the space of $N\times N$ real symmetric (or Hermitian) matrices. The problem of calculating higher-order moments arises naturally in probability theory; a familiar example is the higher moments of the scalar Gaussian distribution, \begin{equation} \frac{1}{\sqrt{2 \pi }}\int_{-\infty}^{\infty}x^{2k}e^{-\frac{1}{2}x^2}dx=(2k-1)!!, \end{equation} which is easily shown by integration by parts. The trace moments can be computed a number of different ways. For instance, the Jacobian for Gaussian random matrix models is $\mathcal{J}\propto\prod_{k<l}|\lambda_l-\lambda_k|^\beta$, where the $\lambda$'s are the eigenvalues and $\beta=1,2,4$ for the orthogonal, unitary, and symplectic ensembles respectively; so by a change of variables to eigenvalues you obtain an integral you can attempt to solve. ${\it{However\ you\ want\ to\ use\ Wick's\ theorem}}$. For the second moment of the distribution, \begin{equation} \langle \mathrm{Tr}(H^{2})\rangle=\Big\langle \sum_{i,j=1}^N H_{ij}H_{ji}\Big\rangle = \sum_{i,j=1}^N \delta_{ii}\,\delta_{jj} = N^2, \qquad (\text{answer to c}) \end{equation} using the contraction rule given below. This case was trivial since there is only one contraction and two free indices. For higher-order moments, we need to evaluate each contraction more carefully. When two contractions cross, a summation index is lost, so for large $N$ only the non-crossing contractions contribute. A contraction pattern contributes $N^V$, where $V$ is the number of free indices it leaves. So by summing over all contractions and determining the contribution of each, we can get the desired result.
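The scalar Gaussian identity $(2k-1)!!$ quoted above can be verified by direct numerical integration; a minimal sketch (the grid and integration range are arbitrary choices):

```python
import numpy as np

# Numerically check (1/sqrt(2 pi)) * integral of x^{2k} exp(-x^2/2) = (2k-1)!!
x = np.linspace(-12.0, 12.0, 400001)
dx = x[1] - x[0]
w = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def double_factorial(n):
    # (2k-1)!! = (2k-1)(2k-3)...3*1
    return 1 if n <= 0 else n * double_factorial(n - 2)

for k in range(1, 5):
    moment = float((x**(2 * k) * w).sum() * dx)
    print(k, moment, double_factorial(2 * k - 1))  # 1, 3, 15, 105
```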
Using these rules we can calculate \begin{equation} \langle \mathrm{Tr}(H^{4})\rangle=\sum_{i,j,k,l}\Big(\langle H_{ij}H_{jk}\rangle \langle H_{kl}H_{li}\rangle + \langle H_{ij} H_{kl}\rangle \langle H_{jk}H_{li}\rangle + \langle H_{ij}H_{li}\rangle \langle H_{jk} H_{kl} \rangle\Big), \end{equation} where a contraction is defined by \begin{equation} \langle H_{ij}H_{kl}\rangle=\delta_{il} \delta_{jk}. \end{equation} Using this result and contracting indices we can write \begin{equation} \langle \mathrm{Tr}(H^{4})\rangle=\sum_{i,j,k,l} (\delta_{ik}\delta_{jj}\delta_{ki}\delta_{ll}+\delta_{il}\delta_{kj}\delta_{ji}\delta_{lk}+\delta_{ii}\delta_{jl}\delta_{lj}\delta_{kk})=2N^3+N,\qquad (\text{answer to c}) \end{equation} since the crossing contraction leaves only one free index and contributes $N$, while the two non-crossing contractions each contribute $N^3$. The 6th moment of the distribution has $5!!=15$ possible contractions, so I will omit that derivation. In general, expanding the product gives \begin{equation} \langle \mathrm{Tr}(H^{2k})\rangle= \sum_{i_1,i_2,\dots,i_{2k}}\langle H_{i_1 i_2} H_{i_2 i_3}\cdots H_{i_{2k}i_1}\rangle = C_k\, N^{k+1} + O(N^{k-1}),\qquad C_k=\frac{1}{k+1}\binom {2k} {k}, \qquad (\text{general answer to A,B,C}) \end{equation} where $C_{k}$ is the $k$th Catalan number, which counts the non-crossing ways to pair the $2k$ entries. With the rescaling $H\to H/\sqrt{N}$ the normalized even moments become exactly $C_k$ at large $N$; plugging this into the average resolvent expression and summing gives \begin{equation} \langle G(z)\rangle = \frac{1}{N} \sum_{k=0}^{\infty} z^{-(2k+1)}\langle \mathrm{Tr}(H^{2k})\rangle=\frac{1}{2}\big(z-\sqrt{z^2-4}\big). \end{equation} This equality is obtained by using contractions and expanding the average resolvent at large $z$ as done above, \begin{equation} \langle G(z) \rangle=\bigg\langle \frac{1}{N} \big(z^{-1}+z^{-2}\mathrm{Tr}\,H+z^{-3}\mathrm{Tr}\, H^2 +z^{-4} \mathrm{Tr}\, H^3 + \cdots\big )\bigg \rangle. \end{equation} Since the odd moments vanish due to symmetry, only the even moments calculated above survive.
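The counting argument can be checked exactly by brute force: enumerate every Wick pairing of the $2k$ factors $H_{i_1 i_2}\cdots H_{i_{2k}i_1}$, record the index identifications each contraction $\langle H_{ij}H_{kl}\rangle=\delta_{il}\delta_{jk}$ imposes, and count the surviving free indices. A sketch (the union-find bookkeeping is my own, not from the derivation above):

```python
from itertools import combinations  # not needed below; pairings are generated recursively

# Factor a carries indices (i_a, i_{a+1 mod 2k}). Contracting factors a, b
# identifies i_a ~ i_{b+1} and i_{a+1} ~ i_b. A pairing contributes
# N^(number of surviving index classes).

def pairings(elems):
    # Yield all perfect matchings of the list elems.
    if not elems:
        yield []
        return
    a = elems[0]
    for j in range(1, len(elems)):
        rest = elems[1:j] + elems[j+1:]
        for p in pairings(rest):
            yield [(a, elems[j])] + p

def moment_poly(k):
    # Return {exponent of N: number of pairings} for <Tr H^{2k}>.
    m = 2 * k
    counts = {}
    for p in pairings(list(range(m))):
        parent = list(range(m))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for a, b in p:
            parent[find(a)] = find((b + 1) % m)
            parent[find((a + 1) % m)] = find(b)
        classes = len({find(x) for x in range(m)})
        counts[classes] = counts.get(classes, 0) + 1
    return counts

print(moment_poly(1))  # {2: 1}       -> <Tr H^2> = N^2
print(moment_poly(2))  # {3: 2, 1: 1} -> <Tr H^4> = 2N^3 + N
print(moment_poly(3))  # leading coefficient is C_3 = 5
```

The leading coefficient of each polynomial (the number of pairings with the maximal $k+1$ free indices) reproduces the Catalan numbers.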
We obtain the self-consistent equation \begin{equation} \langle G(z) \rangle= z^{-1}+z^{-1}\langle G(z) \rangle ^2, \end{equation} which can be solved for $\langle G(z)\rangle$ with the quadratic formula, \begin{equation} \langle G(z) \rangle =\frac{1}{2}\big(z-\sqrt{z^2-4}\big), \end{equation} which is the expression above; the minus branch is chosen so that asymptotically at large $z$, $G(z) \sim \frac{1}{z}$. This result can be plugged into \begin{equation} \langle \rho(\lambda)\rangle = - \lim\limits_{\epsilon \rightarrow 0^+} \frac{N}{\pi}\,\Im\,{\langle G(\lambda +i \epsilon) }\rangle \end{equation} to calculate the spectral density. We can do this by writing the average resolvent in terms of eigenvalues and using imaginary increments. Note that the resolvent has a branch-cut structure in the complex plane, even though it behaves as $1/z$ for large $z$.

The cut lies on the real interval $[-2,2]$, and this is why we use imaginary increments. We write the average resolvent on each side of the cut: \begin{eqnarray} \langle G(\lambda +i\epsilon)\rangle= \frac{1}{2}\big(\lambda+i\epsilon -\sqrt{(\lambda+i\epsilon)^2-4}\big),\\ \langle G(\lambda -i\epsilon)\rangle= \frac{1}{2}\big(\lambda-i\epsilon -\sqrt{(\lambda-i\epsilon)^2-4}\big). \end{eqnarray} Taking the imaginary part from above (equivalently, the discontinuity across the cut), \begin{equation} \rho(\lambda)= \frac{N}{2\pi}\lim\limits_{\epsilon \rightarrow 0^+}\Im \sqrt{(\lambda+i\epsilon)^2-4}, \end{equation} which for $|\lambda|\le 2$ simplifies to \begin{equation} \rho(\lambda)=\frac{N}{2\pi}\sqrt{4-\lambda^2}, \end{equation} which is Wigner's semicircle.
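As a numerical illustration of the semicircle (the sampling setup is an arbitrary choice), the eigenvalues of $H/\sqrt{N}$, with $H$ drawn as in the moment check above, fill $[-2,2]$, and their second moment tends to $C_1=1$:

```python
import numpy as np

# Sample Hermitian matrices with <|H_ij|^2> = 1, rescale by 1/sqrt(N),
# and inspect the spectrum against the semicircle on [-2, 2].
rng = np.random.default_rng(2)
N, samples = 200, 50
eigs = []
for _ in range(samples):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2
    eigs.append(np.linalg.eigvalsh(H / np.sqrt(N)))
eigs = np.concatenate(eigs)

print(eigs.min(), eigs.max())  # spectral edges approach -2 and 2
print(np.mean(eigs**2))        # second moment of the semicircle -> 1
```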

EDIT: since you again asked about how to calculate average quantities (they are in the post above), here is an alternative method for you. (answer to a)

Formally we can calculate the correlation function using \begin{equation} \big<x_i x_j\big>=\frac{ \int_{-\infty}^{\infty} \prod_{k=1}^N dx_k \, x_i x_j \exp\big(-\frac{1}{2} \sum_{m,n=1}^{N} x_m H_{mn}x_n\big) }{ \int_{-\infty}^{\infty} \prod_{k=1}^N dx_k \exp\big(-\frac{1}{2} \sum_{m,n=1}^{N} x_m H_{mn}x_n\big) }. \end{equation} First, I will give a general proof for the expectation value \begin{equation} \big<x_i f(x)\big>=C^{-1}\int_{-\infty}^\infty d^Nx\, x_i f(x) \exp\big( -\frac{1}{2} \sum_{j,k=1}^{N} x_j H_{jk} x_k \big). \end{equation} We can now use the identity $$x_i \exp\big( -\frac{1}{2} \sum_{j,k=1}^{N} x_j H_{jk} x_k \big)=-\sum_l H^{-1}_{il}\frac{\partial}{\partial x_l} \exp\big( -\frac{1}{2} \sum_{j,k=1}^{N} x_j H_{jk} x_k \big);$$ inserting this into the equation above and integrating by parts we obtain $$\big<x_i f(x)\big>= C^{-1} \sum_l H^{-1}_{il} \int_{-\infty}^\infty d^Nx \exp\big( -\frac{1}{2} \sum_{j,k=1}^{N} x_j H_{jk} x_k \big) \frac{\partial f}{\partial x_l}.$$ We can write this more compactly as \begin{equation} \big<x_i f(x)\big>=\sum_l \big<x_i x_l\big> \bigg<\frac{\partial f}{\partial x_l}\bigg>. \end{equation} In particular, taking $f(x)=x_j$ gives $\big<x_i x_j\big>=H^{-1}_{ij}$, and iterating the identity on higher products reproduces Wick's theorem: the $2n$-point average is the sum over all pairings of products of two-point functions.
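The resulting two-point function $\langle x_i x_j\rangle = H^{-1}_{ij}$ can be checked by Monte Carlo; a sketch, with an arbitrary positive-definite test matrix standing in for $H$:

```python
import numpy as np

# Verify <x_i x_j> = (H^{-1})_{ij} for the weight exp(-x^T H x / 2).
rng = np.random.default_rng(3)
N, samples = 4, 200000
A = rng.normal(size=(N, N))
H = A @ A.T + N * np.eye(N)    # arbitrary positive-definite test matrix
cov = np.linalg.inv(H)         # predicted two-point function

# Sample x ~ N(0, H^{-1}) via the Cholesky factor of the covariance.
L = np.linalg.cholesky(cov)
x = rng.normal(size=(samples, N)) @ L.T
emp = x.T @ x / samples        # empirical <x_i x_j>

print(np.max(np.abs(emp - cov)))  # small deviation, shrinking as samples grow
```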