[Physics] Gaussian integral formula for matrix product

functional-determinants, integration, path-integral

I am looking for a way to prove that
$$ \det (M \cdot N) = \det(M)\det(N) \tag{0}$$
where $M$ and $N$ are matrices with continuous indices, so that $\det$ is a functional determinant. A way to show that $(0)$ is wrong would also be welcome.

This question is about the following formula,
$$
\int\text{d}\vec{x}\, \exp\Big(- \tfrac{1}{2}\sum_{ij}x^i A_{ij}x^j\Big) = \left (\det A\right )^{-1/2}\left (2\pi\right )^{D/2}. \tag{1}
$$

Now, we would like this identity to be compatible with
$$
\int\text{d}\vec{x}\, \exp\Big(- \tfrac{1}{2}\sum_{ijk}x^i A_{ik}B_{kj}x^j\Big) = \left (\det (A B)\right )^{-1/2}\left (2\pi\right )^{D/2} = \left (\det A\right )^{-1/2}\left (\det B\right )^{-1/2}\left (2\pi\right )^{D/2}.\tag{2}
$$
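Before worrying about the continuum limit, both identities can be sanity-checked numerically in finite dimensions. Below is a sketch in Python with NumPy/SciPy, assuming the conventional normalization $\int e^{-\frac{1}{2}\vec{x}^{\,T} A \vec{x}}\,\text{d}\vec{x} = (2\pi)^{D/2}(\det A)^{-1/2}$ for $D = 2$; the matrices $A$ and $B$ are arbitrary example values:

```python
import numpy as np
from scipy.integrate import dblquad

# Arbitrary example matrices: A symmetric positive definite, B another matrix.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.5, -0.3], [-0.3, 2.0]])

# Check (1): integrate exp(-1/2 x^T A x) over (a large box in) R^2 and
# compare with (2*pi)^(D/2) / sqrt(det A).  The Gaussian tails beyond
# |x| = 10 are negligible for these eigenvalues.
integrand = lambda y, x: np.exp(-0.5 * np.array([x, y]) @ A @ np.array([x, y]))
val, _ = dblquad(integrand, -10, 10, lambda x: -10, lambda x: 10)
assert np.isclose(val, (2 * np.pi) / np.sqrt(np.linalg.det(A)), rtol=1e-6)

# Check (0)/(2): for finite matrices, det(A B) = det(A) det(B) always holds.
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```

Of course this only confirms the finite-dimensional statement; the question is precisely whether it survives the continuum limit.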

Any idea how to prove this? I am interested, eventually, in the generalisation of this formula to path integrals, namely, given the path integral
$$
\int\mathcal{D}\phi \exp\left[- \int\text{d}x\text{d}y \phi(x)M(x,y)\phi(y)\right] =C \left (\det M\right )^{-1/2}, \tag{3}
$$

where now $\det M$ is a functional determinant, I ask whether it makes sense to write the generalised formula
$$\begin{align}
\int\mathcal{D}\phi \exp\left[- \int\text{d}x\,\text{d}y\,\text{d}z\,\phi(x)M(x,y)N(y,z)\phi(z)\right] =&\ C \left (\det (M N)\right )^{-1/2}\cr =&\ C \left (\det M\right )^{-1/2} \left (\det N\right )^{-1/2}.\end{align} \tag{4}
$$

[UPDATE]: I might have an answer now. Let us consider
$$\det (M\cdot N) = \prod_i \lambda_i[M\cdot N],\tag{5}$$
where $\lambda_i[M\cdot N]$ are the eigenvalues of the matrix $M\cdot N$. This formula is valid even for continuous matrices, such as the Laplacian operator $\partial^2 \delta(x-y)$.
If the commutator $[M,N] = 0$, then the two matrices can be diagonalised in the same basis, and $\lambda_i[M\cdot N] = \lambda_i[M]\lambda_i[N]$, with no sum over $i$. Then formula (4) can be proven at least in the simple case in which the commutator vanishes.
A trivial example of this is $M = A$ and $N = A^{-1}$, for any invertible matrix $A$, which leads to $\det(A\cdot A^{-1})=1$. Also, if $M\cdot M^T = f(x)\,\delta(x-y)$, this would imply
$$\det (M\cdot M^T) = (\det M)^2 = \det\left[f(x)\,\delta(x-y)\right] = \prod_x f(x),\tag{6}$$
and so on. These seem like trivial cases, but since we are talking about functional determinants they constitute a powerful computational tool.
How much do you agree with this attempted solution? It is not very formal, but I don't see where it could go wrong.
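The commuting case is easy to check numerically for ordinary matrices. A sketch (NumPy; $M$ and $N$ are built as polynomials in the same random symmetric matrix $X$, an arbitrary construction chosen only so that $[M,N]=0$ holds by design):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random symmetric matrix X, and two commuting matrices built from it:
# M = X^2 + X and N = 3X + 2I are both polynomials in X, so [M, N] = 0.
X = rng.standard_normal((4, 4))
X = X + X.T
M = X @ X + X
N = 3 * X + 2 * np.eye(4)
assert np.allclose(M @ N, N @ M)  # the commutator vanishes

# Both are diagonalised in the eigenbasis of X, so the eigenvalues of M*N
# are products of matching eigenvalues of M and N (no sum over i).
lam = np.linalg.eigvalsh(X)
lam_M = lam**2 + lam
lam_N = 3 * lam + 2
assert np.allclose(np.sort(np.linalg.eigvals(M @ N).real),
                   np.sort(lam_M * lam_N))

# Hence det(M N) = prod_i lambda_i[M] lambda_i[N] = det(M) det(N).
assert np.isclose(np.linalg.det(M @ N),
                  np.linalg.det(M) * np.linalg.det(N))
```

This only illustrates the finite-dimensional mechanism behind (5); for genuinely continuous indices the product over eigenvalues needs a regularisation.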

Best Answer

The statement seems to be wrong even for an infinite number of discrete indices.

Consider, for example, the vector space of square-integrable functions on the positive integers, i.e. sequences $\{f_1,f_2,\cdots\}$ such that $\sum_{i>0} |f_i|^2 < \infty$, and consider the shift operator $S:f \mapsto Sf$, where

$Sf = \{ f_2,f_3,\cdots \} \ . $

Consider furthermore the operator $S^\dagger: f \mapsto S^\dagger f$ with

$S^\dagger f = \{ 0, f_1,f_2, \cdots \} \ . $

Now

$SS^\dagger f = \{f_1,f_2,\cdots \} = f \ \ \ \text{ but } \ \ \ S^\dagger S f = \{ 0, f_2,f_3,\dots \} \ . $

That is, $SS^\dagger$ has all eigenvalues $1$, while $S^\dagger S$ has one eigenvalue $0$ and all other eigenvalues $1$. Hence

$ \det( SS^\dagger) = 1 \ \ \ \ \ \text{while} \ \ \ \ \det (S^\dagger S ) = 0 \ .$

Now if it were true that $\det(M N) = \det(M) \det(N)$, then we would have proven that $1 = 0$.
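The shift-operator counterexample is easy to play with concretely. A sketch in Python, with lists standing in for $\ell^2$ sequences (an illustration of the algebra, not a proof):

```python
import numpy as np

def S(f):
    """Left shift: drops the first entry, Sf = {f_2, f_3, ...}."""
    return f[1:]

def S_dag(f):
    """Right shift: prepends a zero, S^dag f = {0, f_1, f_2, ...}."""
    return [0] + f

f = [1, 2, 3, 4]
assert S(S_dag(f)) == [1, 2, 3, 4]   # S S^dag = identity
assert S_dag(S(f)) == [0, 2, 3, 4]   # S^dag S erases the first entry

# Note: on any finite N x N truncation (ones on the superdiagonal) the two
# products have the SAME determinant -- both are zero -- so the mismatch
# det(S S^dag) = 1 vs det(S^dag S) = 0 is a genuinely infinite-dimensional
# effect that no finite matrix computation can reproduce.
Smat = np.eye(5, k=1)
assert np.isclose(np.linalg.det(Smat @ Smat.T), 0.0)
assert np.isclose(np.linalg.det(Smat.T @ Smat), 0.0)
```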
