[Math] Positive-Definite Functions and Fourier Transforms

ca.classical-analysis-and-odes, fa.functional-analysis, fourier-analysis, pr.probability

Bochner's theorem states that a continuous positive-definite function is the Fourier transform of a finite positive Borel measure. An easy converse is that the Fourier transform of any finite positive Borel measure must be positive definite.
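As a sanity check on the easy direction, here is a minimal numerical sketch (the function `phi`, the Gaussian choice of measure, and the sample points are my own illustrative choices, not part of the theorem statement): the characteristic function of the standard Gaussian, $\phi(t)=e^{-t^2/2}$, is the Fourier transform of a positive measure, and the matrix $[\phi(t_j-t_k)]$ it generates should be positive semidefinite for any choice of points.

```python
import numpy as np

# Easy direction of Bochner's theorem, checked numerically:
# if phi(t) = \int e^{itx} dmu(x) for a finite positive measure mu,
# then the matrix [phi(t_j - t_k)]_{j,k} is positive semidefinite
# for every finite set of points t_1, ..., t_n.
# Here mu = N(0, 1), so phi(t) = exp(-t^2 / 2) (a real-valued example).

def phi(t):
    return np.exp(-t**2 / 2)

rng = np.random.default_rng(0)
t = rng.uniform(-5.0, 5.0, size=8)       # arbitrary sample points
M = phi(t[:, None] - t[None, :])         # the matrix [phi(t_j - t_k)]

# All eigenvalues should be >= 0, up to floating-point roundoff.
print(np.linalg.eigvalsh(M).min())
```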

My question is: is there a high-brow explanation for why positive definiteness and Fourier transforms go hand-in-hand?

As I understand it, positive definiteness imposes wonderfully strong regularity conditions on the function. We immediately deduce that the function is bounded in modulus by its value at 0, that its value at 0 is non-negative, and that continuity at 0 implies continuity everywhere.

A leading example I have in mind comes from probability. One can show (Lévy's theorem) that a sum of i.i.d. random variables converges weakly to a degenerate distribution by considering the product of the characteristic functions and showing that it converges to 1 on an interval containing 0; by positive definiteness and the identity $1-\operatorname{Re} \phi(2t) \leq 4(1-\operatorname{Re} \phi(t))$, the convergence extends from that interval to all of $\mathbb{R}$, which gives the degenerate limit. It just seems rather mysterious to me how this kind of local regularity becomes global.
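For what it's worth, one way to see that identity, assuming $\phi$ is a characteristic function $\phi(t)=\mathbb{E}[e^{itX}]$ (which is the representation Bochner's theorem provides), is the pointwise bound
$$1-\cos 2\theta \;=\; 2(1-\cos\theta)(1+\cos\theta)\;\le\; 4(1-\cos\theta),$$
so that, taking $\theta = tX$ and expectations,
$$1-\operatorname{Re}\phi(2t)\;=\;\mathbb{E}\bigl[1-\cos(2tX)\bigr]\;\le\;4\,\mathbb{E}\bigl[1-\cos(tX)\bigr]\;=\;4\bigl(1-\operatorname{Re}\phi(t)\bigr).$$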

Edit:

To be a little more specific, I understand that the Radon–Nikodym derivative is positive and that $e^{ix}$ is positive definite. I am more interested in the consequences of positive-definiteness for the regularity of the function. For example, if one takes the 2×2 positive definite matrix associated with the function and considers its determinant, it follows that $|f(x)|\leq f(0)$. If I take the 3×3 positive definite matrix, I can conclude that if $f$ is continuous at 0, then it is continuous everywhere. My issue is that these types of arguments give me no intuition at all as to what positive definiteness is.
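For concreteness, the 2×2 argument referred to above runs as follows. Positive definiteness at the points $0$ and $x$ says that
$$\begin{pmatrix} f(0) & f(x)\\ \overline{f(x)} & f(0)\end{pmatrix}\succeq 0$$
(positive definiteness also forces $f(-x)=\overline{f(x)}$), so the determinant gives $f(0)^2-|f(x)|^2\ge 0$, i.e. $|f(x)|\le f(0)$. Similarly, the 3×3 matrix at the points $0, x, y$ yields, after a short computation,
$$|f(x)-f(y)|^2 \;\le\; 2\,f(0)\,\bigl(f(0)-\operatorname{Re} f(x-y)\bigr),$$
which is the inequality behind "continuity at 0 implies (uniform) continuity everywhere".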

Let me thus add an additional question: what is it about positive definiteness that adds such regularity conditions?

Best Answer

Perhaps the phenomenon you are asking about is: why is the definition of a positive-definite function natural?

One answer is that positive-definite functions are exactly coefficients of group representations, in the following sense. If $\pi : \mathbb{R}\to U(H)$ is a unitary representation of $\mathbb{R}$ on some Hilbert space $H$, and $h\in H$ is a vector, then the function $$t\mapsto \langle \pi (t) h, h\rangle$$ is positive-definite. Conversely, given a positive-definite function $\phi$, there exists a Hilbert space $H$, a vector $h\in H$ and a unitary representation $\pi$ of $\mathbb{R}$ on $H$, for which $\phi(t)=\langle \pi(t)h,h\rangle$.
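A sketch of the converse construction (essentially the GNS/Naimark construction): on the space of finitely supported functions $c:\mathbb{R}\to\mathbb{C}$, define
$$\langle c, d\rangle \;=\; \sum_{s,t} c(s)\,\overline{d(t)}\,\phi(s-t);$$
positive definiteness of $\phi$ says exactly that this sesquilinear form is positive semidefinite. Quotienting by its null space and completing gives $H$; the translation action $(\pi(u)c)(s)=c(s-u)$ becomes unitary on $H$, and with $h=\delta_0$ one recovers $\phi(u)=\langle \pi(u)h,h\rangle$.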

Indeed, the $n\times n$ matrix occurring in the definition of a positive definite function is nothing more than the Gram matrix of inner products $\langle \pi (t_i) h, \pi (t_j) h\rangle$; and positivity of this matrix is just a reflection of the fact that the inner product of $H$, restricted to the linear span of $\{\pi(t_i)h : i=1,\dots,n\}$, is positive-definite.

The Fourier transform goes from functions on the group to functions on the space of irreducible unitary representations of the group, and thus switches positivity and complete positivity.
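To make the link to Bochner explicit for the group $\mathbb{R}$ (assuming $\pi$ is strongly continuous, so that Stone's theorem applies): writing $\pi(t)=\int_{\mathbb{R}} e^{it\xi}\,dE(\xi)$ for the spectral measure $E$ of the representation, the coefficient becomes
$$\phi(t)=\langle\pi(t)h,h\rangle \;=\;\int_{\mathbb{R}} e^{it\xi}\, d\langle E(\xi)h,h\rangle,$$
i.e. $\phi$ is the Fourier transform of the finite positive measure $\mu_h=\langle E(\cdot)h,h\rangle$, which is exactly Bochner's representation.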
