[Math] Quantum field theory in Solovay-land

lo.logic, mp.mathematical-physics

Constructing quantum field theories is a well-known problem. In Euclidean space, you want to define a certain measure on the space of distributions on R^n. The trickiness is that the detailed properties of the distributions that you get are sensitive to the dimension of the theory and the precise form of the action.

In classical mathematics, measures are hard to define, because one has to worry about somebody well-ordering your space of distributions, or finding a Hamel basis for it, or some other AC idiocy. I want to sidestep these issues, because they are stupid, they are annoying, and they are irrelevant.

Physicists know how to define these measures algorithmically in many cases, so that there is a computer program which will generate a random distribution with the right probability to be a pick from the measure (were it well defined for mathematicians). I find it galling that there is a construction which can be carried out on a computer, which will asymptotically converge to a uniquely defined random object, which then defines a random-picking notion of measure which is good enough to compute any correlation function or any other property of the measure, but which is not sufficient by itself to define a measure within the field of mathematics, only because of infantile Axiom of Choice absurdities.

So is the following physics construction mathematically rigorous?

Question: Given a randomized algorithm P which with certainty (i.e., with probability one) generates a distribution $\rho$, does P define a measure on any space of distributions which includes all possible outputs with probability one?

This is a no-brainer in the Solovay universe, where every subset S of the unit interval [0,1] has a well defined Lebesgue measure. Given a randomized computation in Solovay-land which will produce an element of some arbitrary set U with certainty, there is the associated map from the infinite sequence of random bits, which can be thought of as a random element of [0,1], into U, and one can then define the measure of any subset S of U to be the Lebesgue measure of the inverse image of S under this map. Any randomized algorithm which converges to a unique element of U defines a measure on U.
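
Here is a toy sketch of that pushforward, to fix ideas (the particular map $f$ and all the names are mine, not part of the question): a randomized algorithm consumes uniform random bits, which amount to a uniform pick from $[0,1]$, and the measure of a set $S$ is the Lebesgue measure of the set of bit sequences landing in it, estimated here by Monte Carlo.

```python
import random

def f(bits):
    # Toy "randomized algorithm": consume random bits and converge to a
    # point of U = R, here the randomly signed series sum_k (+/-1) * 2^-(k+1).
    return sum((1 if b else -1) * 2.0 ** -(k + 1) for k, b in enumerate(bits))

def pushforward_estimate(S, n_samples=100_000, n_bits=64):
    # mu(S) := Lebesgue measure of f^{-1}(S); uniform random bits are a
    # uniform pick from [0,1], so sampling bit strings estimates mu(S).
    hits = sum(S(f([random.getrandbits(1) for _ in range(n_bits)]))
               for _ in range(n_samples))
    return hits / n_samples

print(pushforward_estimate(lambda y: y > 0.5))  # ~0.25: f(x) is uniform on [-1, 1]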

Question: Is it trivial to de-Solovay this construction? Is there a standard way of converting an arbitrary convergent random computation into a measure that doesn't involve a detour into logic or forcing?

The same procedure should work for any random algorithm, or for any map, random or not.

EDIT: (in response to Andreas Blass) The question is how to translate the theorems one can prove when every subset of U gets an induced measure into the same theorems in standard set theory. You get stuck precisely in showing that the set of measurable subsets of U is sufficiently rich (even though we know from Solovay's construction that they might as well be assumed to be everything!)

The most boring standard example is the free scalar field in a periodic box with all side lengths L. To generate a random field configuration, you pick every Fourier mode $\phi(k_1,\dots,k_n)$ as a Gaussian with inverse variance $k^2/L^d$, then take the Fourier transform to define a distribution on the box. This defines a distribution, since the convolution with any smooth test function gives a sum in Fourier space which converges with probability one. So in Solovay-land, we are free to conclude that it defines a measure on the space of all distributions dual to smooth test functions.
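
A minimal numerical sketch of this recipe, assuming a finite $N^d$ periodic lattice in lattice units and ignoring the overall normalization (the function name and conventions are mine; this illustrates the sampling step, not Sheffield's construction):

```python
import numpy as np

def sample_free_field(N=256, d=2, seed=None):
    """Draw one configuration of the massless free field on an N^d periodic
    lattice: each Fourier mode gets a Gaussian of variance ~ 1/k^2 (up to
    overall normalization), then we Fourier transform back."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.fftfreq(N)
    k2 = sum(kg**2 for kg in np.meshgrid(*([k] * d), indexing="ij"))
    k2[(0,) * d] = np.inf  # drop the zero mode
    # The FFT of real white noise has the Hermitian symmetry needed for a
    # real field, and dividing by the symmetric factor sqrt(k2) preserves it.
    phi_k = np.fft.fftn(rng.standard_normal((N,) * d)) / np.sqrt(k2)
    return np.fft.ifftn(phi_k).real

phi = sample_free_field()
print(phi.shape, phi.std())  # the field variance grows ~ log N in d = 2
```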

But the random free field is constructed in recent papers of Sheffield and coworkers by a much more laborious route, using the exact same idea, but with a serious detour into functional analysis to show that the measure exists (see for instance Theorem 2.3 in http://arxiv.org/PS_cache/math/pdf/0312/0312099v3.pdf). This kind of thing drives me up the wall, because in a Solovay universe there is nothing to do: the maps defined are automatically measurable. I want to know if there is a meta-theorem which guarantees that Sheffield's stuff had to come out right without any work, just from knowing that the Solovay world is consistent.

In other words, is the construction (pick a random Gaussian free field by choosing each Fourier component as a random Gaussian of appropriate width and Fourier transforming) considered a rigorous construction of a measure, without any further rigamarole?

EDIT IN RESPONSE TO COMMENTS: I realize that I did not specify what is required from a measure to define a quantum field theory, but this is well known in mathematical physics, and also explicitly spelled out in Sheffield's paper. I realize now that it was never clearly stated in the question I asked (and I apologize to Andreas Blass and others who made thoughtful comments below).

For a measure to define a quantum field theory (or a statistical field theory), you have to be able to compute reasonably arbitrary correlation functions over the space of random distributions. These correlation functions are averages of certain real-valued functions of a randomly chosen distribution: not necessarily polynomials, but for the usual examples they always are. By "reasonably arbitrary" I actually mean "any real-valued function except for some specially constructed Axiom of Choice nonsense counterexample". I don't know what these distributions look like a priori, so honestly, I don't know how to say anything at all about them. You only know what distributions you get out after you define the measure, generate some samples, and see what properties they have.
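
For example, the two-point function is nothing but a sample average; a sketch using the sampler from the free-field snippet above (again my names, and only illustrative):

```python
def two_point(x, n_samples=200, N=64):
    # <phi(x) phi(0)>: the average of a real-valued function of the random
    # field, estimated over independent draws from sample_free_field above.
    acc = 0.0
    for _ in range(n_samples):
        phi = sample_free_field(N=N)
        acc += phi[x, 0] * phi[0, 0]
    return acc / n_samples

# In 2d this should behave like -log(x)/(2*pi) plus a constant,
# up to lattice artifacts and the dropped zero mode.
print([round(two_point(x), 3) for x in (1, 2, 4, 8)])
```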

But in Solovay-land (a universe where every subset S of [0,1] is forced to have Lebesgue measure equal to the probability that a randomly chosen real number happens to be an element of S) you don't have to know anything. The moment you have a randomized algorithm that converges to an element of some set of distributions U, you can immediately define a measure, and the expectation value of any real-valued function on U is equal to the integral of this function over U against that measure. This works for any function and any distribution space, without any topology or Borel sets, without knowing anything at all, because there are no measurability issues: all the subsets of [0,1] are measurable. Then once you have the measure, you can prove that the distributions are continuous functions, or have this or that singularity structure, or whatever, just by studying different correlation functions. For Sheffield, the goal was to show that the level sets of the distributions are well defined and given by a particular SLE in 2d, but whatever. I am not hung up on 2d, or SLE.
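
Spelled out, with $f : [0,1] \to U$ the sampling map and $\lambda$ Lebesgue measure, the Solovay-land recipe is just the pushforward:

$$\mu(S) := \lambda\big(f^{-1}(S)\big) \quad \text{for every } S \subseteq U, \qquad \langle F \rangle = \int_U F \, d\mu = \int_0^1 F(f(x)) \, dx.$$

In Solovay-land the first definition makes sense for every $S$, so the second holds for every real-valued $F$.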

If one were to suggest that this is the proper way to do field theory, and by "one" I mean "me", then one would get laughed out of town. So one must make sure that there isn't some simple way to de-Solovay such a construction for a general picking algorithm. This is my question.

EDIT (in response to a comment by Qiaochu Yuan): In my view, operator algebras are not a good substitute for measure theory for defining general Euclidean quantum fields. For Euclidean fields, statistical fields really, you are interested in any question one can ask about typical picks from a statistical distribution, for example "What is the SLE structure of the level sets in 2d?" (Sheffield's problem), "What is the structure of the discontinuity set?", "Which nonlinear functions of a given smeared-by-a-test-function field are certainly bounded?", etc. The answers to all these questions (probably even just the solution to all the moment problems) contain all the interesting information in the measure, so if you have some non-measure substitute, you should be able to reconstruct the measure from it, and vice versa. Why hide the measure? The only reason would be to prevent someone from bringing up set-theoretic AC constructions.

For the quantities which can be computed by a stochastic computation, it is traditional to ignore all issues of measurability. This is completely justified in a Solovay universe where there are no issues of measurability. I think that any reluctance to use the language of measure theory is due solely to the old paradoxes.

Best Answer

I don't know anything about the space of all distributions dual to smooth test functions, but I do know a fair bit about computable measure theory (from a certain perspective).

First, you mention that you have a computable algorithm which generates a probability distribution. I believe you are saying that you have a computable algorithm from $[0,1]$ (or technically the space of infinite binary sequences) to some set $U$ where $U$ is the space of distributions of some type.

Say your map is $f$. How are you describing the element $f(x) \in U$? In computable analysis, there is a standard way to talk about these things. We can describe each element of $U$ with an infinite code (although each element has more than one code). Then $f$ works as follows: it reads the bits of $x$; from those bits, it starts to write out the code for $f(x)$. The more bits of $x$ it has read, the more bits of the code for $f(x)$ it can write.
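
A toy example of this bits-in, code-bits-out picture (the map and the encoding are mine):

```python
import itertools, random
from typing import Iterator

def f(x_bits: Iterator[int]) -> Iterator[int]:
    """Toy computable map on binary streams: x |-> (x + 1) / 2.
    If x = 0.b1 b2 b3 ... in binary, then (x + 1)/2 = 0.1 b1 b2 b3 ...,
    so the map emits a leading 1 and then echoes its input. Output bits
    are committed as input bits arrive, and are never retracted."""
    yield 1
    yield from x_bits

# Read 10 bits of the code of f(x) for a random x in [0,1].
x = (random.getrandbits(1) for _ in itertools.count())
print(list(itertools.islice(f(x), 10)))
```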

(Note, not every space has such a nice encoding. If the space isn't separable, there isn't a good way to describe each object while still preserving the important properties, namely the topology. In your example above, say, is the space of distributions dual to smooth test functions separable, maybe in a weak topology? Does the encoding you use for elements of $U$ generate the same topology?)

The important property of such a computable map is that it must be continuous (in the topology generated by the encoding, which usually coincides with the topology of the space). Since $f$ is continuous, we can induce a Borel measure on $U$ as follows: if $S$ is an open set, then $f^{-1}(S)$ is open, so $\mu(f^{-1}(S))$ is defined. The same works for any Borel set, hence you have a Borel measure.

Borel measures are sufficient for most applications I can think of (you can integrate continuous functions and, from them, define and integrate the $L^p$ functions), but once again, I don't know anything about your applications.

Also, if the function $f$ doesn't always converge to a point in $U$, but only does so almost everywhere, the function $f$ is not continuous, but it is still fairly nice and I believe stuff can be said about the measure, although I need to think about it.

Update: If $f$ converges with probability one, then the set of input points on which $f$ converges is a measure-one $G_{\delta}$ set; in particular it is Borel. The function remains continuous on that domain (in the restricted topology). Hence there is still an induced Borel measure on the target space. (Take a Borel set; map it back. It is Borel in the restricted domain, and hence Borel in $[0,1]$.)
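
To see the $G_{\delta}$ claim concretely, in the bit-reading model above: writing $[\sigma]$ for the basic open set of inputs extending the finite bit string $\sigma$, the set of inputs on which the algorithm commits to arbitrarily many output bits is

$$D = \bigcap_{n \ge 1} \bigcup \left\{\, [\sigma] : f \text{ writes at least } n \text{ code bits after reading } \sigma \,\right\},$$

a countable intersection of open sets.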

Update: Also, I am assuming that your algorithm directly computes the output from the input. I will give an example of what I mean. Say one wants to compute a real number. To compute it directly, I should be able to ask the algorithm for that number to within $n$ decimal places, with an error bound of $1/10^n$. An indirect algorithm works as follows: the computer just gives me a sequence of approximations that converge to the number. The computer may say $0,0,0,\ldots$, so I think it converges to 0, but at some point it starts to change to $1,1,1,\ldots$. I can never be sure my current approximation is close to the final answer. Even if your algorithm is of the indirect type, it doesn't matter for your applications: it will still give a Borel map, albeit a more complex one than a continuous map, and hence it will still generate a Borel measure on the target space. (The almost-everywhere concerns are similar; they also go up in complexity, but remain Borel.) Without knowing more about your application it is difficult for me to say much specific to your case.
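
To make the contrast concrete (illustrative code, my names): the direct algorithm comes with a certificate, the indirect one does not.

```python
from fractions import Fraction

# Direct: given n, return an approximation guaranteed within 1/10^n.
def sqrt2_direct(n):
    err = Fraction(1, 10**n)
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > err:                 # exact interval bisection
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
    return lo

# Indirect: a sequence that happens to converge, with no error bound attached.
def mystery_sequence():
    yield from [0.0, 0.0, 0.0]           # looks like it converges to 0...
    while True:
        yield 1.0                        # ...but the limit is actually 1.

print(float(sqrt2_direct(6)))            # 1.414213..., certified to 10^-6
```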

Am I correct in my understanding of your construction, especially the computable side of it? For example, is this the way you describe the computable map from $[0,1]$ to $U$?

On a more general note, much of measure theory has been developed in a set-theoretic framework. This isn't very helpful for computable concerns. But using various other definitions of measure, one can once again talk about measure theory with an eye to what can and cannot be computed.

I hope this helps, and that I didn't just trivialize your question.
