[Physics] Why is Gleason's Theorem not enough to obtain the Born Rule in the Many Worlds Interpretation

born-rule, quantum-mechanics, quantum-interpretations

The Many Worlds interpretation suffers from at least two "wounds": the preferred basis problem and, perhaps the most notorious, the probability problem.

How do you make sense of probability in a model where everything happens?

There are all these elaborate attempts at deriving the Born rule, by Wallace et al. and at least twenty different people on arXiv.
And there are some very good papers criticizing these attempts, but my question is: why isn't Gleason's Theorem enough?

Best Answer

The immediate problem with obtaining the Born rule in the many-worlds interpretation is quite elementary: you can't even begin to attach probabilities to "worlds" (or to events within worlds), in your theory of many worlds, if the theory isn't even clear on what a world is.

Physical states according to various interpretations

In classical physics, a physical state is a configuration of particles and/or fields.

In quantum physics according to the Copenhagen interpretation, a "quantum state" (for the present discussion, let's say it's a vector in a Hilbert space) is an abstract, second-order "state" which provides probabilities regarding the actual physical state. The actual physical state is like a classical physical state (configuration of particles and/or fields) except that the uncertainty principle prevents a complete specification.
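(For concreteness, the Born rule in its simplest form: if the system is in state $|\psi\rangle$ and a measurement has possible outcomes with eigenstates $|a_i\rangle$, the probability of outcome $i$ is

$$p(i) = |\langle a_i | \psi \rangle|^2.$$

This is the rule whose derivation-or-postulation is at issue throughout.)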

In quantum physics according to the many-worlds interpretation, the quantum state is the physical state. But then we need to understand how the physical reality we observe relates to this quantum state. It should be a particular "part" of the quantum state, with other, similar parts being the other worlds parallel to our own.

However, there is no consensus among many-worlds advocates on how to answer that question. One might suppose that superposition has something to do with the answer, because it is about putting two quantum states together to get a combined quantum state. But given a quantum state, there is no unique decomposition of it into a set of superposed states. There are infinitely many sets of basis vectors available, and even if we restrict ourselves to states which are eigenstates of classical observables like position or momentum, you still face a choice forced on you by the uncertainty principle: no state is an eigenstate of both position and momentum, so you must pick one basis or the other.
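A textbook two-qubit identity makes the point concrete (my example, not from the original discussion): the same entangled state splits into "branches" differently in different bases,

$$\frac{1}{\sqrt{2}}\left(|0\rangle|0\rangle + |1\rangle|1\rangle\right) = \frac{1}{\sqrt{2}}\left(|{+}\rangle|{+}\rangle + |{-}\rangle|{-}\rangle\right), \qquad |{\pm}\rangle \equiv \frac{|0\rangle \pm |1\rangle}{\sqrt{2}},$$

so the state alone does not tell you whether it describes two "0/1" worlds or two "+/−" worlds.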

What is an Everett world?

If you look into the many-worlds literature, formal and informal, you will find people advocating the position basis, people advocating a basis determined by decoherence, and people saying that all bases are equally valid. There was at one time a hope that Gell-Mann and Hartle's consistent histories formalism would lead to the discovery of a unique basis that is quasiclassical and maximally fine-grained, but I don't see people talking about that any more.

Conversations with ordinary physicists who believe in many worlds have left me with the impression that most of them don't have a logically coherent concept of what an Everett world is. The worldview seems to be operationally the same as Copenhagen - use state vectors to obtain probabilities - but this is then overlaid with a belief that "the wavefunction exists", and some vague significance attached to decoherence.

If you think imprecisely like that, you are in danger of never even noticing the real problems that a many-worlds interpretation faces. Einstein once described the Copenhagen interpretation as a "tranquilizing philosophy", and it seems that this informal version of many-worlds, in which one goes on using quantum mechanics exactly as before, but one now proclaims that the quantum state is reality, similarly provides many contemporary physicists with mental peace, without actually providing answers.

An example of a many-worlds theory which does specify exactly what the worlds are

So we can't even begin to have this discussion unless we settle on a particular version of many-worlds; and some versions of many-worlds are just logically incoherent - for example, one according to which the "splitting into worlds" is observer-dependent. You the observer are supposed to be inhabiting just one world of many, so this would make your own individual existence "observer-dependent". A lot of the prose written about many-worlds eventually lapses into incoherence, by talking about observer-dependent observers, worlds that differ in their degree of realness, and other conceptual misadventures - though the authors of these concepts no doubt regard them as daring insights that need to be accepted or contemplated.

Ideas like that can't be analyzed in the way you would normally evaluate a proposition about physical reality, e.g. by checking it against the evidence. All you can do is try to bring out the conceptual incoherence and make it obvious, which is a thankless task. So I won't further try to address that sort of many-world theory. Instead, for the purposes of discussion, I will focus on Julian Barbour's "Platonia" theory.

Barbour is at least very clear about what he thinks exists. He is a quantum cosmologist, and he proposes that what exists are all possible spatial configurations of the universe. Time is not real, nothing is actually happening anywhere, but some of these static configurations contain what looks like evidence of a past - memories or other physical traces - and these he calls "time capsules".

The theory is therefore quite crazy - he's saying that time isn't real, that despite appearances one moment does not flow into another. It also has the feature that it doesn't ontologically satisfy special relativity - for that you have to have space-time, and here you only have space. This is a problem that will plague many attempts to be precise about what the Everett worlds are. Copenhagen quantum mechanics is relativistic because reality is events in space-time, a change of coordinate systems is just a relabeling of events, and state vectors are just calculating devices. But the many-worlds interpretation reifies state vectors (it stipulates that they are "elements of reality"), and it's really hard to see how you can do that without also reifying the reference frame in which they are defined.

However, your question was about the Born rule, and not relativity, so let us leave these other problems and return to Barbour's theory. Barbour interprets the wavefunction of the universe by saying that the various configurations making up "configuration space" are what's real, and the Born rule supplies the "measure" which tells us how to "count" them. Normally we would say it's a probability measure, but here, by hypothesis, all these worlds are equally real, so perhaps we should say it's a "reality measure".
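In symbols (my notation, not Barbour's): if $\Psi$ is the static universal wavefunction over configuration space, the measure assigned to a set $S$ of configurations would be

$$\mu(S) = \int_S |\Psi(q)|^2 \, dq,$$

with all configurations in $S$ equally real and $\mu$ saying "how much" of reality they constitute.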

Even though we have here arrived at a precise statement from Barbour about what it is that exists (at the level of "worlds"), there are still formidable problems in making sense of it, beyond the already stated problems of time being unreal and of relativity not applying ontologically. It seems that, in order to explain the observation of Born-rule frequencies in reality, we have to regard the measure on configuration space as a prior (in the Bayesian sense), which we can then combine with intra-world relative frequencies in order to obtain conditional probabilities for the outcomes of experiments. That is, if physical occurrence A is accompanied by physical occurrence B1 3/4 of the time, and by the alternative B2 just 1/4 of the time, that is because the combined measure of the (A & B1) worlds is three times the combined measure of the (A & B2) worlds.
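Spelled out for that example, with $\mu(A \wedge B_i)$ the combined measure of the worlds containing both $A$ and $B_i$:

$$P(B_1 \mid A) = \frac{\mu(A \wedge B_1)}{\mu(A \wedge B_1) + \mu(A \wedge B_2)} = \frac{3}{3+1} = \frac{3}{4}.$$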

But it seems a little strange to be using a nonuniform measure at all. When you do calculus, you start with a uniform measure like Lebesgue measure, and then the "nonuniformity" of the integral comes about because the function you're integrating is not constant. Here we are asked to introduce the nonuniformity at the level of the measure itself. This is mathematically possible, but does it make sense as a statement about reality? In my opinion, the sensible interpretation of a nonuniform measure in a multiverse theory (insofar as one can ever be "sensible" about such matters) is that the worlds are duplicated, in proportion to the deviation from uniformity. The true measure will be the natural, uniform one, and the Born frequencies have to come about from the duplication of worlds.
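One way to formalize the duplication idea (a sketch of the suggestion, assuming $\Psi$ is normalized): discretize configuration space into worlds $q_i$ and give world $q_i$ a number of copies $n_i \propto |\Psi(q_i)|^2$; then for any quantity $f$,

$$\int f(q)\, |\Psi(q)|^2\, dq \;\approx\; \frac{1}{N} \sum_i n_i\, f(q_i), \qquad N = \sum_i n_i,$$

so the only measure that ever appears is uniform counting over equally real copies.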

So what about Gleason's Theorem?

So far I haven't said a thing about Gleason's Theorem. But I consider it essential to first spell out what a real discussion of a many-worlds ontology would look like. Either your theory has to say exactly what the worlds are, so we can then have the discussion about how the Born rule could work in that model, or we are stuck in the mystical realm of hugging the wavefunction and loving its many-in-oneness. Hopefully it's obvious why Gleason's Theorem is not enough to obtain the Born rule in the latter type of many-worlds interpretation: there isn't actually a theory there. But the resistance to taking the other path is immense, because all this ugliness like having a preferred basis and even an ontologically preferred reference frame tends to appear. Perhaps it's a point in favor of the physical intuition of "mystical" many-worlders that they don't want to take that path - they sense the ugliness of the consequences - but remaining content with a studiously vague concept of Everett world is a point against their intellectual rigor.
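For reference, the theorem itself, in its standard form: if $\mathcal{H}$ is a separable Hilbert space with $\dim \mathcal{H} \geq 3$, then every countably additive probability measure $\mu$ on the closed subspaces of $\mathcal{H}$ is of the form

$$\mu(P) = \mathrm{Tr}(\rho P)$$

for some density operator $\rho$, where $P$ projects onto the subspace. That is, once you ask for a probability measure over subspaces at all, the Born rule is the only option; but the theorem is silent on why a collection of worlds should carry such a measure in the first place.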

As for the ontological implications of Gleason's Theorem - whether for a genuinely rigorous many-worlds theory, or for some other interpretation of quantum mechanics - I'm really not sure. It seems hard to escape the conclusion that a many-worlds theory in which the worlds are defined has something like a preferred basis. In that case, applying the Born rule is certainly consistent with the theorem (though there would still remain the question of what a nonuniform measure on the worlds means ontologically - are the worlds duplicated? are the actual worlds just an appropriately weighted discrete sampling from a continuum of possible worlds?).

But it would be a somewhat trivial consistency, because of the preferred basis. The interesting thing about Gleason's measure is that it is defined for subspaces in a basis-independent way. This is one reason why it's appealing to mystical many-worlders who don't want an ontologically preferred basis; it seems to promise a perspective in which the quantum state is primary, and a division into individual worlds is just a matter of perspective. But this leads to the paradox of observer-dependent observers, or the problem of oneself being something less than absolutely real.

I note that Gleason's theorem has played a small role in the reception accorded to a completely different interpretation, Bohmian mechanics. Gleason's theorem was at one time taken as a proof of the impossibility of hidden variables, but John Bell pointed out that it's only inconsistent with noncontextual hidden-variable theories, in which all observables simultaneously have sharp values. Bohmian mechanics is a contextual theory in which position has a preferred status, and in which other observables take on their measured values because of the measurement interaction. This runs against the belief in ontological equality of all observables; but perhaps reflecting on the status of Gleason's theorem within the Bohmian ontology will tell us something about its meaning for the real world.
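The connection, spelled out (standard material, though not stated above): a noncontextual dispersion-free state would be an assignment

$$v(P) \in \{0,1\} \ \text{ for every projector } P, \qquad \sum_i v(P_i) = 1 \ \text{ whenever } \sum_i P_i = \mathbb{1} \text{ with the } P_i \text{ orthogonal},$$

and Gleason's theorem rules this out for $\dim \mathcal{H} \geq 3$, since $\mathrm{Tr}(\rho P)$ cannot take only the values 0 and 1 on all projectors. But the argument requires $v(P)$ to be independent of which orthogonal resolution $P$ belongs to - exactly the noncontextuality that Bohmian mechanics denies.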
