[Physics] In ‘t Hooft beable models, do measurements keep states classical?

determinism, quantum-mechanics

This is a question on 't Hooft's beable models (see here: Discreteness and Determinism in Superstrings?) for quantum mechanics, and the goal is to understand to what extent these succeed in reproducing quantum mechanics. To be precise, I will say an "'t Hooft beable model" consists of the following:

  • A very large classical cellular automaton, whose states form a basis of a Hilbert space.
  • A state which is imagined to be one of these basis elements.
  • A unitary quantum time evolution operator which, for a series of discrete times, reproduces the cellular automaton evolution rules.

't Hooft's main argument (which is interesting and true) is that it is possible to reexpress many quantum systems in this form. The question is whether this rewriting automatically then allows you to consider the quantum system as classical.
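For concreteness, here is a minimal sketch of what such a rewriting looks like for a toy automaton; the three-cell shift rule and all sizes are a made-up illustration, not a model from 't Hooft's papers. The point is only that a deterministic, invertible CA update becomes a permutation matrix, i.e. a unitary operator, on the Hilbert space spanned by the CA configurations.

```python
import numpy as np
from itertools import product

# Toy CA: 3 cells on a ring, each 0 or 1; the update rule is a cyclic shift,
# which is a bijection on the 2**3 = 8 configurations (hence invertible).
N_CELLS = 3
states = list(product([0, 1], repeat=N_CELLS))   # CA configurations, used as basis labels |B>
index = {B: i for i, B in enumerate(states)}

def step(B):
    """Deterministic CA update: shift every cell one site to the right."""
    return B[-1:] + B[:-1]

# The time evolution operator: a permutation matrix in the CA basis.
dim = len(states)
U = np.zeros((dim, dim))
for B in states:
    U[index[step(B)], index[B]] = 1.0

assert np.allclose(U @ U.T, np.eye(dim))         # permutation matrices are unitary

# Acting on a basis ("beable") state, U just relabels it; no superposition appears.
psi = np.zeros(dim)
psi[index[(1, 0, 0)]] = 1.0
psi_next = U @ psi
assert psi_next[index[step((1, 0, 0))]] == 1.0
```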

The classical probabilistic theory of a cellular automaton necessarily consists of data which is a probability distribution $\rho$ on CA states evolving according to two separate rules:

  • Time evolution: $\rho'(B') = \rho(B)$, where prime means "next time step" and B is the automaton state. You can extend this to a probabilistic diffusion process without difficulty.
  • Probabilistic reduction: if a bit of information becomes available to an observer through an experiment, the CA states are reduced to those compatible with the observation.

I should define probabilistic reduction: it's Bayes' rule. Given an observation that produces a result $x$, where we don't know the exact value of $x$ but only the probability $p(x)$ that the result is $x$, the probabilistic reduction is

$$ \rho'(B) = C \rho(B) p(x(B)), $$

where $x(B)$ is the value of $x$ which would be produced if the automaton state is $B$, and $C$ is a normalization constant. This process is the reason that classical probability theory is distinguished over and above any other system—one can always interpret the Bayes' reduction process as reducing ignorance of hidden variables.
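To be explicit about the arithmetic of $\rho'(B) = C \rho(B) p(x(B))$, here it is written out in a few lines; the four-state example, the parity observable, and the numbers are placeholders, not anything from the models under discussion.

```python
import numpy as np

def bayes_reduce(rho, x_of_B, p_of_x):
    """Probabilistic reduction rho'(B) = C * rho(B) * p(x(B)).

    rho    : array of probabilities over CA states B = 0 .. len(rho)-1
    x_of_B : function giving the observable value x(B) for a CA state B
    p_of_x : dict with the observer's probability p(x) for each possible result x
    """
    weights = np.array([p_of_x.get(x_of_B(B), 0.0) for B in range(len(rho))])
    unnormalized = rho * weights
    return unnormalized / unnormalized.sum()   # dividing by the sum supplies the constant C

# Example: 4 CA states, and the observed bit x(B) is the parity of B.
rho = np.array([0.1, 0.2, 0.3, 0.4])
x_of_B = lambda B: B % 2
p_of_x = {0: 0.9, 1: 0.1}                      # the observer is 90% sure the result was "even"
print(bayes_reduce(rho, x_of_B, p_of_x))       # probability mass shifts toward the even states
```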

The bits of information that become available to a macroscopic observer internal to the CA through experiment are not microscopic CA values, but horrendously nonlocal and horrendously complex functions of gigantic chunks of the CA. Under certain circumstances, the probabilistic reduction plus the measurement process could conceivably mimic quantum mechanics approximately; I don't see a proof otherwise. But the devil is in the details.

In 't Hooft models, you also have two processes:

  • Time evolution: $\psi \rightarrow U \psi$.
  • Measurement reduction: the measurement of an observable corresponding to some subsystem at intermediate times, which, as in standard quantum mechanics, reduces the wavefunction by a projection.

The first process, time evolution, is guaranteed not to produce superpositions in the global variables, since in 't Hooft's formulation it is just a permutation; that's the whole point. But I have seen no convincing argument that the second process, learning a bit of information through quantum measurement, corresponds to learning something about the classical state and reducing the CA probabilistic state according to Bayes' rule.

Since 't Hooft's models are completely precise and calculable (this is the great virtue of his formulation), this can be asked precisely: is the reduction of the wavefunction in response to learning a bit of information about the CA state through an internal observation always mathematically equivalent to a Bayes reduction of the global wavefunction?
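To illustrate what a check of this equivalence looks like in the simplest possible setting (a toy example, not a calculation in 't Hooft's actual models): encode a classical distribution over CA states as a diagonal density matrix and compare the quantum projection rule against the Bayes rule above. For a projector diagonal in the CA basis the two coincide; for a projector onto a superposition, off-diagonal terms appear that have no classical counterpart, and that is exactly the distinction at stake.

```python
import numpy as np

dim = 4
rho_classical = np.array([0.1, 0.2, 0.3, 0.4])   # ignorance about which CA state we are in
rho_quantum = np.diag(rho_classical)             # the same state, written as a density matrix

# Measurement "the CA state lies in {0, 2}": a projector diagonal in the CA basis.
P = np.diag([1.0, 0.0, 1.0, 0.0])

rho_q = P @ rho_quantum @ P / np.trace(P @ rho_quantum)            # quantum (Lueders) reduction
rho_c = rho_classical * np.diag(P) / (rho_classical @ np.diag(P))  # Bayes with p(x(B)) in {0,1}
print(np.allclose(np.diag(rho_q), rho_c))        # True: the two reductions agree

# A projector onto the superposition (|0> + |1>)/sqrt(2) is not diagonal in the CA basis;
# applying it creates off-diagonal terms with no classical counterpart.
v = np.zeros(dim)
v[0] = v[1] = 1 / np.sqrt(2)
P_sup = np.outer(v, v)
rho_q_sup = P_sup @ rho_quantum @ P_sup / np.trace(P_sup @ rho_quantum)
print(np.allclose(rho_q_sup, np.diag(np.diag(rho_q_sup))))         # False: off-diagonals appear
```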

I will point out that if the answer is no, the 't Hooft models are not doing classical automata; they are doing quantum mechanics in a different basis. If the answer is yes, then the 't Hooft models could be completely rewritten as operations on the probability distribution $\rho$, rather than on quantum superposition states.

Best Answer

I think the correct answer is that such models are both quantum mechanical and classical, although this could be considered as a question of semantics.

It is a fact that, as soon as you have found a basis in your quantum system where the evolution is just a permutation, the "quantum probabilities" for the states in this basis (as defined by Born's rule) become identical to the classical probabilities (indeed obeying Bayes' logic). Therefore it will be difficult to avoid interpreting them as such: the "universe" is in one of these states, we don't know which, but we know the probabilities.
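To spell out the step being used here (a short check in the questioner's notation, not a quote from the papers): if $U$ acts on the beable basis as a permutation $\pi$, so $U|B\rangle = |\pi(B)\rangle$, then for any state $\psi = \sum_B \psi_B |B\rangle$ the Born probabilities evolve as

$$ P'(B) = |\langle B|U\psi\rangle|^2 = \Big|\sum_{B'} \psi_{B'}\,\langle B|\pi(B')\rangle\Big|^2 = |\psi_{\pi^{-1}(B)}|^2 = P(\pi^{-1}(B)), $$

which is just the classical transport rule $\rho'(B') = \rho(B)$ from the question, with no interference terms between different beable states.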

The question is well posed: will it still be meaningful to consider superposed states in this basis, and ask whether these can be measured, and how they evolve?

My answer depends on whether the quantum system in question is sufficiently structured to allow for considering "macroscopic features" in the "classical limit", and whether this classical limit allows for non-trivial interactions, causing phenomena as complex as "decoherence".

Then take a world described by this model and consider macroscopic objects in this world. The question is then whether, inside these macroscopic objects (planets, people, indicators in measurement devices, ...), the CA variables behave differently from what they do in the vacuum. This may be reasonable to assume, and I do assume this to be true in the real world, but it is far from obvious. If it is so, then the macroscopic events are described by the CA alone.

This then would be my idea of a hidden variable theory. Macroscopic features are defined to be features that can be recognised by looking at collective modes of the CA. They are classical. Note that, if described by wave functions, these wave functions will have collapsed automatically. Physicists in this world may have been unable to identify the CA states, because they have not reached the physical scale where CA states no longer behave collectively but where, instead, single bits of information matter. These physicists will have been able to derive the Schroedinger equation for the states they need to understand their world, but they work in the wrong basis, so that they land in heated discussions about how to interpret these states...

Note added: I thought my answer was clear, but let me summarise by answering the last 2 paragraphs of the question:

YES, my models are always equivalent to a "Bayes reduction of the global wave function"; if you can calculate how the probability distribution $\rho$ evolves, you are done.

But alas, you would need a classical computer with Planckian proportions to do that, because the CA is a universal computer. So if you want to know how the distributions behave at space and time scales much larger than the Planck scales, the only thing you can do is do the mapping onto a quantum Hilbert space. QM is a substitute, a trick. But it works, and at macroscopic scales, it's all you got.