[Physics] Why do people categorically dismiss some simple quantum models

Tags: determinism, models, quantum mechanics

Deterministic models. Clarification of the question:

The problem with these blogs is that people are inclined to start yelling at each other. (I admit, I got infected and it's difficult not to raise one's electronic voice.) I want to ask my question without an entourage of polemics.

My recent papers were greeted with scepticism. I've no problem with that. What disturbs me is the general reaction that they are "wrong". My question is summarised as follows:

Did any of these people actually read the work and can anyone tell me where a mistake was made?

Now the details. I can't help being disgusted by the "many worlds" interpretation, or the Bohm-de Broglie "pilot waves", and even the idea that the quantum world must be non-local is difficult to buy. I want to know what is really going on, and in order to get some ideas, I construct models with various degrees of sophistication. These models are of course "wrong" in the sense that they do not describe the real world and do not generate the Standard Model, but one can imagine starting from such simple models and adding more and more complicated details, in various stages, to make them look more realistic.

Of course I know what the difficulties are when one tries to underpin QM with determinism. Simple probabilistic theories fail in an essential way. One or several of the usual assumptions made in such a deterministic theory will probably have to be abandoned; I am fully aware of that. On the other hand, our world seems to be extremely logical and natural.

Therefore, I decided to start my investigation at the other end. Make assumptions that later surely will have to be amended; make some simple models, compare these with what we know about the real world, and then modify the assumptions any way we like.

The no-go theorems tell us that a simple cellular automaton model is not likely to work. One way I tried to "amend" the models was to introduce information loss. At first sight this would carry me even further away from QM, but if you look a little more closely, you find that one can still introduce a Hilbert space, only it becomes much smaller and may become holographic, which is something we may actually want. If you then realize that information loss makes any mapping from the deterministic model to QM states fundamentally non-local (while the physics itself stays local), then maybe the idea becomes more attractive.
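
As a toy illustration of this counting (my own sketch, not one of the models in the papers): with a non-invertible update rule, automaton states that merge within finite time are lumped into equivalence classes, and only the classes label basis vectors of the Hilbert space, which is therefore smaller than the space spanned by the original automaton states. The size `N` and the map `step` below are arbitrary choices, purely for illustration.

```python
# Toy information-losing automaton: states that merge are identified,
# and each equivalence class gets one basis vector of the Hilbert space.

N = 8

def step(s):
    """Arbitrary many-to-one update rule, purely illustrative."""
    return (s * s) % N

def merge(a, b, steps=2 * N):
    """True if the two states evolve into the same state within a finite time."""
    for _ in range(steps):
        if a == b:
            return True
        a, b = step(a), step(b)
    return a == b

# Build equivalence classes; each class labels one basis vector.
classes = []
for s in range(N):
    for c in classes:
        if merge(s, c[0]):
            c.append(s)
            break
    else:
        classes.append([s])

print("automaton states:", N)
print("equivalence classes (Hilbert-space dimension):", len(classes))  # -> 2
```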

Now the problem with this is that, again, the assumptions one has to make are too big, and the math is quite complicated and unattractive.
So I went back to a reversible, local, deterministic automaton and asked: To what extent does this resemble QM, and where does it go wrong? The idea in mind is that we will alter the assumptions later, maybe add information loss or put in an expanding universe; but first I want to know what goes wrong.

And here is the surprise: in a sense, nothing goes wrong. All you have to assume is that we use quantum states, even if the evolution laws themselves are deterministic, so that the probability distributions are given by quantum amplitudes. The point is that, when describing the mapping between the deterministic system and the quantum system, there is a lot of freedom: for any one periodic mode of the deterministic system, you can define a common contribution to the energy for all states in that mode, and this introduces a large number of arbitrary constants.
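
To see where these constants come from in the simplest possible setting, here is a minimal sketch; the automaton is just a bare cyclic permutation, chosen only for illustration, and nothing in it is specific to the models in the papers.

```python
import numpy as np

# Toy periodic automaton: N ontic states visited cyclically, one step per unit of time.
N = 6
U = np.roll(np.eye(N), 1, axis=0)      # one-step evolution = cyclic permutation (unitary)

# The eigenvalues of U are pure phases exp(i*phi_k), with phi_k = 2*pi*k/N up to ordering.
lam = np.linalg.eigvals(U)
phi = np.angle(lam)

# Any Hamiltonian with U = exp(-i*H) has eigenvalues fixed only modulo 2*pi,
# and a constant delta common to the whole mode shifts U by an unobservable
# overall phase.  This is the freedom referred to above:
#     E_k = -phi_k + 2*pi*m_k + delta,   m_k integers, delta real.
m_k = np.arange(N)                     # one arbitrary choice of the integers m_k
delta = 0.0                            # common energy offset of this mode
E = -phi + 2 * np.pi * m_k + delta

# Check: this H reproduces the automaton step (same spectrum on the unit circle).
print(np.allclose(np.sort_complex(np.exp(-1j * E)), np.sort_complex(lam)))   # True
```

Different choices of the integers $m_k$ and of the common offset $\delta$ give different Hamiltonians that all generate the same deterministic evolution; that is the freedom being exploited.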

Using this freedom I arrive at quite a few models that I happen to find interesting. Starting with deterministic systems, I end up with quantum systems. I mean real quantum systems, not any of those ugly concoctions. On the other hand, they are still a long way from the Standard Model, or from anything else that shows decent, interacting particles.

Except string theory. Is the model I constructed a counterexample, showing that what everyone tells me about fundamental QM being incompatible with determinism is wrong? No, I don't believe that. The idea was that, somewhere, I will have to modify my assumptions, but maybe the usual assumptions made in the no-go theorems will have to be looked at as well.

I personally think people are too quick in rejecting "superdeterminism". I do reject "conspiracy", but that might not be the same thing. Superdeterminism simply states that you can't "change your mind" (about which component of a spin to measure) by "free will" without also modifying the deterministic modes of your world in the distant past. That is obviously true in a deterministic world, and maybe this is an essential fact that has to be taken into account. It does not imply "conspiracy".

Does someone have a good, or better, idea about this approach, without name-calling? Why are some of you so strongly opinionated that it is "wrong"? Am I stepping on someone's religious feelings? I hope not.

References:

"Relating the quantum mechanics of discrete systems to standard canonical quantum mechanics", arXiv:1204.4926 [quant-ph];

"Duality between a deterministic cellular automaton and a bosonic quantum field theory in $1+1$ dimensions", arXiv:1205.4107 [quant-ph];

"Discreteness and Determinism in Superstrings", arXiv:1207.3612 [hep-th].


Further reactions to the answers given. (Writing this as a "comment" failed, and writing it as an "answer" generated objections. I'll try to erase the "answer" that I should not have put there…)

First: thank you for the elaborate answers.

I realise that my question raises philosophical issues; these are interesting and important, but not my main concern. I want to know why I find no technical problem while constructing my model. I am flattered by the impression that my theories were so "easy" to construct. Indeed, I made my presentation as transparent as possible, but it wasn't easy. There are many blind alleys, and not all models work equally well. For instance, the harmonic oscillator can be mapped onto a simple periodic automaton, but then one does hit upon technicalities: the Hamiltonian of a periodic system seems to be unbounded above and below, while the harmonic oscillator has a ground state. The time-reversible cellular automaton (CA) that consists of two steps $A$ and $B$, where both $A$ and $B$ can be written as the exponential of a physically reasonable Hamiltonian, is itself much more difficult to express as a Hamiltonian theory, because the Baker-Campbell-Hausdorff (BCH) series does not converge. Also, explicit $3+1$ dimensional QFT models resisted my attempts to rewrite them as cellular automata. This is why I was surprised that the superstring seems to work so nicely; but even here, quite a few tricks had to be invented to achieve this.
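
For definiteness, the two technicalities can be written out explicitly (standard formulas, with the time step set to one). A strictly periodic automaton with period $T$ has $U(T)=\mathbb{1}$, so its Hamiltonian eigenvalues are constrained to

$$E_n=\frac{2\pi n}{T},\qquad n\in\mathbb{Z},$$

which is unbounded above and below, whereas the harmonic oscillator has $E_n=\omega\left(n+\tfrac12\right)$ with $n=0,1,2,\dots$ and a ground state. And for the two-step automaton $U=e^{-iA}\,e^{-iB}$, the candidate Hamiltonian from the BCH series,

$$H=A+B-\tfrac{i}{2}[A,B]+\dots,$$

need not converge.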

@RonMaimon: I repeat here what I said in a comment, because the 600-character limit there distorted my text too much. You gave a good exposition of the problem in earlier contributions: in a CA, the "ontic" wave function of the universe can only be in specific modes of the CA. This means that the universe can only be in states $\psi_1,\ \psi_2,\ …$ that have the property $\langle\psi_i\,|\,\psi_j\rangle=\delta_{ij}$, whereas the quantum world that we would like to describe allows for many more states that are not at all orthonormal to each other. How could these states ever arise? I summarise, with apologies for the repetition:

  • We usually think that Hilbert space is separable, that is, inside every infinitesimal volume element of this world there is a Hilbert space, and the entire Hilbert space is the tensor product of all of these.
  • Normally, we assume that any of the states in this joint Hilbert space may represent an "ontic" state of the Universe.
  • I think this might not be true. The ontic states of the universe may form a much smaller class of states $\psi_i$; in terms of CA states, they must form an orthonormal set. In terms of "Standard Model" (SM) states, this orthonormal set is not separable, and this is why, locally, we think we have not only the basis elements but also all superpositions.
    The orthonormal set is then easy to map back onto the CA states.

I don't think we have to talk about a non-denumerable number of states, but the number of CA states is extremely large. In short, the mathematical system allows us to choose: either take all CA states, in which case the orthonormal set is large enough to describe all possible universes, or take the much smaller set of SM states, in which case you also need many superimposed states to describe the universe. The transition from one description to the other is natural and smooth in the mathematical sense.

I suspect that, in this way, one can see how a description that is not quantum mechanical at the CA level (admitting only "classical" probabilities) can "gradually" force us to accept quantum amplitudes when turning to larger distance scales and limiting ourselves to much lower energies. In words, all of this might sound crooked and vague, but in my models I think I am forced to think this way simply by looking at the expressions: in terms of the SM states, I could easily decide to accept all quantum amplitudes, but when turning to the CA basis, I discover that superpositions are superfluous; they can be replaced by classical probabilities without changing any of the physics, because in the CA the phase factors in the superpositions will never become observable.

@Ron I understand that what you are trying to do is something else. It is not clear to me whether you want to interpret $\delta\rho$ as a wave function. (I am not worried about the absence of $\mathrm{i}$, as long as the minus sign is allowed.) My theory is much more direct; I use the original "quantum" description with only conventional wave functions and conventional probabilities.


(New since Sunday Aug. 20, 2012)

There is a problem with my argument. (I correct some statements I had put here earlier.) I have to work with two kinds of states: 1: the template states, used whenever you do quantum mechanics, which allow for any kind of superposition; and 2: the ontic states, the set of states that form the basis of the CA. The ontic states $|n\rangle$ are all orthonormal: $\langle n|m\rangle=\delta_{nm}$, so no superpositions of them are allowed (unless you want to construct a template state, of course). One can then ask the question: How can it be that we (think we) see superimposed states in experiments? Aren't experiments only seeing ontic states?

My answer has always been: Who cares about that problem? Just use the rules of QM. Use the templates to do any calculation you like, compute your state $|\psi\rangle$, and then note that the CA probabilities, $\rho_n=|\langle n|\psi\rangle|^2$, evolve exactly as probabilities are supposed to do.
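
This is easy to check in a toy setting: if the ontic evolution is a permutation of the CA basis, the Born weights in that basis are simply carried along with the permutation, and the phases of $\langle n|\psi\rangle$ drop out. A minimal sketch (the state and the permutation are chosen at random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ontic (CA) basis: N orthonormal states; the automaton step is a permutation of them.
N = 5
perm = rng.permutation(N)            # an arbitrary deterministic update rule
U = np.eye(N)[perm]                  # permutation matrix: (U psi)_i = psi_{perm[i]}

# An arbitrary template state |psi>, with arbitrary phases.
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)

rho = np.abs(psi) ** 2               # CA probabilities rho_n = |<n|psi>|^2

# One quantum step versus one classical step: the Born weights are simply
# reshuffled by the permutation, independently of the phases of <n|psi>.
rho_quantum = np.abs(U @ psi) ** 2
rho_classical = rho[perm]
print(np.allclose(rho_quantum, rho_classical))   # True
```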

That works, but it leaves the question unanswered, and for some reason, my friends on this discussion page get upset by that.

So I started thinking about it. I concluded that the template states can be used to describe the ontic states, but this means that, somewhere along the line, they have to be reduced to an orthonormal set. How does this happen? In particular, how can it be that experiments strongly suggest that superpositions play extremely important roles, while according to my theory, somehow, these are demoted by saying that they aren't ontic?

Looking at the math expressions, I now tend to think that orthonormality is restored by "superdeterminism", combined with vacuum fluctuations. The thing we call vacuum state, $|\emptyset\rangle$, is not an ontological state, but a superposition of many, perhaps all, CA states. The phases can be chosen to be anything, but it makes sense to choose them to be $+1$ for the vacuum. This is actually a nice way to define phases: all other phases you might introduce for non-vacuum states now have a definite meaning.

The states we normally consider in an experiment are usually orthogonal to the vacuum. If we say that we can do experiments with two states, $A$ and $B$, that are not orthogonal to each other, this means that these are template states; it is easy to construct such states and to calculate how they evolve. However, it is safe to assume that, actually, the ontological states $|n\rangle$ with non-vanishing inner product with $A$ must be different from the states $|m\rangle$ that occur in $B$, so that, in spite of the template, $\langle A|B\rangle=0$. This is because the universe never repeats itself exactly. My physical interpretation of this is "superdeterminism": if, in an EPR or Bell experiment, Alice (or Bob) changes her (his) mind about what to measure, she (he) works with states $m$ which all differ from all states $n$ used previously. In the template states, all one has to do is assume at least one change in one of the physical states somewhere else in the universe. The contradiction then disappears.
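
In formulas, this is all the argument needs: if the ontic supports of the two template states are disjoint, orthogonality follows from the orthonormality of the CA basis alone,

$$\langle A|B\rangle=\sum_{n\in S_A}\sum_{m\in S_B}a_n^{*}\,b_m\,\langle n|m\rangle=\sum_{n\in S_A}\sum_{m\in S_B}a_n^{*}\,b_m\,\delta_{nm}=0,$$

where $|A\rangle=\sum_{n\in S_A}a_n|n\rangle$, $|B\rangle=\sum_{m\in S_B}b_m|m\rangle$, and the ontic supports satisfy $S_A\cap S_B=\emptyset$.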

The role of vacuum fluctuations is also unavoidable when considering the decay of an unstable particle.

I think there's no problem with the above arguments, but some people find it difficult to accept that the working of their minds may have any effect at all on vacuum fluctuations, or the converse, that vacuum fluctuations might affect their minds. The "free will" of an observer is at risk; people won't like that.

But most disturbingly, this argument would imply that what my friends have been teaching at Harvard and other places, for many decades as we are told, is actually incorrect. I want to stay modest; I find this disturbing.

A revised version of my latest paper has now been sent to the arXiv (it will probably be available from Monday or Tuesday). Thanks to you all. My conclusion did not change, but I now have more precise arguments concerning Bell's inequalities and what vacuum fluctuations can do to them.

Best Answer

I can tell you why I don't believe in it. I think my reasons are different from most physicists' reasons, however.

Regular quantum mechanics implies the existence of quantum computation. If you believe in the difficulty of factoring (and a number of other classical problems), then a deterministic underpinning for quantum mechanics would seem to imply one of the following.

  • There is a classical polynomial-time algorithm for factoring and other problems which can be solved on a quantum computer.
  • The deterministic underpinnings of quantum mechanics require $2^n$ resources for a system of size $O(n)$.
  • Quantum computation doesn't actually work in practice.

None of these seem at all likely to me. For the first, it is quite conceivable that there is a polynomial-time algorithm for factoring, but quantum computation can solve lots of similar periodicity problems, and one can argue that there cannot be a single classical algorithm that solves all of them; you would need a different classical algorithm for each problem that a quantum computer can solve by period finding.

For the second, deterministic underpinnings of quantum mechanics that require $2^n$ resources for a system of size $O(n)$ are really unsatisfactory (but maybe quite possible ... after all, the theory that the universe is a simulation on a classical computer falls in this class of theories, and while truly unsatisfactory, can't be ruled out by this argument).
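
To put a rough number on the second option (a purely illustrative count, not tied to any particular deterministic model): storing a generic state of $n$ qubits as a classical state vector takes $2^n$ complex amplitudes.

```python
# Classical cost of storing a generic n-qubit state: 2**n complex amplitudes,
# 16 bytes each (complex128).  Purely illustrative of the "2^n resources" option.
for n in (10, 30, 50, 300):
    amplitudes = 2.0 ** n
    print(f"n = {n:3d}: {amplitudes:.2e} amplitudes, {amplitudes * 16 / 1e9:.2e} GB")
```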

For the third, I haven't seen any reasonable way in which you could make quantum computation impossible while still maintaining consistency with current experimental results.
