The main question is: how do you map the CA to reality? You need to say how to describe an experimental situation in terms of the CA variables. If the map is such that an atom is described by a local clump of automaton variables, and a far-away atom is described by another local clump of automaton variables far away, it is flat-out impossible to reproduce quantum mechanics, even in a crude way. This type of model is thoroughly ruled out by Bell inequality violations.

But there is no requirement that the map between atomic observables and CA variables is local. If you imagine that the CA lives on the surface of a holographic screen (as 't Hooft often liked to draw it), then any one atom can be described by gross properties of essentially all the CA variables, nonlocally, while another atom far away is described by a different property of all the CA variables together, so that they are always interacting. But it is conceivable that, statistically, those properties of the CA that describe each atom individually look like they are obeying a wavefunction time evolution.

This type of thing is very hard to rule out; at least, I don't know how you would show that this sort of model can't reproduce quantum mechanics to the extent that it has been tested.

This is something I wonder about off and on. Is it possible, even in principle, to find a CA with a physical number of variables, on the order of the cosmological horizon area divided by the Planck area, which reproduces the observed predictions of quantum mechanics by a horrendously nonlocal identification between the properties of objects and the CA variables?

It is certainly impossible to reproduce *all* of quantum mechanics with a model of this sort. Shor's algorithm for factoring 10,000-digit numbers will certainly fail, because there aren't enough bits and operations in the CA to do the factoring. But we haven't built a quantum computer of this size yet, so this may be seen as a safe prediction of all such models: that quantum computers will fail at a certain not-so-enormous number of qubits.
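A rough back-of-envelope count makes this concrete. The numbers below are my own assumptions, not part of the argument above: I take the cosmological horizon radius as ~1.6e26 m, the Planck length as ~1.6e-35 m, and assume a brute-force simulation of n qubits must track ~2^n amplitudes.

```python
import math

# Assumptions (illustrative, not from the text above):
# - CA bit count ~ (horizon radius / Planck length)^2
# - simulating n qubits classically needs ~2**n amplitudes
R_horizon = 1.6e26   # cosmological horizon radius in meters (rough)
l_planck = 1.6e-35   # Planck length in meters (rough)

ca_bits = (R_horizon / l_planck) ** 2
print(f"CA bits available: ~1e{math.log10(ca_bits):.0f}")   # ~1e122

# Shor on a 10,000-digit number needs n ~ 2 * log2(N) qubits,
# i.e. roughly 2 * 10000 * log2(10) ~ 66,000 qubits.
n_qubits = 2 * 10_000 * math.log2(10)
log10_amplitudes = n_qubits * math.log10(2)
print(f"qubits needed: ~{n_qubits:.0f}")
print(f"amplitudes to track: ~1e{log10_amplitudes:.0f}  (vs ~1e122 CA bits)")
```

So a brute-force state-vector description of the factoring computation would dwarf the total bit budget of any horizon-sized CA by thousands of orders of magnitude, which is the point of the "safe prediction."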

So it is impossible to reproduce full QM, but it might be possible to reproduce a cheap QM, which matches the cheap QM we have observed to date. You must remember that every time we verify a prediction of QM, we are not in a regime where it is doing an exponential computation of large size, precisely because if it were, we wouldn't be able to compute the consequences to compare with experiment in the first place.

The nonlocality can be in space and time together. For previous answers regarding related stuff, see here: Consequences of the new theorem in QM?

## Best Answer

Here are some facts:

As others have said, the evolution of a quantum state, including entanglement, can be simulated arbitrarily well classically with sufficient resources. In fact, modelling the evolution of a quantum system is not even (believed to be) NP-hard; if it were, a quantum computer could solve NP problems! That said, it does generally require exponential resources, due to the exponential growth of the Hilbert space dimension.
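A minimal sketch of what "exponential resources" means here, assuming the standard brute-force state-vector representation (the gate construction is illustrative, not from any particular library):

```python
import numpy as np

# An n-qubit state is a vector of 2**n complex amplitudes; every
# single-qubit gate becomes a 2**n x 2**n matrix via Kronecker products.

def apply_gate(state, gate, target, n):
    """Apply a single-qubit `gate` to qubit `target` of an n-qubit state."""
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                  # start in |000>
state = apply_gate(state, H, 0, n)              # superpose qubit 0

print(len(state))   # 8 amplitudes for 3 qubits
# The exponential wall: 50 qubits already need 2**50 * 16 bytes ~ 18 PB.
print(2**50 * 16 / 1e15, "PB for a 50-qubit state vector")
```

Smarter simulation methods exist, but for generic entangled states nothing evades the exponential scaling of the state vector itself.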

Of course, a (classical) computer can't deterministically predict the outcome of a particular measurement; it can only give the correct probabilities. So it is very important to distinguish between simulating deterministic quantum state evolution classically (which is no problem) and actually replacing quantum mechanics with a classical model (which can't ever happen). The difference comes at the actual measurement.
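A toy illustration of that distinction, assuming Born-rule sampling with NumPy (everything here is illustrative, not a specific model from the answer):

```python
import numpy as np

# The simulation knows the exact state, so probabilities are computed
# deterministically; individual outcomes can only be *sampled*.
rng = np.random.default_rng()

state = np.array([1, 0, 0, 1]) / np.sqrt(2)   # Bell state (|00> + |11>)/sqrt(2)
probs = np.abs(state) ** 2                    # Born-rule probabilities

# Deterministic part: exact probabilities, 0.5 each on |00> and |11>.
print(probs)

# Non-deterministic part: each "measurement" is just a random sample.
outcomes = rng.choice(len(state), size=1000, p=probs)
print(np.bincount(outcomes, minlength=4) / 1000)   # roughly [0.5, 0, 0, 0.5]
```

The simulation reproduces the statistics perfectly, but which outcome occurs on any given run is decided by the classical random number generator, not predicted.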

A computer can also simulate any number of impossible things. You can make a computer simulation where energy disappears, objects travel faster than light, etc etc.

When you run a Bell test in your computer program, at some level what it will do is assign the outcome of one measurement, then communicate that to the other entangled particle, so that both outcomes are correlated in the right way. In other words, the whole program relies on the two "particles", however they are stored in the computer, being close enough to communicate with each other. As a result, a classical computer could never pass a loophole-free Bell inequality test. Specifically, if you load the same program onto two computers and send them far apart, they will never be able to produce measurement outcomes with the same statistics as measurements on two entangled particles would.

Notice once again that it's no problem for both computers to know what state they're supposed to be in before you measure them. It's getting the two measurement outcomes to be properly correlated (in all measurement bases) that just isn't possible.
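For concreteness, here is a small numerical check of the gap between the quantum correlations and anything two non-communicating classical programs can produce, using the standard CHSH setup (the angle choices below are the usual optimal settings, an assumption of this sketch):

```python
import numpy as np

def E(a, b):
    """Singlet-state correlation for spin measurements at angles a and b."""
    return -np.cos(a - b)

# Quantum side: optimal CHSH angles give S = 2*sqrt(2).
a1, a2 = 0.0, np.pi / 2           # Alice's two settings
b1, b2 = np.pi / 4, 3 * np.pi / 4 # Bob's two settings
S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)   # 2*sqrt(2) ~ 2.828

# Classical side: brute force over every local deterministic strategy,
# i.e. outcomes +/-1 fixed in advance for each setting, no communication.
best = 0.0
for oa in [(x, y) for x in (-1, 1) for y in (-1, 1)]:
    for ob in [(x, y) for x in (-1, 1) for y in (-1, 1)]:
        s = abs(oa[0]*ob[0] - oa[0]*ob[1] + oa[1]*ob[0] + oa[1]*ob[1])
        best = max(best, s)
print(best)   # 2 -- the CHSH bound for any local classical model
```

Mixing deterministic strategies can't help (the bound is linear in the probabilities), so the two isolated computers are stuck at 2 while entangled particles reach about 2.83.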