I think the experiment you are proposing is not possible in the way you want.
Let us say we produce two photons in an electron-positron annihilation with total momentum zero. (Since I don't see an easy way to produce entangled electrons I will talk about photons here, but I think this is not important for the argument.) Those two photons are of course entangled in momentum: if one has momentum $\vec p$, the other one has momentum $-\vec p$.
But in order to make this statement you have to make a momentum measurement on the initial state, i.e. know that the total momentum is zero within a certain $\Delta p$. But then, by the uncertainty relation, you only know the position where the photons were emitted with an uncertainty $\Delta x \propto (\Delta p)^{-1}$.
Now you can have two scenarios:
Either your double slit is small enough and far enough away that, due to the uncertainties $\Delta p$ and $\Delta x$, you do not know which slit your photon goes through. Or you can still tell (with some certainty).
In the second case there will never be an interference pattern. So no need for entanglement to destroy it.
But in the first case, due to the uncertainty $\Delta x$, measuring the position (by determining which slit your photon takes) does not pin down the entangled photon's position precisely enough to tell which slit it will go through. Therefore you will see interference on both sides.
So an EPR like measurement is not possible in the experimental setup you propose.
I would assume that in general you need commuting observables, like spin and position in the Stern-Gerlach experiment, in order to do an EPR measurement. But I haven't thought that through yet.
Addendum, 03-19-2014:
Forget about the second photon for a while. The first photon starts in a position state which is a Gaussian distribution around $\vec x_0$ and a momentum state which is a Gaussian around $\vec p_0$. After some time $t$ its position has evolved into a Gaussian of $\mu$ times the width around $\vec x_0 + \vec p_0 t$ (mass set equal to 1), while the momentum state is now $1/\mu$ times the width around $\vec p_0$. So while your spatial superposition gets larger - and thus easier to measure with a double slit - the superposition in the momentum state, in which you have the entanglement, gets smaller. You don't gain anything from entanglement, since your momentum wave function is so narrow that you know the momentum anyway.
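The spreading of the spatial wave packet can be illustrated numerically. A minimal sketch of my own (assuming $\hbar = m = 1$ and a minimum-uncertainty Gaussian of initial width `sigma0`; it only tracks the growing position width):

```python
import numpy as np

def position_width(t, sigma0):
    """Spatial width of a free Gaussian wave packet after time t (hbar = m = 1)."""
    sigma_p = 1.0 / (2.0 * sigma0)  # minimum-uncertainty momentum width
    return np.sqrt(sigma0**2 + (sigma_p * t)**2)

sigma0 = 1.0
widths = [position_width(t, sigma0) for t in (0.0, 2.0, 10.0)]
# the spatial superposition grows with t, as the argument above requires
```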
It is actually not important to have position and momentum for this. Just take any non-commuting observables A and B, say with eigenstates A+, A-, B+, B-, and take two states S1 and S2 that are entangled in A. So measuring S1 to be in A+ implies S2 is in A-, and vice versa. But what you want is to measure whether S1 is in B+ or B- and from this conclude whether S2 is in B+ or B-. And since A and B do not commute, measuring B with some certainty gives you a high uncertainty on A; that is, to know whether S1 is in B+ or B- you completely lose the information about whether it is in A+ or A-. So you cannot say anything about S2. On the other hand, as long as you are still in an eigenstate of A and know what to expect for the A measurement of S2, you don't know anything about the result of the B measurement.
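Here is a small numerical check of this for two qubits, taking A as the computational (Z) basis and B as the Hadamard (X) basis. As an assumption on my part, the pair "correlated in A" is modelled as the mixed state $\tfrac12\big(|A{+}A{-}\rangle\langle A{+}A{-}| + |A{-}A{+}\rangle\langle A{-}A{+}|\big)$:

```python
import numpy as np

# A basis: computational (Z) basis; B basis: Hadamard (X) basis
plus  = np.array([1.0, 1.0]) / np.sqrt(2)   # |B+>
minus = np.array([1.0, -1.0]) / np.sqrt(2)  # |B->

# Pair correlated in A: equal mixture of |A+,A-> and |A-,A+>  (an assumption,
# modelling the A correlation classically)
ket01 = np.kron([1.0, 0.0], [0.0, 1.0])
ket10 = np.kron([0.0, 1.0], [1.0, 0.0])
rho = 0.5 * (np.outer(ket01, ket01) + np.outer(ket10, ket10))

# A-basis outcomes are perfectly anticorrelated ...
p_a_anticorr = ket01 @ rho @ ket01 + ket10 @ rho @ ket10

# ... but every joint B-basis outcome is equally likely: the B result on S1
# tells you nothing about the B result on S2
probs_B = [np.kron(v1, v2) @ rho @ np.kron(v1, v2)
           for v1 in (plus, minus) for v2 in (plus, minus)]
```

All four joint B outcomes come out with probability 1/4, i.e. the B measurements on the two subsystems are completely uncorrelated for this state.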
So in order to do an EPR experiment you need entanglement in the observable you measure or an observable that commutes with it.
Please tell me if my thoughts are wrong.
The photon in transit between the light source and the screen is described by a wavefunction. Specifically, the wavefunction describes a photon that is delocalised i.e. it does not have a well defined position. Because the wavefunction is delocalised it encompasses both slits, which is why we say the photon goes through both slits.
Whenever you interact with this wavefunction you change it, and typically you will change it in such a way as to localise it. This is because the interaction happens at a point, and the result is to localise the photon at that point.
Now, when the photon interacts with the screen this does happen at a point, and indeed the interaction creates a spot on the photographic film/CCD/whatever at the point where the interaction happened. We can't say in advance where the interaction will occur, only that the probability of it happening is given by the wavefunction. So any one photon interacts at a point, but when we take many photons the points where they interact with the screen are distributed according to the wavefunction and together they build up the interference pattern.
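This photon-by-photon buildup is easy to mimic numerically: draw arrival points one at a time from a probability density $|\psi|^2$. A sketch with an illustrative two-slit density (a $\cos^2$ fringe term under a Gaussian envelope; the parameters are mine, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(-5, 5, 2001)
# illustrative two-slit density: fringe term times a single-slit envelope
density = np.cos(3 * x)**2 * np.exp(-x**2 / 4)
density /= density.sum()

# each photon lands at a single point; many photons reproduce the pattern
hits = rng.choice(x, size=50_000, p=density)
counts, edges = np.histogram(hits, bins=100, range=(-5, 5))
# counts now shows the fringes: high near the cos^2 maxima, low near its zeros
```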
However, if you interact with the photon before it has reached the slits then your interaction localises the wavefunction, so the photon can no longer go through both slits. Because the wavefunction immediately prior to the slits has changed, the effect the slits have on the wavefunction changes as well, and therefore so does the final pattern at the screen on the other side of the slits.
I'll pull in my remarks from the (now migrated) comment thread, since this has yet to be properly addressed. To answer your main question,
I would say:
Now, the reason for the above is that the simulation you propose just isn't very interesting. You say that
but that is not the case: anything that such a simulation could show us, we already understand.
It is well understood, since the days of von Neumann, that within formal, unitary quantum mechanics, the effect of measurements is to cause entanglement: if the system is in a superposition of $|A=1\rangle$ and $|A=2\rangle$, say, and you measure $\hat A$, with some detector that goes to $\left|\uparrow\right>$ on $A=1$ and to $\left|\downarrow\right>$ on $A=2$, what you're really generating is the superposition $$ |\Psi\rangle = \alpha \left|A=1\right>\left|\uparrow\right> + \beta \left|A=2\right>\left|\downarrow\right>, $$ i.e. an entangled state between the system and the detector. This state can no longer show interference, because if you take the inner product between the two components, i.e. if you do $$ \bigg< \left|A=1\right>\left|\uparrow\right> ,\, \left|A=2\right>\left|\downarrow\right> \bigg> = \left<A=1|A=2\right> \left<\uparrow\middle|\downarrow\right> = 0 $$ you get zero, because the detector states are orthogonal. This completely kills any chance of interference, but the wavefunction hasn't "collapsed" (yet); if you do a projective measurement on the detector (forcing its wavefunction to "collapse", whatever that means), that extends to the system as well, but that is external to the simulation.
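This can be checked directly. Tracing out the detector, the screen intensity is $|\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}\!\left(\psi_1^*\psi_2\left<\uparrow\middle|\downarrow\right>\right)$, so the interference (cross) term is weighted by the detector-state overlap. A sketch with illustrative path amplitudes (the Gaussians and phases are my choice):

```python
import numpy as np

x = np.linspace(-5, 5, 1001)
# illustrative amplitudes for "through slit 1" / "through slit 2": a common
# envelope with opposite phase gradients, which produces fringes on the screen
psi1 = np.exp(-x**2 / 4) * np.exp(1j * 3 * x)
psi2 = np.exp(-x**2 / 4) * np.exp(-1j * 3 * x)

def pattern(overlap):
    """Screen intensity when the detector states have inner product <up|down> = overlap."""
    return np.abs(psi1)**2 + np.abs(psi2)**2 + 2 * np.real(np.conj(psi1) * psi2 * overlap)

no_which_way = pattern(1.0)  # identical detector states: full interference
which_way    = pattern(0.0)  # orthogonal detector states: the cross term vanishes
```

With `overlap = 0` the pattern is just the smooth sum of the two single-slit envelopes; the fringe minima (e.g. near $x = \pi/6$ with these parameters) are filled in, exactly as the orthogonality argument predicts.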
Let me reiterate the point: anything you could simulate, while keeping to unitary quantum mechanics, would completely fit within the scheme above, and it would not add to our understanding of the thought experiment.
If you do want your simulation to include some form of wavefunction "collapse" (or decoherence or whatever you want to call it), i.e. if you want your simulation to actually say anything useful about the measurement problem, then you're going to need to decide how you handle the information encoded in your which-way detector, and this is where your scheme goes south: in order to simulate anything at all, you essentially need to pre-bake some resolution of the measurement problem into your simulation. Whatever results you get back will just be a restatement of the premises you feed in, and will be subject to all the flaws of the premises.
Given this, all that such a simulation would provide is a very expensive visualization aid for processes that can already be understood analytically, and for which the main difficulty is conceptual. The creation of visualization aids is not without merit, but this one would offer very little towards the resolution of the true conceptual problems of the measurement problem, which have much more to do with what projective measurements mean than with photons and double slits.