I once watched an online seminar by Robert Spekkens, who said something to the effect that no-go theorems are interesting because they put constraints on what an epistemic interpretation of quantum mechanics can look like. Any no-go theorem makes a certain set of assumptions, and if the theorem is correct, we know that we must give up at least one of those assumptions if we're to construct a successful theory.
The Pusey-Barrett-Rudolph paper spells out some of its assumptions. (They do it most explicitly in the concluding paragraphs.) There may well be additional unmentioned assumptions (e.g. causality), but the ones they specifically mention are:
There is an objective physical state $\lambda$ for any quantum system.
There exists some $\lambda'$ that can be shared between some pair of distinct quantum states $\psi_1$ and $\psi_2$. That is, $p(\lambda=\lambda' | \psi=\psi_1)$ and $p(\lambda=\lambda' | \psi=\psi_2)$ are both non-zero. (This is what it means for an interpretation to be epistemic, according to Spekkens' definition.)
The outcomes of measurements depend only on $\lambda$ and the settings of the measurement apparatus (though the dependence may be stochastic).
Spatially separated systems prepared independently have separate and independent $\lambda$'s.
It is from these assumptions that they derive a contradiction. Any theory that drops at least one of them is unaffected by their result.
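To make the second and fourth assumptions concrete, here is a minimal toy sketch (my own illustration with made-up distributions, not anything from the PBR paper): a finite set of ontic states, two preparations whose distributions over those states overlap, and a product distribution for two independently prepared systems.

```python
import numpy as np

# Toy ontological model (illustrative only, not from the PBR paper):
# a quantum system is assigned one of four ontic states lambda_0..lambda_3,
# and each preparation |psi> induces a probability distribution over them.
ontic_states = ["lambda_0", "lambda_1", "lambda_2", "lambda_3"]

# Assumption 2 (epistemic overlap): both distributions below give non-zero
# weight to lambda_1, so that single ontic state is compatible with either
# preparation psi_1 or psi_2.
p_lambda_given_psi1 = np.array([0.5, 0.5, 0.0, 0.0])
p_lambda_given_psi2 = np.array([0.0, 0.5, 0.5, 0.0])

overlap = np.minimum(p_lambda_given_psi1, p_lambda_given_psi2).sum()
print("overlap between the two epistemic distributions:", overlap)  # > 0

# Assumption 4 (preparation independence): two systems prepared
# independently get a product distribution over pairs (lambda_A, lambda_B).
p_joint = np.outer(p_lambda_given_psi1, p_lambda_given_psi2)
print("joint distribution sums to 1:", np.isclose(p_joint.sum(), 1.0))
```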
I'm fairly sure that the standard formulation of Bohmian mechanics violates the second one, i.e. Bohmian mechanics is not an epistemic theory in the sense of Spekkens. This is because in Bohmian mechanics the physical state $\lambda$ consists of both the particle's real position and the "quantum potential", and the latter is in a one-to-one relationship with the quantum state.
If I understand correctly, you're suggesting that you could instead think of the physical state, $\lambda$, as consisting only of the positions and velocities of the particles, with the non-local potential being considered part of the equations of motion instead. But in this case, the third assumption above is violated, because you still need to know the potential, in addition to $\lambda$, in order to predict the outcomes of measurements. Since this formulation would violate the assumptions of Pusey, Barrett and Rudolph's argument, their result would not apply to it.
In your updated question, you clarify that your suggestion is to fix the wavefunction to a particular value. In that case it's surely true that Bohmian mechanics reduces to particles moving according to deterministic but non-local equations of motion. But then you have only a partial model of quantum mechanics, because you can no longer say anything about what happens if you change the wavefunction. My strong suspicion is that if you take this approach, you will end up with an epistemic model, but it will be a model of only a restricted subset of quantum mechanics, and consequently Pusey et al.'s result will not apply to it.
Is quantum mechanics on a measurement level a deterministic theory or a probability theory?
Probability theory. Evidence: when physicists perform quantum measurements, they find that the results of individual runs are unpredictable. Only the frequencies over many runs are predictable, and these match the theoretical predictions of quantum mechanics.
How can this possibly be consistent with unitarity as described above?
During a quantum measurement (measuring a system S by an apparatus A) the complete system S+A, viewed at the microscopic level, undergoes unitary evolution. During that evolution the system S becomes entangled with the apparatus A. However, by experimental design, this entanglement, when viewed in a macroscopic approximation, is seen to have some simplifying features:
a. The apparatus is in a mixed state of pointer states
b. The eigenvectors of some observable of S have become coupled to the pointer states
c. Off-diagonal "interference" terms have become suppressed by decoherence due to the many internal degrees of freedom of A.
Owing to the special nature of these pointer states of A (from OP "some many-particle systems may well be approximated as classical and can store the information of measurement outcomes") we now have an objective fact about our universe.
Only one of the pointer states has in fact actually occurred in our universe (we can make this statement whether or not a physicist actually reads the pointer and discovers which universe we are actually in).
We can then make the inference that for this particular run of the S+A interaction, the system S in fact belongs to the subensemble giving rise to the occurrence of this pointer state. We can make this reduction of the original ensemble based on this objective information about our universe. Restricted to this subensemble, we still have unitary evolution when viewed at the exact microscopic level.
Disclaimer: I don't know whether this really makes any sense, but this is what the reference referred to by OP seems to be saying.
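As a rough numerical illustration of features (a)–(c) above (my own sketch, not taken from the reference referred to by the OP), one can model the many internal degrees of freedom of A as a large Hilbert space and check that the off-diagonal terms of the system's reduced density matrix are suppressed once the pointer states are effectively orthogonal:

```python
import numpy as np

# Sketch: a qubit S in superposition couples to an apparatus A whose many
# internal degrees of freedom are modeled as a d-dimensional Hilbert space.
rng = np.random.default_rng(0)
d = 2000                                   # "many internal degrees of freedom"

a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)      # system amplitudes (toy values)

def random_state(dim):
    """Normalized random vector standing in for a pointer state."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

A0, A1 = random_state(d), random_state(d)  # pointer states coupled to |0>, |1>

# Unitary pre-measurement produces a|0>|A0> + b|1>|A1>.  Tracing out A gives
# the reduced density matrix of S, whose off-diagonal element is
# a * conj(b) * <A1|A0>.
coherence = a * np.conj(b) * np.vdot(A1, A0)
rho_S = np.array([[abs(a) ** 2, coherence],
                  [np.conj(coherence), abs(b) ** 2]])

print("off-diagonal magnitude:", abs(rho_S[0, 1]))  # of order 1/sqrt(d), tiny
```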
Follow-up question: so can we say that QM is a probability theory for practical purposes but deterministic in principle?
No, I think not. Here is the confusion: having banished the need for explicit wavefunction collapse from the QM formalism, it seems that all we are left with is deterministic unitary evolution of the wavefunction of our closed system, and hence surely QM is deterministic. But no: the indeterminism in the outcomes of measurements is still present in the wavefunction.
In fact, the QM formalism tells us precisely when it can be deterministic and when it cannot: a measurement outcome is determined whenever the quantum state is an eigenvector of the operator corresponding to the measurement in question. Remarkably, from this one postulate it is possible to derive that quantum mechanics is probabilistic (i.e. we can derive the Born rule).
Explicitly, we can show that it is deterministic that if the evolution of S+A is run $N$ times (with $N \rightarrow \infty$) then the frequencies of the different results will follow precisely the Born rule probabilities. However, for a single run there is no such determinism; it is only determined that there will be an outcome.
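A quick numerical sketch of that frequency statement (my own illustration; the state and its probabilities are made up for the example): sample single-run outcomes according to the Born rule and compare the empirical frequencies with the Born probabilities as $N$ grows.

```python
import numpy as np

# For |psi> = a|0> + b|1> measured in the computational basis, the Born rule
# gives p(0) = |a|^2 and p(1) = |b|^2.  The values below are assumed purely
# for illustration.
rng = np.random.default_rng(42)
born_probs = np.array([0.3, 0.7])          # |a|^2, |b|^2 for the toy state

for N in (10, 1_000, 100_000):
    outcomes = rng.choice([0, 1], size=N, p=born_probs)
    freq = np.bincount(outcomes, minlength=2) / N
    print(f"N = {N:>7}: frequencies = {freq}, Born probabilities = {born_probs}")

# Individual runs are unpredictable; only the frequencies over many runs
# approach the Born-rule values.
```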
This approach to QM is described by Arkani-Hamed here.
Edit
For a more advanced discussion of these ideas I recommend Is the statistical interpretation of Quantum Mechanics dead?
Best Answer
You seem to be rather confused regarding those terms. The terms $\psi$-epistemic and $\psi$-ontic are mutually exclusive when describing an interpretation of quantum mechanics.
Both of these terms are possible characteristics for an ontological model: a description of the set $\Lambda$ of possible "complete descriptions of reality", which are often denoted by $\lambda$ and termed ontic states.
In other words, an ontological model is a description of the things that "exist" in the real world. On that stage, there are two types: $\psi$-ontic models, where the wavefunction "exists", and $\psi$-epistemic models, where it doesn't.
More concretely:
In $\psi$-ontic models, the wavefunction is a physical property of the "real" state of the world. That is, if I were to obtain a complete description of reality for the state of the system, then I could deduce the wavefunction $\psi$ from that state. Graphically, such models look like this:
Note, however, that this does not fully rule out interpretations of $\psi$ as a statistical quantity: it can still be a distribution over a set of real states, with each wavefunction corresponding to a distinct set of states.
As a subset of these models, if the real state of the system turns out to be fully informationally equivalent to the wavefunction, the model is called $\psi$-complete.
In such a model, if I know the wavefunction, then I know all that there is to know about the system. This rules out, e.g., hidden variables.
In $\psi$-epistemic models, the wavefunction is not a physical property, but rather a statistical quantity and really just a description of our state of knowledge about the system. More concretely, a model is called $\psi$-epistemic if it allows the existence of two different wavefunctions that are consistent with the same "real" state of the system.
In particular, this means that you cannot deduce the wavefunction from the ontic state of the world.
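As a toy illustration of this distinction (my own example with made-up distributions, not anything from the paper cited below), one can represent each wavefunction by a distribution over a finite set of ontic states and check whether any single ontic state is compatible with two different wavefunctions:

```python
import numpy as np

# Each row gives p(lambda | psi) over a finite set of ontic states.

def is_psi_epistemic(distributions):
    """Return True if two distinct wavefunctions share an ontic state,
    i.e. some ontic state has non-zero probability under both."""
    for i in range(len(distributions)):
        for j in range(i + 1, len(distributions)):
            if np.any((distributions[i] > 0) & (distributions[j] > 0)):
                return True
    return False

# psi-ontic toy model: the supports never overlap, so lambda determines psi.
ontic_model = np.array([[0.5, 0.5, 0.0, 0.0],
                        [0.0, 0.0, 0.7, 0.3]])

# psi-epistemic toy model: the second ontic state is compatible with both
# wavefunctions, so knowing the ontic state does not fix psi.
epistemic_model = np.array([[0.5, 0.5, 0.0, 0.0],
                            [0.0, 0.5, 0.5, 0.0]])

print(is_psi_epistemic(ontic_model))      # False
print(is_psi_epistemic(epistemic_model))  # True
```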
In terms of how you phrased it in the question,
that is correct but not quite there. In $\psi$-epistemic models QM isn't a complete description of reality, but that is also the case in $\psi$-ontic models that are not $\psi$-complete.
For more details, see the paper that (to my knowledge) introduced these terms with precise definitions:
Mathematica source for the graphics: Import["http://goo.gl/NaH6rM"]["http://i.stack.imgur.com/vtA9o.png"].