Is it possible to accurately simulate any non-trivial physics without computing partial derivatives?
Yes. An example is the nuclear shell model as formulated by Maria Goeppert Mayer in the 1950s. (The same would also apply to, for example, the interacting boson model.) In this type of shell model, you take a nucleus that is close to a closed shell in both neutrons and protons and treat it as an inert core plus some number of particles and holes; e.g., $^{41}\text{K}$ (potassium-41) would be treated as one proton hole coupled to two neutrons. There is some vector space of possible states for these three particles, and there is a Hamiltonian that has to be diagonalized. Diagonalizing the Hamiltonian gives a prediction of the energy levels of the nucleus.
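To make the linear-algebra nature of this concrete, here is a toy sketch (not a real shell-model calculation): the matrix elements below are invented numbers, but the step from Hamiltonian to predicted energy levels is just diagonalization, with no derivatives anywhere.

```python
import numpy as np

# Toy "shell-model" Hamiltonian in an arbitrary 3-state basis.
# The matrix elements (think MeV) are invented for illustration only;
# a real calculation gets them from theory plus fits to data.
H = np.array([
    [-1.20,  0.30,  0.00],
    [ 0.30, -0.50,  0.10],
    [ 0.00,  0.10,  0.80],
])

# Diagonalizing the (real symmetric) Hamiltonian yields the predicted
# energy levels as eigenvalues, in ascending order.
energies, states = np.linalg.eigh(H)
print(energies)
```

The whole "prediction" step is `np.linalg.eigh`; everything hard is hidden in where the entries of `H` come from.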
You do have to determine the matrix elements of the Hamiltonian in whatever basis you've chosen, and there are various methods for estimating these. (They cannot be determined purely from the theory of quarks and gluons, at least not with the present state of the art.) In many cases, I think these estimates are done by some combination of theoretical estimation and empirical fitting of parameters to observed data. If you look at how practitioners have actually estimated them, I'm sure their notebooks contain lots of calculus, including partial derivatives, or else they are recycling other people's results, which were certainly not derived in a world where nobody knew about partial derivatives. But that doesn't mean partial derivatives are really required in order to find them.
As an example, people often use a basis consisting of solutions to the position-space Schrödinger equation for the harmonic oscillator. This is a partial differential equation because it contains the kinetic energy operator, which is basically the Laplacian. But in reality the matrix elements of this operator can be found without ever explicitly writing down a wavefunction in the position basis and calculating a Laplacian; there are purely algebraic methods using raising and lowering operators. And in any case, many of the matrix elements in such models are simply fitted to the data.
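Here is a minimal sketch of the algebraic route, using ladder-operator matrices truncated to the lowest few number states (units $\hbar = m = \omega = 1$). The oscillator energies come out of pure matrix arithmetic; no wavefunction is ever written down and nothing is differentiated.

```python
import numpy as np

# Ladder-operator computation of harmonic-oscillator matrix elements,
# with hbar = m = omega = 1, truncating the basis to the lowest N
# number states |0>, ..., |N-1>.
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation: a|n> = sqrt(n)|n-1>
adag = a.T                                  # creation operator

x = (a + adag) / np.sqrt(2)                 # position operator
p = 1j * (adag - a) / np.sqrt(2)            # momentum operator

# Kinetic + potential energy, assembled purely from matrix products.
H = (p @ p + x @ x).real / 2

# Away from the truncation edge this reproduces E_n = n + 1/2 exactly;
# only the last diagonal entry is polluted by the cutoff.
print(np.diag(H)[:-1])   # 0.5, 1.5, ..., 6.5
```

The only "derivative" in sight, the Laplacian inside $p^2/2m$, has been replaced by matrix multiplication of $a$ and $a^\dagger$.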
The interacting boson model (IBM) is probably an even purer example of this, although I know less about it. It's a purely algebraic model. Although its advocates claim that it is in some sense derivable as an approximation to a more fundamental model, I don't think anyone ever actually has succeeded in determining the IBM's parameters for a specific nucleus from first principles. The parameters are simply fitted to the data.
Looking at this from a broader perspective, here is what I think is going on. If you ask a physicist how the laws of physics work, they will probably say that the laws of physics are all wave equations. Wave equations are partial differential equations. However, all of our physical theories except for general relativity fall under the umbrella of quantum mechanics, and quantum mechanics is perfectly linear. There is a no-go theorem by Gisin that says you basically can't get a sensible theory by adding a nonlinearity to quantum mechanics. Because of the perfect linearity, our physical theories can also just be described as exercises in linear algebra, and we can forget about a specific basis, such as the basis consisting of Dirac delta functions in position space.
In terms of linear algebra, the problem is determining the Hamiltonian. If we don't have any systematic way of finding an appropriate Hamiltonian, then we get a theory that lacks predictive power. Even for a finite-dimensional space (as in the shell model), an $n$-dimensional space has $O(n^2)$ unknown matrix elements in its Hamiltonian. Determining these purely by fitting to experimental data would be a vacuous exercise, since typically the number of observations we have available is $O(n)$. One way to determine all these matrix elements is to require that the theory consist of solutions to some differential equation. But there is no edict from God that says this is the only way. There are other methods, such as algebraic methods that exploit symmetries, and this is the kind of thing that the models described above do, either partially or exclusively.
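As a toy illustration of how a symmetry cuts down the $O(n^2)$ unknowns (the quantum-number labels here are invented for the example): if the Hamiltonian must commute with a symmetry operator, then $\langle i|H|j\rangle = 0$ unless states $i$ and $j$ carry the same conserved quantum number, so only block-diagonal entries remain to be determined.

```python
import numpy as np

# Each of n basis states carries a conserved quantum number (invented
# labels "m" for this sketch).  Symmetry forces <i|H|j> = 0 whenever
# m_i != m_j, leaving only the block-diagonal entries as unknowns.
m = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 3])    # labels of 10 basis states
n = len(m)

# Independent entries of a general real symmetric n x n Hamiltonian.
full = n * (n + 1) // 2

# Entries surviving the symmetry constraint: one symmetric block per
# distinct quantum number.
blocks = [int(np.sum(m == q)) for q in np.unique(m)]
reduced = sum(b * (b + 1) // 2 for b in blocks)

print(full, reduced)   # 55 vs 20 free parameters
```

Even in this tiny example the symmetry removes most of the unknowns; in realistic models with good quantum numbers like angular momentum the reduction is what makes fitting to $O(n)$ data meaningful.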
References
Gisin, N., "Weinberg's non-linear quantum mechanics and supraluminal communications," Physics Letters A 143(1-2):1-2 (1990), http://dx.doi.org/10.1016/0375-9601(90)90786-N
Let $g(x) = e^{-x} f(x)$, so that $f(x) = e^x g(x)$. For a given $x$, $f'(x)$ exists if and only if $g'(x)$ exists, and $g'(x) = e^{-x} (f'(x) - f(x))$. In particular, $f'(x) = f(x)$ if and only if $g'(x) = 0$.
It follows that $f$ is necessarily of the form $f(x) = e^x g(x)$, where $g$ satisfies $g'(x) = 0$ almost everywhere.
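The identity used above is easy to check symbolically; here is a quick SymPy verification with $f$ left as an arbitrary (undefined) differentiable function:

```python
import sympy as sp

# Check that with g(x) = e^{-x} f(x), we get g'(x) = e^{-x} (f'(x) - f(x)).
x = sp.symbols('x')
f = sp.Function('f')

g = sp.exp(-x) * f(x)
identity = sp.simplify(sp.diff(g, x) - sp.exp(-x) * (sp.diff(f(x), x) - f(x)))
print(identity)  # 0
```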
Best Answer
Here is an example for which we have a "natural" nonlinear PDE for which solutions are known to be everywhere differentiable and conjectured, but not yet proved, to be $C^1$.
Suppose that $\Omega$ is a smooth bounded domain in $\mathbb R^d$ and $g$ is a smooth function defined on the boundary, $\partial \Omega$. Consider the prototypical problem in the "$L^\infty$ calculus of variations" which is to find an extension $u$ of $g$ to the closure of $\Omega$ which minimizes $\| Du \|_{L^\infty(\Omega)}$, or equivalently, the Lipschitz constant of $u$ on $\Omega$. When properly phrased, this leads to the infinity Laplace equation $$ -\Delta_\infty u : = \sum_{i,j=1}^d \partial_{ij} u\, \partial_i u \, \partial_j u = 0, $$ which is the Euler-Lagrange equation of the optimization problem.
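As a concrete check, Aronsson's classical example $u(x,y) = |x|^{4/3} - |y|^{4/3}$ is infinity-harmonic away from the axes, and is exactly $C^{1,1/3}$, which is part of the motivation for the conjectured exponent. A symbolic verification on the quadrant $x, y > 0$:

```python
import sympy as sp

# Verify that Aronsson's function u = x^(4/3) - y^(4/3) satisfies the
# infinity Laplace equation on the quadrant x, y > 0 (where |x| = x).
x, y = sp.symbols('x y', positive=True)
u = x**sp.Rational(4, 3) - y**sp.Rational(4, 3)

ux, uy = sp.diff(u, x), sp.diff(u, y)
inf_lap = (sp.diff(u, x, 2) * ux**2
           + 2 * sp.diff(u, x, y) * ux * uy
           + sp.diff(u, y, 2) * uy**2)

print(sp.simplify(inf_lap))  # 0
```

The cross term vanishes, and the $u_{xx} u_x^2$ and $u_{yy} u_y^2$ terms contribute $64/81$ and $-64/81$, which cancel.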
The (unique, weak) solution of this equation (subject to the boundary condition) characterizes the correct notion of minimal Lipschitz extension. It is known to be everywhere differentiable by a result of Evans and Smart: http://math.mit.edu/~smart/differentiability.ae.pdf.
It is conjectured to be $C^{1,1/3}$, or anyway at least $C^1$. It is known to be $C^{1,\alpha}$ for some $\alpha>0$ in dimension $d=2$ (due to O. Savin), but the problem remains open in dimensions $d\geq 3$.
I am unaware of any other situation in PDE where the regularity gets blocked between differentiability and $C^1$. Typically, if you can prove something is differentiable, the proof can be made quantitative enough to give $C^1$ with a modulus.