I am looking for stochastic methods for solving a very high-dimensional PDE (with one time dimension and a very large number of spatial dimensions), which would reduce it to a lower-dimensional problem, probably at the cost of carrying out a Monte Carlo simulation. Any pointers?
[Math] Stochastic methods for solving very high-dimensional PDE
ap.analysis-of-pdes, stochastic-calculus
Related Solutions
What's the CFL condition of the method for your problem? This is the main thing missing from the formulation of the problem, and it is often at the heart of instabilities of the type you describe. As the Wikipedia page notes, implicit methods are usually better for maintaining a reasonable CFL condition.
The CFL condition tells you the relationship between $\Delta t$ and $\Delta x$ that makes spurious oscillations die down. By working this out in advance, you can choose a method whose stability constraint is better suited to your problem.
The standard way to find the CFL condition is to substitute a wave solution $e^{\omega t + \mathrm{i}kx}$ into the numerical method and read off the numerical dispersion relation between $\omega$ and $k$. The point is that for a $k$ corresponding to an oscillation every $2\Delta x$ (i.e. $k=\pi/\Delta x$) you must have decay, meaning $\operatorname{Re}(\omega)<0$ (the imaginary part is not important).
The condition that the real part is negative is the CFL condition. Now, for your problem, wave solutions are probably not going to be exact solutions, because of the nonconstant $f(x)$, so I would freeze $f$ and $f''$ at constant values just to see how their sizes enter the CFL condition.
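As a concrete illustration of this recipe (a hypothetical model problem, not the nonconstant-coefficient equation itself), here is the discrete analogue for the forward-Euler/centered-difference scheme on the heat equation $u_t = u_{xx}$. Substituting $u_j^n = g^n e^{\mathrm{i}k j\Delta x}$ gives the amplification factor $g(k) = 1 - 4\lambda\sin^2(k\Delta x/2)$ with $\lambda = \Delta t/\Delta x^2$; the role of $\operatorname{Re}(\omega)<0$ is played by $|g|\le 1$, which for the worst mode $k=\pi/\Delta x$ requires $\lambda \le 1/2$:

```python
import numpy as np

# Von Neumann stability check for forward-Euler / centered differences
# on the heat equation u_t = u_xx (a toy stand-in for the real problem).
# Amplification factor: g(k) = 1 - 4*lam*sin^2(k*dx/2), lam = dt/dx^2.
# Stability (|g| <= 1 for all k, including k = pi/dx) requires lam <= 1/2.

def amplification_factor(lam, k, dx):
    return 1.0 - 4.0 * lam * np.sin(k * dx / 2.0) ** 2

dx = 0.01
k_worst = np.pi / dx  # the oscillation every 2*dx discussed above

g_stable = amplification_factor(0.4, k_worst, dx)    # lam = 0.4 <= 1/2
g_unstable = amplification_factor(0.6, k_worst, dx)  # lam = 0.6 >  1/2

print(abs(g_stable) <= 1.0)    # the 2*dx oscillation decays
print(abs(g_unstable) <= 1.0)  # the 2*dx oscillation blows up
```

The same substitute-and-read-off procedure applies to any linear constant-coefficient scheme; only the formula for the amplification factor changes.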
PDE books often discuss classification, but they always restrict attention to second-order equations, especially for one function of several variables, with good reason. The point of a classification is to find categories of PDE whose analysis has many common features, but there really isn't any general classification in that sense, since the world of PDE is a huge zoo once you leave the three familiar families of elliptic, hyperbolic and parabolic. Think about how you would define parabolic PDEs, even in second order: you already need to look beyond the symbol to distinguish $\partial_t u=\partial_{xx} u$ from $0=\partial_{xx} u$.

As the OP points out, the symbol is certainly an important part of the ``classification''. But the symbol is only a part of the tableau, which gives a little more information in an algebraic format; see the book of Bryant et al., Exterior Differential Systems. Even so, systems of differential equations with the same tableau often have very different analysis; think of the famous Lewy counterexample. There are so many very different genera of animals in the zoo that broad classifications don't give us much insight. See also Gromov, Partial Differential Relations, for many examples of PDEs that are locally the same but globally very different, and are nothing like elliptic, hyperbolic or parabolic.

So, question 1: yes. Question 2: hyperbolicity is tricky to define beyond second order, because already in second order, hyperbolic is very different from ultrahyperbolic, so you really need something to distinguish a Lorentzian geometry from a more general pseudo-Riemannian one. On the other hand, your definition of ellipticity is perfect, and it does give us some tools to carry out analysis.
Question 3: partly yes, in that each PDE system gives rise to an algebraic variety, but ultimately no, in that the classification of constant-coefficient PDE systems is much finer than the classification of their symbols (it is in fact exactly the classification of their tableaux). Question 4: yes, you prolong until you hit involution, but the classification of involutive tableaux is not known; it is a huge, messy algebra problem. Question 5: as in biology, it is messy because there are too many very different animals.
Best Answer
It seems to me this question "has not received enough attention" because it conflates two issues: dimensional reduction of a high-dimensional PDE, and stochastic (Monte Carlo) integration of the PDE.
The so-called "curse of dimensionality" refers to the fact that the computational cost of a non-stochastic (grid-based) solution of a PDE grows exponentially with the number of dimensions: each dimension must be discretized, say with $N$ points, so the total number of grid points in $d$ dimensions is $N^d$. Dimensional reduction is then imperative, and one method to achieve it is principal component analysis.
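A quick back-of-the-envelope check of the $N^d$ blow-up (hypothetical grid sizes, not taken from any particular reference): even a modest resolution per dimension becomes astronomically expensive once $d$ grows.

```python
# Total grid points for N points per dimension in d dimensions.
N = 100  # a modest resolution per dimension

def grid_points(N, d):
    return N ** d

for d in (1, 2, 3, 10):
    print(f"d = {d:2d}: {grid_points(N, d):.3e} grid points")
```

At $d = 10$ this is already $10^{20}$ points, far beyond any realistic memory budget, which is exactly why one reaches for dimensional reduction or stochastic methods.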
The computational cost of solving a PDE by Monte Carlo integration grows only linearly with dimension, so no dimensional reduction is needed. The accuracy of this approach is low, however (the statistical error decays only as $M^{-1/2}$ in the number of samples $M$), which is why one tries to avoid resorting to a Monte Carlo method.
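To make the linear-in-$d$ scaling concrete, here is a minimal Feynman–Kac-style sketch (my own toy example, not from the lecture notes): the solution of the heat equation $u_t = \tfrac12\Delta u$ with $u(0,x)=g(x)$ is $u(t,x)=\mathbb{E}[g(x+W_t)]$, so a single point value can be estimated by sampling $d$-dimensional Gaussian increments, at a cost linear in $d$ and with the $O(M^{-1/2})$ statistical error noted above.

```python
import numpy as np

# Feynman-Kac / Monte Carlo sketch: u(t, x) = E[g(x + W_t)] solves
# u_t = (1/2) * Laplacian(u), u(0, x) = g(x). Sampling cost is linear in d.
rng = np.random.default_rng(0)

def u_mc(g, t, x, M=200_000):
    dim = len(x)
    W = rng.normal(scale=np.sqrt(t), size=(M, dim))  # Brownian increments W_t
    return g(x + W).mean()

# Test case with a closed-form answer: g(x) = |x|^2 gives
# u(t, x) = |x|^2 + d*t.
g = lambda y: (y ** 2).sum(axis=-1)
d, t = 50, 0.3
x = np.ones(d)
exact = d + d * t  # |x|^2 = d for x = (1, ..., 1)
estimate = u_mc(g, t, x)
print(f"MC estimate {estimate:.3f} vs exact {exact} in d = {d}")
```

Note that $d = 50$ here would require roughly $N^{50}$ grid points for a deterministic solver, while the Monte Carlo estimator only ever touches arrays of shape $(M, d)$.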
The two approaches to the solution of a high-dimensional PDE, dimensional reduction by principal component analysis and Monte Carlo integration, are compared in these lecture notes.
Upon some more searching, I found one dimensional-reduction scheme with a stochastic component. It goes by the acronym RS-HDMR (Random Sampling High Dimensional Model Representation) and is described here. (The HDMR Wikipedia page could use some expansion...)