[Math] The motivation for analytic solutions in Mathematical Physics

Tags: mathematical physics, partial differential equations

I am trying to understand why one cares about finding analytic/theoretical solutions of PDEs when one can simply use numerical methods.

If you tell me, "only mathematicians try to find theoretical solutions and understand them", I can live with that; after all, that is part of what mathematics is all about. But it seems that physicists and engineers also care about theoretical solutions. What is their motivation?

To expand on my question, consider any application involving Bessel functions. Even the simplest PDE leads to a nasty series, and a real-world problem will be messier still, so I doubt that anyone building something specific would work with the Bessel series analytically.

What is to be gained by solving the PDE for a vibrating membrane? Is it because the theoretical solutions imply certain physical laws that govern the process? Perhaps this is what the numerical approach is missing.
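For concreteness, here is the textbook separated solution for a circular membrane of radius $a$, fixed at the rim, governed by the wave equation $u_{tt} = c^2 \nabla^2 u$ (a standard result, sketched here for reference):

```latex
u_{mn}(r,\theta,t)
  = J_m\!\left(\frac{j_{m,n}\,r}{a}\right)
    \bigl(A\cos m\theta + B\sin m\theta\bigr)
    \cos\!\left(\frac{c\,j_{m,n}\,t}{a}\right),
```

where $j_{m,n}$ is the $n$-th positive zero of the Bessel function $J_m$. Even before computing anything, this form hands you the eigenfrequencies $c\,j_{m,n}/a$ and the nodal structure of each mode, which is exactly the sort of qualitative law a purely numerical solution leaves implicit.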

Best Answer

There are a number of reasons I can think of:

  1. An exact solution in terms of special functions allows you to work from tables of these functions, so you only need to have calculations based on a limited set of common functions. Of course, this is less relevant these days with these new-fangled steam-calculator computer things on everyone's desks, but Abramowitz and Stegun is half special function tables for a reason.

  2. Structure. Given explicit solutions, one is far more able to examine the wider properties of the solution. In particular, suppose I have an equation with a parameter in it. How do I study what happens to the solution as the parameter varies, if I don't have special function solutions? How do you know you're seeing all the behaviour?

  3. Wider validity. What if my numerical algorithm doesn't converge? If I have a series, it may be possible to transform it so that it converges much more quickly, or indeed so that it converges at all. This is why theta functions are so useful: the convergence of the usual series may be slow, but applying Jacobi's imaginary transformation makes the convergence much faster (and hence the calculation much shorter). How would I know that without knowing the special function's properties?
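Point 3 is easy to demonstrate. A minimal sketch in plain Python (function name my own invention), comparing the direct series $\theta_3(e^{-\pi t}) = 1 + 2\sum_{n\ge 1} e^{-\pi n^2 t}$ against the same value computed via Jacobi's imaginary transformation $\theta_3(e^{-\pi t}) = t^{-1/2}\,\theta_3(e^{-\pi/t})$:

```python
import math

def theta3(q, tol=1e-15, max_terms=100_000):
    """theta_3(q) = 1 + 2 * sum_{n>=1} q^(n^2); returns (value, terms used)."""
    total = 1.0
    for n in range(1, max_terms + 1):
        term = 2.0 * q ** (n * n)
        total += term
        if term < tol:
            return total, n
    return total, max_terms

t = 0.01  # small t: q = e^(-pi t) is close to 1, so the direct series crawls
direct, n_direct = theta3(math.exp(-math.pi * t))

# Jacobi's imaginary transformation: theta3(e^(-pi t)) = t^(-1/2) theta3(e^(-pi/t))
transformed, n_trans = theta3(math.exp(-math.pi / t))
transformed /= math.sqrt(t)

print(f"direct: {n_direct} terms, transformed: {n_trans} term(s)")
print(f"difference: {abs(direct - transformed):.2e}")
```

For small $t$ the direct series needs dozens of terms while the transformed one is finished after a single term, and the two answers agree to high precision — which you would only know to try if you knew the transformation.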

  4. How do I actually know my calculation is right? Or: How do I know artefacts in my numerical calculation are a result of what I have done, rather than what the function actually does? See the Wilbraham–Gibbs phenomenon in Fourier series: it was (possibly...) first discovered by a chap using a numerical integrator, and seeing these extraneous oscillations near discontinuities.
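Point 4 is also easy to reproduce. A minimal sketch (plain Python, function name hypothetical): sum the Fourier series of a square wave and watch the overshoot next to the jump refuse to die down as terms are added — it settles near the Wilbraham–Gibbs value $(2/\pi)\,\mathrm{Si}(\pi) \approx 1.179$, about 9% above the true value 1. Without the theory, you might well blame your own code for those oscillations.

```python
import math

def square_partial_sum(x, n_terms):
    """Partial Fourier sum of the square wave sgn(sin x):
    (4/pi) * sum over odd harmonics of sin(k x)/k."""
    return (4.0 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

# Scan a fine grid just to the right of the jump at x = 0 and record the peak.
peaks = []
for n in (10, 100, 1000):
    xs = [i * math.pi / 20_000 for i in range(1, 2_000)]
    peaks.append(max(square_partial_sum(x, n) for x in xs))

# The overshoot stays near (2/pi)*Si(pi) ~ 1.179 no matter how many terms we add.
print([round(p, 4) for p in peaks])
```

The peak narrows as `n` grows, but its height does not shrink toward 1 — the signature of the Gibbs phenomenon rather than a bug.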

Oh, and not forgetting the mathematical physicist's answer:

  1. Cos it's cool, dammit!