Physics – Need for Non-Analytic Smooth Functions?

analytic-functions, ap.analysis-of-pdes, mp.mathematical-physics, soft-question

Observing the behaviour of a few physicists "in nature", I had the impression that among the mathematical tools they use a lot (along with possibly much more sophisticated maths, of course), there is certainly the Taylor expansion. They have a quantity (function) that they need to approximate: they expand it in a Taylor series, keep the order of approximation that is useful for their purposes, and discard the irrelevant terms.

Apparently, there is little concern about mathematically justifying this procedure, even when the quantity to be approximated is not given in an explicit form that is clearly known to be analytic. Since physics clearly runs into no problems from these mathematical subtleties, this may just mean that the distinction between analytic and smooth functions is somehow irrelevant to the basic equations of physics, or rather to the approximations of their solutions that are empirically testable.

If non-analytic smooth functions are irrelevant to Physics, why is it so?

Are there equations of physical importance in which non-analytic smooth solutions actually are important and cannot safely be treated "as if they were analytic" for approximation purposes?

Remark: analogous questions may arise about Fourier series expansions.

One way the practice might go:

  1. Consider a (differential or otherwise) equation $P(f)=0$ usually with analytic coefficients.
  2. Expand the coefficients in Taylor series around a point in the scale of physical interest.
  3. Discard higher-order terms, obtaining an approximate equation $\tilde{P}(f)=0$ with polynomial coefficients.
  4. Make the ansatz that the solutions $f$ of interest must be analytic.
  5. Find the coefficients of $f$ by hand or by other means.

This leaves open the question of why the ansatz is mathematically justified, given that the equation of interest was $P$, not $\tilde{P}$. Do analytic solutions of $\tilde{P}$ aptly approximate solutions of $P$? Edit: I understand now that these last two lines are not very well formulated. Perhaps, ignoring the $\tilde{P}$ issue, I should have just asked something like:

Given any $\epsilon>0$, does knowing the analytic solutions (i.e. knowing their coefficients, possibly up to an arbitrarily large but finite number of digits) of $P$ give all the information about all solutions of $P$ up to $\epsilon$-approximation? Are there physically well known classes of equations $P$ in which this may not happen (perhaps even up to taking very regular approximations of the coefficients/parameters of $P$ itself)?
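To make the procedure in steps 1–5 concrete, here is a minimal numerical sketch of the kind of comparison I have in mind (the equation, the truncation order, and the interval are my own illustrative choices, not anything dictated by the question): take $P(f) = f'' + e^x f = 0$, whose coefficient $e^x$ is analytic, truncate the coefficient to $1+x$ to get $\tilde{P}(f) = f'' + (1+x)f = 0$, and compare the two solutions with the same initial data near $x=0$.

```python
# Minimal sketch: compare the solution of P(f) = f'' + e^x f = 0 with the
# solution of the truncated-coefficient equation Ptilde(f) = f'' + (1+x) f = 0,
# using the same initial data.  All choices here are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def P_rhs(x, y):
    # y = (f, f'); original equation: f'' = -exp(x) * f
    return [y[1], -np.exp(x) * y[0]]

def Ptilde_rhs(x, y):
    # truncated-coefficient equation: f'' = -(1 + x) * f
    return [y[1], -(1.0 + x) * y[0]]

x_eval = np.linspace(0.0, 0.5, 6)
y0 = [1.0, 0.0]                       # f(0) = 1, f'(0) = 0

sol_P = solve_ivp(P_rhs, (0.0, 0.5), y0, t_eval=x_eval, rtol=1e-10, atol=1e-12)
sol_Pt = solve_ivp(Ptilde_rhs, (0.0, 0.5), y0, t_eval=x_eval, rtol=1e-10, atol=1e-12)

for x, fP, fPt in zip(x_eval, sol_P.y[0], sol_Pt.y[0]):
    print(f"x={x:.1f}  f_P={fP:.8f}  f_Ptilde={fPt:.8f}  diff={abs(fP - fPt):.2e}")
```

Of course this only probes one equation at one truncation order near one point; the question is whether something like this can be trusted in general.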

Best Answer

As a physicist "in nature" perhaps I can give a few examples that illustrate how non-analytic functions can appear in physics and counter the idea that physicists do not worry about the justification of these procedures.

Example 1 involves one of the most precise comparisons between experiment and theory known to physics, namely the g factor of the electron. The quantity g is a proportionality factor between the spin of the electron and its magnetic moment. Perturbation theory in QED gives a formula $$g-2= c_1 \alpha + c_2 \alpha^2 + c_3 \alpha^3 + \cdots $$ where the coefficients $c_i$ can be computed from $i$-loop Feynman diagrams and $\alpha=e^2/\hbar c \simeq 1/137$ is the fine structure constant. Including up to four-loop diagrams gives an expression for $g$ which agrees to one part in $10^{8}$ with experiment. Yet it is known that this perturbative series has zero radius of convergence. This is true quite generally in quantum field theory. Physicists do not ignore this; rather, they regard it as evidence that QFTs are not defined by their perturbation series but must also include non-perturbative effects, generally of the form $e^{-c/g^2}$ with $g$ a dimensionless coupling constant. Much effort has gone into understanding these non-perturbative effects in a variety of quantum field theories. Instanton effects in non-Abelian gauge theory are an important example of non-perturbative phenomena.
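A toy model of this phenomenon (my own illustration, not one of the QED computations mentioned above) is the Stieltjes-type integral $F(g)=\int_0^\infty e^{-t}/(1+gt)\,dt$, whose asymptotic expansion $\sum_n (-1)^n n!\, g^n$ has zero radius of convergence. Truncating it near the optimal order $n \sim 1/g$ nevertheless reproduces $F(g)$ to an error on the order of $e^{-1/g}$ (up to prefactors), which is exactly the kind of non-perturbative term the series cannot see.

```python
# Toy illustration: a divergent asymptotic series that still approximates its
# function extremely well when truncated at the optimal order.
#   F(g) = \int_0^\infty e^{-t} / (1 + g t) dt  ~  sum_n (-1)^n n! g^n
import math
from scipy.integrate import quad

g = 0.05
exact, _ = quad(lambda t: math.exp(-t) / (1.0 + g * t), 0.0, math.inf)

partial = 0.0
for n in range(0, 41):
    partial += (-1) ** n * math.factorial(n) * g ** n
    if n in (5, 10, 20, 30, 40):
        print(f"order {n:2d}: partial sum = {partial:+.10f}, error = {abs(partial - exact):.2e}")

print(f"exact integral = {exact:.10f}")
print(f"e^(-1/g) scale = {math.exp(-1.0 / g):.2e}")
```

Running this for $g=0.05$ shows the error shrinking until roughly order $1/g=20$ and then growing again as the factorial growth of the coefficients takes over.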

Example 2 involves the Hydrogen atom in an electric field of magnitude $E$, aka the Stark effect. One can compute the shift in the energy eigenvalues of the Hydrogen atom Hamiltonian due to the applied electric field as a power series in $E$ using perturbation theory, and again one finds excellent agreement with experiment. One can also prove that this series has zero radius of convergence. In fact, the Hamiltonian is not bounded from below and does not have any normalizable energy eigenstates. The physics of this situation explains what is going on. The electron can tunnel through the potential barrier and escape from being bound to the nucleus of the Hydrogen atom, but for electric fields of reasonable size the lifetime of these states exceeds the age of the universe. The perturbation theory does not converge because there are no energy eigenstates to converge to, but it still provides an excellent approximation to the energies measured experimentally, because the experiments are done on a time scale which is very short compared to the lifetime of the metastable state.
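To put a very rough number on that last claim, here is a back-of-the-envelope sketch (my own estimate, not a calculation from the Stark-effect literature) using the standard leading-order semiclassical ionization rate for the hydrogen ground state, $w \approx (4/F)\,e^{-2/(3F)}$ in atomic units; the prefactor is only indicative, but the exponential, which is itself a non-analytic function of the field at $F=0$, dominates everything.

```python
# Rough order-of-magnitude estimate of the Stark lifetime of ground-state
# hydrogen in a laboratory-scale field, using the leading-order semiclassical
# rate w ~ (4/F) * exp(-2/(3F)) in atomic units (prefactor indicative only).
# Work with log10 throughout to avoid overflow.
import math

FIELD_AU_V_PER_M = 5.142e11        # atomic unit of electric field, V/m
TIME_AU_S = 2.419e-17              # atomic unit of time, s
AGE_OF_UNIVERSE_S = 4.35e17        # roughly 13.8 Gyr in seconds

E_lab = 1.0e7                      # an illustrative laboratory field, V/m
F = E_lab / FIELD_AU_V_PER_M       # field strength in atomic units

log10_rate_au = math.log10(4.0 / F) - (2.0 / (3.0 * F)) / math.log(10.0)
log10_lifetime_s = -log10_rate_au + math.log10(TIME_AU_S)

print(f"field in atomic units:       F ~ {F:.2e}")
print(f"log10(lifetime / seconds):   ~ {log10_lifetime_s:.0f}")
print(f"log10(age of universe / s):  ~ {math.log10(AGE_OF_UNIVERSE_S):.1f}")
```

For a field of $10^7\ \mathrm{V/m}$ the estimated lifetime exceeds the age of the universe by more than ten thousand orders of magnitude, so the absence of true bound states is completely invisible on any experimental time scale.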

So I would say that at least in these examples there is a very nice interplay between the physics and the mathematics. The lack of analyticity has a clear physical interpretation, and this is something that is understood by physicists. Of course I'm sure there are other examples where such approximations are made without a clear physical justification, but this just means that one should understand the physics better.