[Physics] Divergent Series

mathematical-physics, perturbation-theory, quantum-field-theory, regularization, renormalization

Why is it that divergent series make sense?

Specifically, by basic calculus a sum such as $1 - 1 + 1 - \cdots$ describes a divergent series (where divergent := non-convergent sequence of partial sums), but, as described in these videos, one can use Euler, Borel or generic summation to arrive at a value of $\tfrac{1}{2}$ for this sum.

The first apparent indication that this makes any sense is the claim that the summation machine 'really' works in the complex plane, so that, for a sum like $1 + 2 + 4 + 8 + … = -1$ there is some process like this:

[image illustrating the summation process]

going on, on the unit circle in the complex plane, where those lines go all the way back to $-1$, explaining why it gives that sum.

The claim seems to be that a divergent series is just a non-convergent way of representing a function, so we only need to change the way we express it, e.g. in the way one analytically continues the Gamma function to a larger domain. I can't make sense of this claim from the picture above, & I don't see (or can't find or think of) any justification for believing it.

Furthermore there is this notion of Cesaro summation one uses in Fourier theory. For some reason one can construct these Cesaro sums to get some form of convergence even when the Fourier series itself diverges. Where in the world does such a notion come from? It just seems as though you are defining something to work when it doesn't; obviously I'm missing something.

I've really tried to find some answers to these questions but I can't. Typical of the explanations is this summary of Hardy's divergent series book, just plowing right ahead without explaining or justifying the concepts.

I really need some general intuition for these things to begin working with perturbation series expansions in quantum mechanics & quantum field theory, finding 'the real' explanation for WKB theory, etc. It would be so great if somebody could just say something that links all these threads together.

Best Answer

I've been thinking about divergent series on and off, so maybe I could chip in.

Consider a sequence of numbers $\{a_n\}$ (in an arbitrary field, e.g. the real numbers). You may ask about the sum of the terms of this sequence, i.e. $\sum a_n$. If the limit $\lim_{N\rightarrow\infty} \sum^N |a_n|$ exists then the series is absolutely convergent and you may talk about the sum $\sum a_n$. If that limit does not exist but $\lim_{N\rightarrow\infty} \sum^N a_n$ does, then the series is conditionally convergent, and as (I assume) Carl Witthoft commented above, there is a theorem stating that you may sum the terms in a different order and get a different result for the limit. In fact, by judiciously rearranging you may get any number you desire. I mention this because, although divergent series may seem most bizarre, only the absolutely convergent series match our intuition of summing terms one by one and getting nearer a limit with each term. So we may ask about making sense of series in general.
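(A small numerical sketch, not part of the original answer: in Python one can watch the rearrangement theorem in action on the conditionally convergent alternating harmonic series $1-\tfrac12+\tfrac13-\cdots=\ln 2$. The greedy strategy and the target value $1.5$ below are arbitrary choices of mine.)

```python
import math

def rearranged_partial_sums(target, n_terms=100_000):
    """Greedy rearrangement of the alternating harmonic series
    sum (-1)^(n+1)/n: add positive terms 1, 1/3, 1/5, ... while the
    running sum is below `target`, otherwise add negative terms
    -1/2, -1/4, ...  The partial sums of this rearrangement tend to
    `target`, illustrating the Riemann rearrangement theorem."""
    pos, neg = 1, 2          # next odd / even denominator to use
    s = 0.0
    for _ in range(n_terms):
        if s < target:
            s += 1.0 / pos
            pos += 2
        else:
            s -= 1.0 / neg
            neg += 2
    return s

print("usual order       :", sum((-1) ** (n + 1) / n for n in range(1, 100_001)))
print("ln 2              :", math.log(2))
print("rearranged, -> 1.5:", rearranged_partial_sums(1.5))
```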

As G. H. Hardy's "Divergent Series" explains on page 6, the trick is to understand that our usual notion of the sum of a series is just one way to define something we call a "sum". In other words, given a sequence we have a map that attributes a number to that sequence; the "sum map" is the trivial operation of summing the terms when the series is absolutely convergent. The idea behind divergent series is to realize that this map, although in a sense canonical, is not unique.

To be more specific, consider the space $V$ of all sequences together with operations of addition and scalar multiplication (given two sequences $\{a_n\}$ and $\{b_n\}$ and a number $\lambda$ we define addition by $\{a_n\}+\{b_n\}=\{a_n+b_n\}$ and scalar multiplication by $\lambda\cdot\{a_n\}=\{\lambda a_n\}$). With these operations the space of sequences is an (infinite-dimensional) vector space (there is a good question about coordinates, since I am assuming one specific basis here, but let's not worry about that now). The absolutely convergent series can be seen to form a subspace $U$ of $V$, and the "sum" $S$ is just a linear functional on this subspace, $S:U\rightarrow\mathbb{R}$. The problem is that this functional is not defined anywhere else.

So to make sense of divergent series one asks whether there is another map $S'$, defined on a subspace $W$ with $U\subset W$, that reduces to the usual sum when restricted to $U$, i.e. is there an $S'$ such that $S'|_U=S$?

And in fact many such functionals do exist. Each of them we call a different "summation method", in the sense that it attributes a value to a sequence and, when that sequence corresponds to a convergent series, gives the usual value.

For instance, Cesaro summation says that maybe the series does not converge because it keeps oscillating (like the $1-1+1-\cdots$ you mentioned). Then we could take the arithmetic mean of the partial sums $s_n=a_1+\cdots+a_n$ and define the Cesaro sum of the sequence $\{a_n\}$ as $\lim_{N\rightarrow\infty}\frac{1}{N}\sum^N s_n$. It is not hard to see that this gives the usual result for convergent series (although it is a bit obscure that it is a linear map), but it also assigns $1/2$ to the alternating series $1-1+1-\cdots$. So one must give a new meaning to the word "sum", and then you can get new results. For instance, Fejer's theorem roughly states that (under mild conditions) the Fourier series of a function may not be strictly convergent, but it is always summable in the sense of Cesaro. So it tells you that the worst divergence that appears in Fourier series is of the oscillating type, i.e. the series never diverges to $\pm\infty$. Furthermore, by Cesaro summing you can tell around which value the series oscillates. But this does not "sum" the series in the sense of making it convergent in the usual sense.
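(As a quick numerical illustration, again my own and not from the original answer: averaging the partial sums assigns $1/2$ to $1-1+1-\cdots$ while reproducing the ordinary sum $2$ for the convergent series $\sum 2^{-n}$, which is exactly the "extension of the usual sum" property described above.)

```python
import numpy as np

def cesaro_value(terms):
    """Cesaro mean of a finite chunk of a series: the average
    (1/N) * (s_1 + ... + s_N) of the partial sums s_n."""
    partial_sums = np.cumsum(terms)
    return partial_sums.mean()

N = 100_000
alternating = np.array([(-1) ** n for n in range(N)], dtype=float)  # 1 - 1 + 1 - ...
geometric   = np.array([0.5 ** n for n in range(N)])                # converges to 2

print("Cesaro value of 1 - 1 + 1 - ... :", cesaro_value(alternating))  # ~ 0.5
print("Cesaro value of sum 1/2^n       :", cesaro_value(geometric))    # ~ 2.0
```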

Other functionals come from analytic continuation. The most obvious example is the geometric series $\sum x^n=\frac{1}{1-x}$. It only converges for $|x|<1$, but one may use analytic continuation to get around the singularity at $x=1$ and say that $1+2+4+\cdots=\frac{1}{1-2}=-1$. In this case it is easier to picture the linear-algebra idea. The space $U$ consists of all geometric series with $|x|<1$ and $S$ is the usual sum. Now we introduce a functional $S':W\rightarrow\mathbb{R},\ x\mapsto\frac{1}{1-x}$, so that $W$ now contains every geometric series except the one with $x=1$. A functional which reduces to this for the geometric series but also works for other power series is Abel summation.
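(A hedged sketch of both ideas, my own example: the partial sums of $1+2+4+\cdots$ blow up, while the continued function $1/(1-x)$ at $x=2$ gives $-1$; and the Abel value of $\sum a_n$, namely $\lim_{x\to 1^-}\sum a_n x^n$, assigns $1/2$ to $1-1+1-\cdots$. The function names and the sample values of $x$ are my choices.)

```python
def geometric_partial_sums(x, n_terms=10):
    """Ordinary partial sums of sum_{n>=0} x^n."""
    s, term, out = 0.0, 1.0, []
    for _ in range(n_terms):
        s += term
        term *= x
        out.append(s)
    return out

# Partial sums at x = 2 diverge...
print("partial sums at x=2:", geometric_partial_sums(2))
# ...but the analytically continued function 1/(1-x) is finite there.
print("1/(1-x) at x=2     :", 1.0 / (1.0 - 2.0))   # -> -1.0

def abel_value(a, x, n_terms=100_000):
    """Brute-force evaluation of sum a(n) * x^n for |x| < 1; the Abel
    value of sum a(n) is the limit of this as x -> 1 from below."""
    return sum(a(n) * x ** n for n in range(n_terms))

for x in (0.9, 0.99, 0.999):
    val = abel_value(lambda n: (-1) ** n, x)
    print(f"x = {x}: sum (-1)^n x^n = {val:.6f}")   # tends to 0.5
```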

So the idea is not really to "sum" the series (only absolutely convergent series sum in the usual sense) but to redefine the notion of sum by generalizing the concept, and then to use this different notion to attribute a finite value to the series through the corresponding sequence. This finite value should tell you something about the series, like the Cesaro sum telling you around which value the series oscillates, or the Abel sum reconstructing the function that generated the series. Summation methods are thus able to extract information from divergent series, and this is how to make sense of them.

With respect to the physics, it is important to stress that perturbative series in quantum field theory are (generally) divergent, but neither renormalization nor regularization has to do (fundamentally) with "summing" divergent series (zeta regularization is one technique among others, useful but not mandatory). Rather, what happens is that one sometimes gets an asymptotic series in perturbation theory. In general there are different functions with the same asymptotic series, but with supplementary information it may, or may not, happen that one can uniquely find the function with that specific asymptotic series. In that case one can use a summation method known as Borel summation to fully reconstruct the entire function. When such a thing happens in QFT it is normally associated with the presence of some sort of instanton. You can take a further look at S. Weinberg's "The Quantum Theory of Fields", Vol. 2, page 283. So the idea is to get non-perturbative information out of the perturbation series, not to tame some sort of infinity. Renormalization is something completely different (and much worse, since for starters it is highly non-linear).
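(A simplified toy illustration of the Borel idea, my own and not the QFT case from Weinberg: the Euler series $\sum_n (-1)^n n!\, x^n$ diverges for every $x\neq 0$, but its Borel transform $\sum (-t)^n = 1/(1+t)$ continues past $|t|=1$, and the Borel sum $\int_0^\infty e^{-t}/(1+xt)\,dt$ is a finite function whose asymptotic expansion reproduces the series. The sketch below assumes scipy is available; the choice $x=0.1$ and the truncation orders are arbitrary. Note how the partial sums first improve and then run away, while the Borel integral stays put.)

```python
import math
from scipy.integrate import quad

X = 0.1  # expansion parameter; the Euler series diverges for any x != 0

def partial_sum(x, N):
    """Truncated Euler series sum_{n=0}^{N} (-1)^n n! x^n."""
    return sum((-1) ** n * math.factorial(n) * x ** n for n in range(N + 1))

def borel_sum(x):
    """Borel sum: the Borel transform of a_n = (-1)^n n! is
    sum (-t)^n = 1/(1+t), continued to all t > -1, so the Borel sum
    of the series is the integral of e^{-t}/(1 + x t) over t >= 0."""
    value, _err = quad(lambda t: math.exp(-t) / (1.0 + x * t), 0, math.inf)
    return value

reference = borel_sum(X)
print(f"Borel sum at x={X}: {reference:.8f}")
for N in (2, 5, 10, 20, 30):
    ps = partial_sum(X, N)
    print(f"N={N:2d}: partial sum = {ps: .8f}, error = {abs(ps - reference):.2e}")
```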

For further information try finding a copy of Hardy's book (it's a gem), or, for the linear algebra babble, J. Boos, F. P. Cass, "Classical and Modern Methods in Summability".