As I didn't really get a satisfactory answer (although thanks to user35071 for his link, and to those of you who commented), I have decided to answer my own question as best I can (if I've missed anything important, please let me know in the comments):
- Calculating closed-form summations from known expressions.
- Approximation of infinitesimal calculus (finite-difference methods, numerical solutions to PDEs and ODEs, etc.)
It is often the case that we are trying to express the summation of some expression $f(x)$ in closed form, and this can be tricky using standard techniques.
For instance, computation of $\sum{x^{2}\:\delta x}$ can be done by using two known formulae:
$$x^{n}=\sum_{k}{\left\{n \atop k\right\}x^{\underline{k}}} \text{ and } \sum{x^{\underline{m}}\:\delta x}=\frac{x^{\underline{m+1}}}{(m+1)}\tag{1}$$
where $\left\{n\atop k\right\}$ is read "$n$ subset $k$" and is a Stirling number of the second kind, and $x^{\underline{k}}=x(x-1)\cdots(x-k+1)$ is the falling-factorial function (this is also sometimes written with the Pochhammer symbol $(x)_{k}$, though be aware that in analysis that notation usually denotes the rising factorial instead).
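To make this concrete, here is a minimal Python sketch (the recursive `stirling2` and the helper `falling` are my own names, not a library API) checking the expansion formula in $(1)$ for small $n$:

```python
def stirling2(n, k):
    """Stirling number of the second kind, via the standard recurrence
    {n k} = k * {n-1 k} + {n-1 k-1}."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(x, k):
    """Falling factorial: x * (x-1) * ... * (x-k+1)."""
    result = 1
    for i in range(k):
        result *= x - i
    return result

# Check x^n = sum_k {n k} * x^(underline k) for small n and integer x.
for n in range(6):
    for x in range(-3, 4):
        assert x**n == sum(stirling2(n, k) * falling(x, k) for k in range(n + 1))
```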
Expanding $x^{2}$ in terms of falling factorials, we get:
$$x^{2}=x^{\underline{2}}+x^{\underline{1}}\implies\sum{x^{2}\:\delta x}=\sum{x^{\underline{2}}\:\delta x}+\sum{x^{\underline{1}}\:\delta x}$$
Using our known formulae in $(1)$, we get:
$$\sum{x^{2}\:\delta x}=\frac{x^{\underline{3}}}{3}+\frac{x^{\underline{2}}}{2}=\frac{x(x-1)(x-2)}{3}+\frac{x(x-1)}{2}=\frac{x(x-1)(2x-1)}{6}$$
This corresponds to the standard closed form for $\sum_{k=1}^{n}{k^{2}}$: evaluating the indefinite sum between the limits $x=0$ and $x=n+1$ gives $\frac{n(n+1)(2n+1)}{6}$.
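As a quick sanity check, here is a minimal Python sketch (the function name `F` is mine) confirming that evaluating this indefinite sum between limits reproduces the direct sum, using exact rational arithmetic:

```python
from fractions import Fraction

def F(x):
    """The indefinite sum found above: x(x-1)(x-2)/3 + x(x-1)/2."""
    return Fraction(x * (x - 1) * (x - 2), 3) + Fraction(x * (x - 1), 2)

# Summing x^2 over x = 0..n means evaluating F between the limits 0 and n+1.
for n in range(1, 50):
    assert F(n + 1) - F(0) == sum(k * k for k in range(1, n + 1))
    assert F(n + 1) == Fraction(n * (n + 1) * (2 * n + 1), 6)
```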
If we don't restrict ourselves to integer finite differences, then we can also use them to numerically approximate derivatives and integrals of continuous functions. There are several types of difference often used in numerical approximations to calculus:
$$\Delta_{h}[f](x)=f(x+h)-f(x) \tag{2}$$
$$\nabla_{h}[f](x)=f(x)-f(x-h) \tag{3}$$
$$\delta_{h}[f](x)=f\left(x+\frac{h}{2}\right)-f\left(x-\frac{h}{2}\right)\tag{4}$$
where $(2)$ is called the forward difference, $(3)$ the backward difference, and $(4)$ the central difference. These can be used to approximate the derivative using the following formulae:
$$f'(x)\approx\frac{\Delta_{h}[f](x)}{h}\approx\frac{\nabla_{h}[f](x)}{h}\approx\frac{\delta_{h}[f](x)}{h} \tag{5}$$
These approximations are often used when a derivative cannot be evaluated analytically.
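For illustration, here is a minimal Python sketch (function names are my own) comparing the three difference quotients in $(5)$ against a derivative we know exactly:

```python
import math

def forward(f, x, h):
    """Forward difference quotient (2): (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

def backward(f, x, h):
    """Backward difference quotient (3): (f(x) - f(x-h)) / h."""
    return (f(x) - f(x - h)) / h

def central(f, x, h):
    """Central difference quotient (4): (f(x+h/2) - f(x-h/2)) / h."""
    return (f(x + h / 2) - f(x - h / 2)) / h

# Approximate d/dx sin(x) at x = 1; the exact value is cos(1).
x, h = 1.0, 1e-4
for name, d in (("forward", forward), ("backward", backward), ("central", central)):
    print(f"{name:8s} error: {abs(d(math.sin, x, h) - math.cos(x)):.2e}")
```

The central difference's error comes out several orders of magnitude smaller, reflecting its $O(h^{2})$ accuracy versus $O(h)$ for the one-sided differences.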
They didn’t just interchange the index variables in that first example: they exploited the fact that doing so does not change the sum. Thus, they were able to write
$$2S=\sum_{1\le j<k\le n}(a_k-a_j)(b_k-b_j)+\sum_{1\le k<j\le n}(a_j-a_k)(b_j-b_k)\;,$$
getting a sum in which every possible term of the form $(a_i-a_\ell)(b_i-b_\ell)$ with $1\le i,\ell\le n$ appears except those in which $i=\ell$.
It may help to think of this in matrix terms. For $1\le j,k\le n$ let $c_{j,k}=(a_k-a_j)(b_k-b_j)$, and let
$$C=\begin{bmatrix}c_{1,1}&\color{red}{c_{1,2}}&\color{red}{\ldots}&\color{red}{c_{1,n-1}}&\color{red}{c_{1,n}}\\
\color{blue}{c_{2,1}}&c_{2,2}&\color{red}{\ldots}&\color{red}{c_{2,n-1}}&\color{red}{c_{2,n}}\\
\color{blue}{\vdots}&\color{blue}{\vdots}&\ddots&\color{red}{\vdots}&\color{red}{\vdots}\\
\color{blue}{c_{n-1,1}}&\color{blue}{c_{n-1,2}}&\color{blue}{\ldots}&c_{n-1,n-1}&\color{red}{c_{n-1,n}}\\
\color{blue}{c_{n,1}}&\color{blue}{c_{n,2}}&\color{blue}{\ldots}&\color{blue}{c_{n,n-1}}&c_{n,n}
\end{bmatrix}\;;$$
then $S$ is the sum of the (red) entries above the main diagonal of $C$, and the sum with the indices interchanged is the sum of the (blue) entries below the main diagonal of $C$. In this very special case the matrix $C$ happens to be symmetric, so the two sums are equal. Nicer yet, the entries on the main diagonal are all $0$, so the sum of all the entries in $C$ is $2S$. Thus, $2S$ is just the sum of all possible products of the form $(a_k-a_j)(b_k-b_j)$ with $1\le j,k\le n$, which is very easy to compute after we rewrite $(a_k-a_j)(b_k-b_j)$ as $a_kb_k-a_kb_j-a_jb_k+a_jb_j$.
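If you want to convince yourself numerically, here is a minimal Python sketch (random data; variable names are mine) checking both the symmetry of $C$ and the $2S$ identity:

```python
import random

n = 6
a = [random.random() for _ in range(n)]
b = [random.random() for _ in range(n)]

# The red sum S over the upper triangle (1 <= j < k <= n, here 0-indexed) ...
S = sum((a[k] - a[j]) * (b[k] - b[j])
        for j in range(n) for k in range(j + 1, n))

# ... and the blue sum over the lower triangle, with the indices interchanged.
S_swapped = sum((a[j] - a[k]) * (b[j] - b[k])
                for k in range(n) for j in range(k + 1, n))

# The sum over the whole matrix, with each term expanded.
full = sum(a[k] * b[k] - a[k] * b[j] - a[j] * b[k] + a[j] * b[j]
           for j in range(n) for k in range(n))

assert abs(S - S_swapped) < 1e-12   # C is symmetric
assert abs(full - 2 * S) < 1e-12    # the diagonal is zero, so the total is 2S

# The expanded form collapses to something easy to compute directly:
easy = 2 * (n * sum(x * y for x, y in zip(a, b)) - sum(a) * sum(b))
assert abs(full - easy) < 1e-12
```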
What makes this work is the symmetry of the matrix $C$: this is one of the ‘important special cases’ mentioned in the sentence in the middle of page $36$. Learning to recognize them is to a great extent a matter of experience. In general, though, the first thing to do is see whether you can simply evaluate the summations in order as they’re written; if that doesn’t look promising, the next step is to see whether you can reverse the order of summation and get something nicer. Neither of these approaches looks very attractive in the problem above, so at that point you look for some other idea. Recognizing that a summation over $1\le j<k\le n$ or $1\le j\le k\le n$ is a summation over the upper half (give or take the diagonal) of an $n\times n$ matrix, you can reasonably look to see whether the whole matrix would be easier to work with and whether there’s an exploitable relationship between the upper and lower halves. Here both of those turn out to be the case.
Given $e_1=2$ and $e_{n+1}=e_1\cdots e_n+1$, we have:
$$\frac{1}{e_1}+\cdots+\frac{1}{e_n} = 1-\frac{1}{e_1\cdots e_n} = 1-\frac{1}{e_{n+1}-1}$$
This can easily be proved by induction. Just compute:
$$\frac{1}{e_1}+\cdots+\frac{1}{e_n} + \frac{1}{e_{n+1}} = 1-\frac{1}{e_{n+1}-1} + \frac{1}{e_{n+1}}$$
Combine the fractions and use that $e_{n+2}-1 = e_{n+1}(e_{n+1}-1)$:
$$1-\frac{1}{e_{n+1}-1} + \frac{1}{e_{n+1}} = 1-\frac{1}{e_{n+1}(e_{n+1}-1)} = 1-\frac{1}{e_{n+2}-1}$$
You get the middle term of your question by noting that $$e_1\cdots e_n = (e_1\cdots e_{n-1})e_n = (e_n-1)e_n$$
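As a quick check, here is a minimal Python sketch verifying the identity for the first few terms, using exact rational arithmetic (the $e_n$ grow far too fast for floating point):

```python
import math
from fractions import Fraction

# e_1 = 2, e_{n+1} = e_1 * ... * e_n + 1  (stored 0-indexed: e[0] = e_1).
e = [2]
for _ in range(7):
    e.append(math.prod(e) + 1)

# Check 1/e_1 + ... + 1/e_n == 1 - 1/(e_{n+1} - 1) for each prefix.
for n in range(1, len(e)):
    lhs = sum(Fraction(1, ei) for ei in e[:n])
    assert lhs == 1 - Fraction(1, e[n] - 1)
```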