They didn’t just interchange the index variables in that first example: they exploited the fact that doing so does not change the sum. Thus, they were able to write
$$2S=\sum_{1\le j<k\le n}(a_k-a_j)(b_k-b_j)+\sum_{1\le k<j\le n}(a_j-a_k)(b_j-b_k)\;,$$
getting a sum in which every possible term of the form $(a_i-a_\ell)(b_i-b_\ell)$ with $1\le i,\ell\le n$ appears except those in which $i=\ell$.
It may help to think of this in matrix terms. For $1\le j,k\le n$ let $c_{j,k}=(a_k-a_j)(b_k-b_j)$, and let
$$C=\begin{bmatrix}c_{1,1}&\color{red}{c_{1,2}}&\color{red}{\ldots}&\color{red}{c_{1,n-1}}&\color{red}{c_{1,n}}\\
\color{blue}{c_{2,1}}&c_{2,2}&\color{red}{\ldots}&\color{red}{c_{2,n-1}}&\color{red}{c_{2,n}}\\
\color{blue}{\vdots}&\color{blue}{\vdots}&\ddots&\color{red}{\vdots}&\color{red}{\vdots}\\
\color{blue}{c_{n-1,1}}&\color{blue}{c_{n-1,2}}&\color{blue}{\ldots}&c_{n-1,n-1}&\color{red}{c_{n-1,n}}\\
\color{blue}{c_{n,1}}&\color{blue}{c_{n,2}}&\color{blue}{\ldots}&\color{blue}{c_{n,n-1}}&c_{n,n}
\end{bmatrix}\;;$$
then $S$ is the sum of the (red) entries above the main diagonal of $C$, and the sum with the indices interchanged is the sum of the (blue) entries below the main diagonal of $C$. In this very special case the matrix $C$ happens to be symmetric, so the two sums are equal. Nicer yet, the entries on the main diagonal are all $0$, so the sum of all the entries in $C$ is $2S$. Thus, $2S$ is just the sum of all possible products of the form $(a_k-a_j)(b_k-b_j)$ with $1\le j,k\le n$, which is very easy to compute after we rewrite $(a_k-a_j)(b_k-b_j)$ as $a_kb_k-a_kb_j-a_jb_k+a_jb_j$.
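The matrix argument is easy to check numerically. Here is a small Python sketch (the function names `s_direct` and `s_via_full_matrix` are my own) comparing the double sum as originally written with the closed form obtained by summing all $n^2$ entries of $C$, expanding $(a_k-a_j)(b_k-b_j)=a_kb_k-a_kb_j-a_jb_k+a_jb_j$, and halving:

```python
import random

def s_direct(a, b):
    """S as written: the sum over 1 <= j < k <= n of (a_k - a_j)(b_k - b_j)."""
    n = len(a)
    return sum((a[k] - a[j]) * (b[k] - b[j])
               for j in range(n) for k in range(j + 1, n))

def s_via_full_matrix(a, b):
    """S recovered from the full matrix C.

    Summing every entry c_{j,k} over 1 <= j, k <= n and expanding each
    product gives 2S = 2*(n * sum(a_i b_i) - sum(a) * sum(b)),
    so S = n * sum(a_i b_i) - sum(a) * sum(b).
    """
    n = len(a)
    return n * sum(ai * bi for ai, bi in zip(a, b)) - sum(a) * sum(b)

# Compare the two on random integer sequences.
random.seed(0)
a = [random.randint(-5, 5) for _ in range(8)]
b = [random.randint(-5, 5) for _ in range(8)]
assert s_direct(a, b) == s_via_full_matrix(a, b)
```

The `s_via_full_matrix` form does $O(n)$ work instead of $O(n^2)$, which is the practical payoff of the symmetry trick.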
What makes this work is the symmetry of the matrix $C$: this is one of the ‘important special cases’ mentioned in the sentence in the middle of page $36$. Learning to recognize them is to a great extent a matter of experience. In general, though, the first thing to do is see whether you can simply evaluate the summations in order as they’re written; if that doesn’t look promising, the next step is to see whether you can reverse the order of summation and get something nicer. Neither of these approaches looks very attractive in the problem above, so at that point you look for some other idea. Recognizing that a summation over $1\le j<k\le n$ or $1\le j\le k\le n$ is a summation over the upper half (give or take the diagonal) of an $n\times n$ matrix, you can reasonably look to see whether the whole matrix would be easier to work with and whether there’s an exploitable relationship between the upper and lower halves. Here both of those turn out to be the case.
Best Answer
The author's contention seems to be that the two sums listed are equal, because looping over the values $n-k$ as $k$ runs over $0 \leq k \leq n$ visits exactly the same values as looping over $k$ directly. To see why this is true, we just need to check that the two index sets are the same. Then, by the commutativity of addition, the order in which the terms appear in the sum doesn't matter, so the two sums are equal.
Then it is a matter of seeing that $0 \leq k \leq n$ defines the same set of indices as $0 \leq n-k \leq n$, which we can check directly:

$k \in \mathbb{N}$ with $0 \leq k \leq n$ runs through the set of values $\{0, 1, 2, \ldots, n-1, n\}$.

For those same $k$, the values $n-k$ satisfy $0 \leq n-k \leq n$ and run through the set of values $\{n, n-1, n-2, \ldots, 1, 0\}$.
Clearly, these two sets are the same.
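A quick Python sketch of the substitution $k \mapsto n-k$ (the particular term `f` and the value of `n` here are arbitrary choices of mine, just for illustration):

```python
def f(k):
    # An arbitrary illustrative term; any function of k would do.
    return k * k + 3 * k

n = 10

# Summing f(k) for k = 0, 1, ..., n ...
forward = sum(f(k) for k in range(n + 1))

# ... visits the same terms as summing f(n - k), where n - k = n, n-1, ..., 0.
backward = sum(f(n - k) for k in range(n + 1))

assert forward == backward
```

The two loops add up the same multiset of terms in opposite orders, so by commutativity of addition the totals agree.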