They didn’t just interchange the index variables in that first example: they exploited the fact that doing so does not change the sum. Thus, they were able to write
$$2S=\sum_{1\le j<k\le n}(a_k-a_j)(b_k-b_j)+\sum_{1\le k<j\le n}(a_j-a_k)(b_j-b_k)\;,$$
getting a sum in which every possible term of the form $(a_i-a_\ell)(b_i-b_\ell)$ with $1\le i,\ell\le n$ appears except those in which $i=\ell$.
It may help to think of this in matrix terms. For $1\le j,k\le n$ let $c_{j,k}=(a_k-a_j)(b_k-b_j)$, and let
$$C=\begin{bmatrix}c_{1,1}&\color{red}{c_{1,2}}&\color{red}{\ldots}&\color{red}{c_{1,n-1}}&\color{red}{c_{1,n}}\\
\color{blue}{c_{2,1}}&c_{2,2}&\color{red}{\ldots}&\color{red}{c_{2,n-1}}&\color{red}{c_{2,n}}\\
\color{blue}{\vdots}&\color{blue}{\vdots}&\ddots&\color{red}{\vdots}&\color{red}{\vdots}\\
\color{blue}{c_{n-1,1}}&\color{blue}{c_{n-1,2}}&\color{blue}{\ldots}&c_{n-1,n-1}&\color{red}{c_{n-1,n}}\\
\color{blue}{c_{n,1}}&\color{blue}{c_{n,2}}&\color{blue}{\ldots}&\color{blue}{c_{n,n-1}}&c_{n,n}
\end{bmatrix}\;;$$
then $S$ is the sum of the (red) entries above the main diagonal of $C$, and the sum with the indices interchanged is the sum of the (blue) entries below the main diagonal of $C$. In this very special case the matrix $C$ happens to be symmetric, so the two sums are equal. Nicer yet, the entries on the main diagonal are all $0$, so the sum of all the entries in $C$ is $2S$. Thus, $2S$ is just the sum of all possible products of the form $(a_k-a_j)(b_k-b_j)$ with $1\le j,k\le n$, which is very easy to compute after we rewrite $(a_k-a_j)(b_k-b_j)$ as $a_kb_k-a_kb_j-a_jb_k+a_jb_j$.
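If you want to convince yourself of the symmetry concretely, here is a small Python sketch (with randomly chosen example values, purely for illustration) that builds the matrix $C$ and checks that it is symmetric, has a zero diagonal, and that the sum of all its entries is $2S$:

```python
import random

# Arbitrary example data (hypothetical values, not from the original problem).
n = 6
a = [random.random() for _ in range(n)]
b = [random.random() for _ in range(n)]

# c_{j,k} = (a_k - a_j)(b_k - b_j)
C = [[(a[k] - a[j]) * (b[k] - b[j]) for k in range(n)] for j in range(n)]

# C is symmetric: swapping j and k negates both factors, leaving the product unchanged.
assert all(C[j][k] == C[k][j] for j in range(n) for k in range(n))
# The diagonal entries are all zero.
assert all(C[j][j] == 0 for j in range(n))

# S is the sum of the entries above the main diagonal, so the total is 2S.
S = sum((a[k] - a[j]) * (b[k] - b[j]) for j in range(n) for k in range(j + 1, n))
total = sum(sum(row) for row in C)
assert abs(total - 2 * S) < 1e-9
```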
What makes this work is the symmetry of the matrix $C$: this is one of the ‘important special cases’ mentioned in the sentence in the middle of page $36$. Learning to recognize them is to a great extent a matter of experience.

In general, though, the first thing to do is see whether you can simply evaluate the summations in order as they’re written; if that doesn’t look promising, the next step is to see whether you can reverse the order of summation and get something nicer. Neither of these approaches looks very attractive in the problem above, so at that point you look for some other idea.

Recognizing that a summation over $1\le j<k\le n$ or $1\le j\le k\le n$ is a summation over the upper half (give or take the diagonal) of an $n\times n$ matrix, you can reasonably look to see whether the whole matrix would be easier to work with and whether there’s an exploitable relationship between the upper and lower halves. Here both of those turn out to be the case.
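As a quick numerical sanity check (a sketch with arbitrary example values), we can compare the direct sum over $1\le j<k\le n$ with the closed form $S=n\sum_k a_kb_k-\bigl(\sum_k a_k\bigr)\bigl(\sum_k b_k\bigr)$ that falls out of expanding $(a_k-a_j)(b_k-b_j)$ over the whole matrix:

```python
import random

# Arbitrary example data (illustrative only).
n = 7
a = [random.random() for _ in range(n)]
b = [random.random() for _ in range(n)]

# Direct evaluation of S over the upper triangle (j < k).
S = sum((a[k] - a[j]) * (b[k] - b[j])
        for j in range(n) for k in range(j + 1, n))

# Closed form: 2S = sum over all n^2 pairs (j, k) of
# a_k b_k - a_k b_j - a_j b_k + a_j b_j = 2n*sum(a_k b_k) - 2*(sum a)(sum b),
# so S = n*sum(a_k b_k) - (sum a)(sum b).
closed = n * sum(ak * bk for ak, bk in zip(a, b)) - sum(a) * sum(b)

assert abs(S - closed) < 1e-9
```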
It is "easy" once you write out the first few terms of the sum.
In your example, we are taking the derivative with respect to $y_j$.
Thus the only part of the expression we need to consider is the numerator,
$$
n\sum_j x_jy_j - \sum_i x_i \sum_jy_j
$$
this is basically
$$
n\left(x_1y_1 + x_2y_2 + \cdots + x_ny_n\right) - \left(x_1 + x_2 + \cdots + x_n\right)
\cdot \left(y_1 + y_2 + \cdots + y_n\right)
$$
so let's take the derivative with respect to the first variable, $y_1$:
$$
n\frac{\partial}{\partial y_1}\left(x_1y_1 + x_2y_2 + \cdots + x_ny_n\right) - \\
\frac{\partial}{\partial y_1}\left(x_1 + x_2 + \cdots + x_n\right)\left(y_1 + y_2 + \cdots + y_n\right)
\\= n\left(\frac{\partial}{\partial y_1}(x_1y_1) + \frac{\partial}{\partial y_1}(x_2y_2) + \cdots + \frac{\partial}{\partial y_1}(x_ny_n)\right) -\\
\left(x_1 + x_2 + \cdots + x_n\right)\frac{\partial}{\partial y_1}\left(y_1 + y_2 + \cdots + y_n\right)
$$
As we can see, the only terms that survive are
$$
nx_1\cdot \frac{\partial}{\partial y_1}y_1 - \left(x_1 + x_2 + \cdots + x_n\right)\cdot \frac{\partial}{\partial y_1}y_1 = nx_1 - \left(x_1 + x_2 + \cdots + x_n\right) = nx_1 -
\sum_j x_j
$$
with all other terms going to zero.
Rinse and repeat for each remaining variable and you will see the pattern: the only terms that survive in the derivative with respect to $y_k$ are
$$
nx_k - \sum_j x_j
$$
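To double-check the result, here is a short Python sketch (illustrative values only) comparing the claimed derivative $nx_k-\sum_j x_j$ against a central finite difference of the numerator $n\sum_j x_jy_j-\bigl(\sum_j x_j\bigr)\bigl(\sum_j y_j\bigr)$:

```python
def f(x, y):
    """The numerator: n*sum_j x_j*y_j - (sum_j x_j)*(sum_j y_j)."""
    n = len(x)
    return n * sum(xj * yj for xj, yj in zip(x, y)) - sum(x) * sum(y)

# Arbitrary example data (hypothetical values).
x = [1.0, 2.0, 3.0, 4.0]
y = [0.5, -1.0, 2.0, 0.25]
n, h = len(x), 1e-6

for k in range(n):
    # Central finite difference in the k-th y-variable.
    yp = y[:]; yp[k] += h
    ym = y[:]; ym[k] -= h
    numeric = (f(x, yp) - f(x, ym)) / (2 * h)
    # Claimed analytic derivative: n*x_k - sum_j x_j.
    analytic = n * x[k] - sum(x)
    assert abs(numeric - analytic) < 1e-6
```

Since the numerator is linear in each $y_k$, the finite difference agrees with the analytic derivative up to floating-point error.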
Suppose $i$ and $j$ are indices which take values $1, \dots, m$, and for each $i$ and $j$, we have a number $a_{ij}$. Note that $(i, j) \in \{1, \dots, m\}\times\{1, \dots, m\}$. If we were to sum over all possible values of $(i, j)$ we would have
$$\sum_{(i, j) \in \{1, \dots, m\}\times\{1, \dots, m\}}a_{ij}$$
which could also be written as
$$\sum_{i \in \{1, \dots, m\}}\sum_{j\in\{1, \dots, m\}}a_{ij}$$
or more commonly
$$\sum_{i=1}^m\sum_{j=1}^ma_{ij}.$$
Sometimes, we are not interested in all possible pairs of indices, but only those pairs which satisfy some condition. In the example you are looking at, the pairs of indices of interest are $(i, j) \in \{1, \dots, m\}\times\{1, \dots, m\}$ such that $i \leq j$. One way to denote the sum over all such pairs of indices is
$$\sum_{\substack{(i, j) \in \{1, \dots, m\}\times\{1, \dots, m\}\\ i \leq j}}a_{ij}$$
but this is rather cumbersome. It would be much more helpful if we could write it as a double sum as above. To do this, note that we can list all suitable pairs of indices, by first fixing $i \in \{1, \dots, m\}$ and then allowing $j$ to vary from $i$ to $m$ (as these are the only possible values of $j$ with $i \leq j$). Doing this, we obtain the double sum
$$\sum_{i=1}^m\sum_{j=i}^ma_{ij}.$$
Note, we could also have fixed $j \in \{1, \dots, m\}$ and then allowed $i$ to vary from $1$ to $j$ (as these are the only possible values of $i$ with $i \leq j$). Doing this, we obtain an alternative double sum
$$\sum_{j=1}^m\sum_{i=1}^ja_{ij}.$$
The notation that you are asking about is yet another way to express the sum. That is,
$$\mathop{\sum\sum}_{i \leq j}a_{ij} = \sum_{\substack{(i, j) \in \{1, \dots, m\}\times\{1, \dots, m\}\\ i \leq j}}a_{ij} = \sum_{i=1}^m\sum_{j=i}^ma_{ij} = \sum_{j=1}^m\sum_{i=1}^ja_{ij}.$$
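The equivalence of these three ways of writing the sum is easy to confirm numerically; here is a short Python sketch using an arbitrary $m\times m$ array of example values:

```python
# Hypothetical example data: a_{ij} = 10*i + j for 1 <= i, j <= m.
m = 5
a = [[10 * i + j for j in range(1, m + 1)] for i in range(1, m + 1)]

# Sum over all pairs (i, j) with i <= j.
pairs = sum(a[i - 1][j - 1] for i in range(1, m + 1)
            for j in range(1, m + 1) if i <= j)

# Fix i first, then let j run from i to m.
row_first = sum(a[i - 1][j - 1] for i in range(1, m + 1)
                for j in range(i, m + 1))

# Fix j first, then let i run from 1 to j.
col_first = sum(a[i - 1][j - 1] for j in range(1, m + 1)
                for i in range(1, j + 1))

assert pairs == row_first == col_first
```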