Suppose $i$ and $j$ are indices which take values $1, \dots, m$, and for each $i$ and $j$, we have a number $a_{ij}$. Note that $(i, j) \in \{1, \dots, m\}\times\{1, \dots, m\}$. If we were to sum over all possible values of $(i, j)$ we would have
$$\sum_{(i, j) \in \{1, \dots, m\}\times\{1, \dots, m\}}a_{ij}$$
which could also be written as
$$\sum_{i \in \{1, \dots, m\}}\sum_{j\in\{1, \dots, m\}}a_{ij}$$
or more commonly
$$\sum_{i=1}^m\sum_{j=1}^ma_{ij}.$$
Sometimes, we are not interested in all possible pairs of indices, but only those pairs which satisfy some condition. In the example you are looking at, the pairs of indices of interest are $(i, j) \in \{1, \dots, m\}\times\{1, \dots, m\}$ such that $i \leq j$. One way to denote the sum over all such pairs of indices is
$$\sum_{\substack{(i, j) \in \{1, \dots, m\}\times\{1, \dots, m\}\\ i \leq j}}a_{ij}$$
but this is rather cumbersome. It would be much more helpful if we could write it as a double sum as above. To do this, note that we can list all suitable pairs of indices by first fixing $i \in \{1, \dots, m\}$ and then allowing $j$ to vary from $i$ to $m$ (as these are the only possible values of $j$ with $i \leq j$). Doing this, we obtain the double sum
$$\sum_{i=1}^m\sum_{j=i}^ma_{ij}.$$
Note, we could also have fixed $j \in \{1, \dots, m\}$ and then allowed $i$ to vary from $1$ to $j$ (as these are the only possible values of $i$ with $i \leq j$). Doing this, we obtain an alternative double sum
$$\sum_{j=1}^m\sum_{i=1}^ja_{ij}.$$
The notation that you are asking about is yet another way to express the sum. That is,
$$\mathop{\sum\sum}_{i \leq j}a_{ij} = \sum_{\substack{(i, j) \in \{1, \dots, m\}\times\{1, \dots, m\}\\ i \leq j}}a_{ij} = \sum_{i=1}^m\sum_{j=i}^ma_{ij} = \sum_{j=1}^m\sum_{i=1}^ja_{ij}.$$
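As a quick sanity check (a sketch I am adding, not part of the original answer), the equality of the three forms can be confirmed numerically for an arbitrary small $m \times m$ array of values $a_{ij}$:

```python
# Confirm that the three ways of summing a[i][j] over pairs (i, j)
# with i <= j agree, using an arbitrary m x m array of test values.
m = 4
a = [[10 * i + j for j in range(1, m + 1)] for i in range(1, m + 1)]

# Sum directly over the restricted index set {(i, j) : i <= j}.
restricted = sum(a[i - 1][j - 1]
                 for i in range(1, m + 1)
                 for j in range(1, m + 1)
                 if i <= j)

# Outer sum over i, inner sum over j from i to m.
by_rows = sum(sum(a[i - 1][j - 1] for j in range(i, m + 1))
              for i in range(1, m + 1))

# Outer sum over j, inner sum over i from 1 to j.
by_cols = sum(sum(a[i - 1][j - 1] for i in range(1, j + 1))
              for j in range(1, m + 1))

print(restricted, by_rows, by_cols)  # all three values coincide
```

All three expressions enumerate exactly the same set of pairs, just in a different order, so the totals must match.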
Bilinearity allows you to say
$$Cov(S_n,V_n) = Cov\left(\sum\limits_i X_i, \sum\limits_j X_jX_{j+1}\right) = \sum\limits_i \sum\limits_j Cov(X_i, X_jX_{j+1})$$
If you drop what you call the independent terms, i.e. those with $i\not= j$ and $i \not=j+1$, whose covariance is $0$ by independence, then this reduces to
$$Cov(S_n,V_n)= \sum\limits_i \left( Cov(X_i, X_i X_{i+1}) +Cov(X_i, X_{i-1} X_{i})\right)$$
with a slight adjustment in the sum when $i=1$ (there is no $X_0$ term) or $i=n$ (there is no $X_{n+1}$ term): overall you are adding up $2n-2$ covariance terms.
With your i.i.d. Bernoulli $X_i$, it seems $Cov(X_i, X_i X_{i+1}) = Cov(X_i, X_{i-1} X_{i})= p^2-p^3$ (indeed $E[X_i^2X_{i+1}] = E[X_i]E[X_{i+1}] = p^2$ while $E[X_i]E[X_iX_{i+1}] = p \cdot p^2 = p^3$), so overall $$Cov(S_n,V_n)= 2(n-1)p^2(1-p).$$
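To double-check the closed form (a sketch I am adding, not part of the original answer), one can compute $Cov(S_n, V_n)$ exactly for a small $n$ by enumerating all $2^n$ Bernoulli outcomes with `fractions.Fraction`, and compare it with $2(n-1)p^2(1-p)$:

```python
from fractions import Fraction
from itertools import product

def cov_sn_vn(n, p):
    """Exact Cov(S_n, V_n) for i.i.d. Bernoulli(p), where
    S_n = sum_i X_i and V_n = sum_{j=1}^{n-1} X_j X_{j+1}."""
    E_S = E_V = E_SV = Fraction(0)
    for xs in product((0, 1), repeat=n):
        # Probability of this particular 0/1 outcome.
        prob = Fraction(1)
        for x in xs:
            prob *= p if x == 1 else 1 - p
        s = sum(xs)
        v = sum(xs[j] * xs[j + 1] for j in range(n - 1))
        E_S += prob * s
        E_V += prob * v
        E_SV += prob * s * v
    return E_SV - E_S * E_V

n, p = 4, Fraction(1, 3)
exact = cov_sn_vn(n, p)
formula = 2 * (n - 1) * p**2 * (1 - p)
print(exact, formula)  # the two agree
```

Using exact rational arithmetic rather than floating point makes the comparison an equality test rather than a tolerance check.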
Best Answer
This question was answered, as part of another question I asked, by the accepted answer (by David K) in this post: https://math.stackexchange.com/a/3984564/333560.
To summarize his explanation, the single summation is used when only a specific sequence of values, corresponding to the summation indices, has a non-zero probability. Technically the other combinations of values exist (as in the double summation), but since their probability is zero we can omit them. This is the case when we formulate $N$ discrete random variables as a vector in $N$-dimensional space.