It is not clear exactly what is being asked, but here is my take on what is going on in the proof and how it applies to $(\ast)$.
The idea of the proof is to use that $\sigma_n$ is Cauchy and that $s_n-s_{n-1}$ is small to estimate $s_n-\sigma_n$.
Start with a simple identity that isolates the sum of the $s_k$ for large $k$
$$
\sum_{k=m+1}^ns_k=\color{#C00000}{\sum_{k=0}^ns_k}-\color{#00A000}{\sum_{k=0}^ms_k}\tag{1}
$$
and rewrite the red and green sums in terms of $\sigma_n$ and $\sigma_m$
$$
\sum_{k=m+1}^ns_k=\color{#C00000}{(n+1)\sigma_n}-\color{#00A000}{(m+1)\sigma_m}\tag{2}
$$
This expresses the sum nicely in terms of the $\sigma$'s; since the left side is a sum of $n-m$ terms, we subtract it from a like number of copies of $s_n$ to get closer to the goal of $s_n-\sigma_n$.
Subtract both sides of $(2)$ from $\displaystyle\sum_{k=m+1}^ns_n=(n-m)s_n$ to exploit the small size of $s_n-s_{n-1}$
$$
\begin{align}
\sum_{k=m+1}^n(s_n-s_k)
&=(n-m)s_n-\Big[(n+1)\sigma_n-(m+1)\sigma_m\Big]\\
&=(n-m)s_n-\Big[(n-m)\sigma_n+(m+1)\sigma_n-(m+1)\sigma_m\Big]\\[8pt]
&=(n-m)(s_n-\sigma_n)-(m+1)(\sigma_n-\sigma_m)\tag{3}
\end{align}
$$
This gives the desired quantity, $s_n-\sigma_n$, as a sum of controllable terms: $s_n-s_k$ and $\sigma_n-\sigma_m$.
Add $(m+1)(\sigma_n-\sigma_m)$ to both sides to isolate $s_n-\sigma_n$
$$
(m+1)(\sigma_n-\sigma_m)+\sum_{k=m+1}^n(s_n-s_k)=(n-m)(s_n-\sigma_n)
$$
Divide both sides by $n-m$ to get $(\ast)$
$$
\frac{m+1}{n-m}(\sigma_n-\sigma_m)+\frac1{n-m}\sum_{k=m+1}^n(s_n-s_k)=(s_n-\sigma_n)\tag{$\ast$}
$$
For large $m$ and $n$, $\sigma_n-\sigma_m$ is small since $\sigma_n$ is Cauchy and $\displaystyle\sum_{k=m+1}^n(s_n-s_k)$ is small because $s_n-s_{n-1}$ is small. This is the general idea; the details are in the outline of the proof.
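As a sanity check (not part of the proof), $(\ast)$ is a purely algebraic identity and can be verified numerically. Below, $s_n=\sqrt{n}$ is an arbitrary illustrative choice, and $\sigma_n=\frac1{n+1}\sum_{k=0}^n s_k$ as above.

```python
# Numerical check of the algebraic identity (*).
# s_n = sqrt(n) is an arbitrary choice; sigma_n = (1/(n+1)) * sum_{k=0}^n s_k.
import math

def sigma(s, n):
    """Cesaro mean of s_0, ..., s_n."""
    return sum(s[:n + 1]) / (n + 1)

s = [math.sqrt(k) for k in range(101)]
m, n = 40, 100

lhs = ((m + 1) / (n - m)) * (sigma(s, n) - sigma(s, m)) \
      + (1 / (n - m)) * sum(s[n] - s[k] for k in range(m + 1, n + 1))
rhs = s[n] - sigma(s, n)

print(abs(lhs - rhs))  # essentially zero, up to floating-point error
```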
I don't believe you can use the converse of Stolz-Cesaro because, as discussed in the comments, the denominator is $n$ and $\frac{n}{n+1} \to 1$ as $n \to \infty$, so the hypothesis of the converse fails.
For an alternative, write $a_n = a + \epsilon_n$ where we have $\epsilon_n \to 0$ as $n \to \infty$. We then have
$$\sum_{j=1}^n a_j b_{n+1-j} = \underbrace{a\sum_{j=1}^nb_{n+1-j}}_{X_n} + \underbrace{\sum_{j=1}^n\epsilon_jb_{n+1-j}}_{Y_n}$$
Note that
$$\lim_{n \to \infty} X_n = \lim_{n \to \infty}a\sum_{j=1}^nb_{n+1-j} = \lim_{n \to \infty}a\sum_{j=1}^nb_{j} = aS$$
Given $\epsilon > 0$, since $\epsilon_n \to 0$ there exists $N \in \mathbb{N}$ such that $|\epsilon_n| < \epsilon$ for all $n > N$.
Thus, with $n > N$,
$$\tag{*}|Y_n| = \left|\sum_{j=1}^n\epsilon_jb_{n+1-j}\right| \leqslant \left|\sum_{j=1}^N\epsilon_jb_{n+1-j}\right|+\left|\sum_{j=N+1}^n\epsilon_jb_{n+1-j}\right|\\ \leqslant \left|\sum_{j=1}^N\epsilon_jb_{n+1-j}\right|+\sum_{j=N+1}^n|\epsilon_j|\,|b_{n+1-j}|$$
For the second sum on the RHS of (*), since $b_n \geqslant 0$, we have
$$\sum_{j=N+1}^n|\epsilon_j|\,|b_{n+1-j}| = \sum_{j=N+1}^n|\epsilon_j|\,b_{n+1-j} < \epsilon \sum_{j=N+1}^n\,b_{n+1-j} = \epsilon \sum_{j=1}^{n - N}\,b_{j} < \epsilon S$$
Hence,
$$\tag{**}|Y_n| < \left|\sum_{j=1}^N\epsilon_jb_{n+1-j}\right| + \epsilon S$$
Since $\sum b_n$ converges, we have $b_n \to 0$ as $n \to \infty$. Since $N$ is fixed, for the first sum on the RHS of (**) we get
$$\lim_{n \to \infty}\left|\sum_{j=1}^N\epsilon_jb_{n+1-j}\right| = 0 $$
Since $\epsilon >0$ can be arbitrarily small, it follows that $Y_n \to 0$ as $n \to \infty$, and
$$\lim_{n \to \infty}\sum_{j=1}^n a_j b_{n+1-j} = \lim_{n \to \infty} X_n + \lim_{n \to \infty} Y_n = aS$$
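To see the result concretely, here is a quick numerical sketch with the hypothetical choices $a_n = 2 + \frac1n$ (so $a = 2$) and $b_n = 2^{-n}$ (so $b_n \geqslant 0$ and $S = \sum b_n = 1$):

```python
# Hypothetical example sequences: a_n = 2 + 1/n (so a = 2), b_n = 2^(-n) (so S = 1).
def conv(n):
    """Compute sum_{j=1}^n a_j * b_{n+1-j}."""
    return sum((2 + 1 / j) * 2.0 ** -(n + 1 - j) for j in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, conv(n))  # tends toward a*S = 2 as n grows
```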
Best Answer
Your mistake is here:
> But $$ \lim_{n\to\infty}\frac{a_{n}}{b_{n}}=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}a_{k} $$
Actually, $$ \lim_{n\to\infty}\frac{a_{n}}{b_{n}}=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\frac 1 k=0.$$
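A quick numerical check of the corrected limit: the average of $\frac1k$ over $k=1,\dots,n$ behaves like $\frac{\ln n}{n}$, which tends to $0$.

```python
# Average of 1/k over k = 1..n; grows like ln(n)/n, so it tends to 0.
def avg(n):
    return sum(1 / k for k in range(1, n + 1)) / n

for n in (10, 10**3, 10**6):
    print(n, avg(n))  # decreases toward 0
```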