The last sum presented won't converge. Here is a simpler approach:
$$\sum_{k=0}^n \binom{n}{k}\frac{B_k(x)}{n-k+1} =
\sum_{k=0}^n \binom{n}{k}B_k(x) \int_0^1 u^{n-k} du = \int_0^1 B_n(x+u) du$$
where in the last step an interchange of $\sum$ and $\int$ has been performed (harmless, since the sum is finite), and the 'translation identity' has been used (see the wiki page). Shift the integral and you'll get
$$\sum_{k=0}^n \binom{n}{k}\frac{B_k(x)}{n-k+1}= \int_{x}^{x+1}B_n(u) \,du = x^n$$
where another identity from the wiki page has been used (which can be taken as a definition). If this is for a homework problem, then you'd probably want to prove the identities used in this proof.
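For what it's worth, the identity is easy to sanity-check with exact rational arithmetic in Python. (The helper names below are my own; the Bernoulli numbers are generated from the standard recurrence $\sum_{j=0}^{m}\binom{m+1}{j}B_j=0$, and $B_n(x)=\sum_k \binom{n}{k}B_k\,x^{n-k}$.)

```python
from fractions import Fraction as F
from math import comb

def bernoulli_numbers(n):
    # B_0..B_n via the recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0, m >= 1
    B = [F(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

def bernoulli_poly(n, x):
    # B_n(x) = sum_{k=0}^n C(n,k) B_k x^{n-k}
    B = bernoulli_numbers(n)
    return sum(comb(n, k) * B[k] * x ** (n - k) for k in range(n + 1))

# check sum_{k=0}^n C(n,k) B_k(x)/(n-k+1) = x^n at a sample rational point
n, x = 5, F(2, 3)
lhs = sum(comb(n, k) * bernoulli_poly(k, x) / F(n - k + 1) for k in range(n + 1))
print(lhs == x ** n)  # True
```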
ADDENDUM:
Here's a proof of the 'translation identity' using generating functions. (It's assumed the reader is familiar with the generating function for the Bernoulli polynomials.)
$$\quad (T) \quad \sum_{k=0}^n \binom{n}{k}\, u^{n-k} \,B_k(x) = B_n(x+u) $$
It's easy to see that
$$\quad (*) \quad \frac{t\,e^{t\,x}}{e^t-1} e^{t\,u} = \frac{t\,e^{t\,(x+u)}}{e^t-1}.$$
On the left-hand side (LHS), form the Cauchy product of the two series:
$$ LHS(*)=\sum_{k=0}^\infty \frac{t^k}{k!}B_k(x)\cdot \sum_{m=0}^\infty \frac{t^m}{m!}u^m = \sum_{n=0}^\infty \frac{t^n}{n!} \sum_{k=0}^n \binom{n}{k}\, u^{n-k} \,B_k(x).$$
On the RHS, use the generating function again. Formula (T) follows by equating the coefficients of $t^n.$
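Formula (T) can also be checked directly with exact rational arithmetic. (A self-contained Python sketch; the helper names are my own, with the Bernoulli numbers generated from the standard recurrence $\sum_{j=0}^{m}\binom{m+1}{j}B_j=0$.)

```python
from fractions import Fraction as F
from math import comb

def bernoulli_numbers(n):
    # B_0..B_n via the recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0, m >= 1
    B = [F(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

def bernoulli_poly(n, x):
    # B_n(x) = sum_{k=0}^n C(n,k) B_k x^{n-k}
    B = bernoulli_numbers(n)
    return sum(comb(n, k) * B[k] * x ** (n - k) for k in range(n + 1))

# translation identity (T): sum_k C(n,k) u^{n-k} B_k(x) = B_n(x+u)
n, x, u = 4, F(1, 2), F(1, 3)
lhs = sum(comb(n, k) * u ** (n - k) * bernoulli_poly(k, x) for k in range(n + 1))
print(lhs == bernoulli_poly(n, x + u))  # True
```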
"Normalized" in this context means "length $1$". So if you think of our sequences as infinite length vectors:
$$(a_1, a_2, \ldots ) \text{ and } (b_1, b_2, \ldots)$$
we can normalize them by looking at the new vectors
$$
\frac{(a_1, a_2, \ldots)}{\lVert (a_1, a_2, \ldots) \rVert}
\text{ and }
\frac{(b_1, b_2, \ldots)}{\lVert (b_1, b_2, \ldots) \rVert}.
$$
Here, intuitively, we divide by the length of our vector in order to make it have length $1$.
Now we recall $\lVert(a_1, a_2, \ldots)\rVert = \sqrt{\sum_{j=1}^\infty a_j^2}$, and we see that the new $\hat{a_k}$s are exactly entries of this new (normalized) vector. That is, $\hat{a_k} = \frac{a_k}{\lVert (a_1, a_2, \ldots) \rVert}$.
Notice this really does normalize our vectors in the sense that they end up having length $1$. This is what the author shows at the bottom of your screenshot:
$$
\lVert (\hat{a_1}, \hat{a_2}, \ldots) \rVert =
\left \lVert \frac{(a_1, a_2, \ldots)}{\lVert (a_1, a_2, \ldots) \rVert} \right \rVert =
\frac{\lVert (a_1, a_2, \ldots) \rVert}{\lVert (a_1, a_2, \ldots) \rVert} = 1.
$$
The author just writes this out explicitly (and uses the fact that $\sqrt{1} = 1$).
As for $\displaystyle \sum_j a_j$, that just means "sum over all the $j$". Life is too short to write $\displaystyle \sum_{j=1}^\infty$ all the time, so a lot of authors will just write
$\displaystyle \sum_j$ instead, and trust the reader to work out what the endpoints of the sum should be. For this particular sum, it's $1$ to $\infty$, since we're summing over every entry of our infinite vector.
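If it helps to see this concretely, here's the same normalization carried out on a toy finite vector (the infinite case works the same way, entrywise):

```python
import math

a = [3.0, 4.0]  # a toy finite "vector"; the text works with infinite sequences
norm = math.sqrt(sum(x * x for x in a))          # ||a|| = 5.0
a_hat = [x / norm for x in a]                    # the normalized entries
print(a_hat)                                     # [0.6, 0.8]
print(math.isclose(sum(x * x for x in a_hat), 1.0))  # True: length 1
```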
I hope this helps ^_^
I’d carry out the last step a bit differently. What the first part shows is that $S_n$ is the coefficient of $x^n$ in the product
$$(1-x)^n\cdot\frac1{(1-x)^{n+1}}\;,\tag{1}$$
something that is often written
$$S_n=[x^n]\left((1-x)^n\cdot\frac1{(1-x)^{n+1}}\right)$$
with the $[x^n]$ operator. Clearly, then,
$$\begin{align*} S_n&=[x^n]\left(\frac{(1-x)^n}{(1-x)^{n+1}}\right)\\ &=[x^n]\left(\frac1{1-x}\right)\\ &=[x^n]\sum_{k\ge 0}x^k\\ &=1\;. \end{align*}$$
The $n$ in $(1)$ really does depend on which $S_n$ we’re computing, but $(1)$ simplifies to $\frac1{1-x}$ for all $n$, so in the end we really are looking at one power series.
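As a sanity check, $S_n$ can be computed directly from the Cauchy product of the two series in $(1)$, using $[x^k](1-x)^n=(-1)^k\binom{n}{k}$ and $[x^m](1-x)^{-(n+1)}=\binom{n+m}{n}$. (A quick Python sketch; the function name is my own.)

```python
from math import comb

def S(n):
    # [x^n] of (1-x)^n * (1-x)^(-(n+1)) via the Cauchy product:
    # sum over k of [x^k](1-x)^n times [x^{n-k}](1-x)^{-(n+1)}
    return sum((-1) ** k * comb(n, k) * comb(2 * n - k, n) for k in range(n + 1))

print([S(n) for n in range(8)])  # [1, 1, 1, 1, 1, 1, 1, 1]
```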