> $f(x_0) = \sum\limits_{n=1}^\infty (-1)^{n+1} \cdot \frac{x^n}{n} + r_n$
That should say
$$f(x)=\sum_{n=1}^k (-1)^{n+1} \cdot \frac{x^n}{n} + r_k(x),$$
where $r_k$ is the error term of the $k^\text{th}$ partial sum. You want to use estimates to show that the error term goes to $0$ as $k$ goes to $\infty$, which will justify convergence of the series to $f(x)=\log(1+x)$.
Edit: I've struck through part of my answer that relied on a wrong estimate of the derivatives, as pointed out by Robert Pollack. With the missing $k!$ term, the estimate only works on $[-\frac{1}{2},1)$.
Added: To make this answer a little more useful, I decided to look up a correct method. Spivak in his book Calculus (3rd Edition, page 423) uses the formula
$$\frac{1}{1+t}=1-t+t^2-\cdots+(-1)^{n-1}t^{n-1}+\frac{(-1)^nt^n}{1+t}$$
in order to write the remainder as $r_n(x)=(-1)^n\int_0^x\frac{t^n}{1+t}\,dt$. The estimate $\int_0^x\frac{t^n}{1+t}\,dt\leq\int_0^xt^n\,dt=\frac{x^{n+1}}{n+1}$ holds when $x\geq0$, and the harder estimate
$\left|\int_0^x\frac{t^n}{1+t}\,dt\right|\leq\frac{|x|^{n+1}}{(1+x)(n+1)}$, when $-1\lt x\leq0$,
is given as Problem 11 on page 430. Combining these, you can show that the sequence of remainders converges uniformly to $0$ on $[-r,1]$ for each $r\in(0,1)$.
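These remainder formulas are easy to sanity-check numerically. Here is a quick sketch (the helper names are my own) that computes the exact remainder $r_n(x)=\log(1+x)-\sum_{k=1}^n(-1)^{k+1}\frac{x^k}{k}$ and checks it against the two estimates above, with a small cushion for floating-point rounding:

```python
import math

def log_partial_sum(x, n):
    """n-th partial sum of the alternating series for log(1+x)."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n + 1))

def log_remainder(x, n):
    """Exact remainder r_n(x) = log(1+x) minus the n-th partial sum."""
    return math.log(1 + x) - log_partial_sum(x, n)

n = 20
# For 0 <= x <= 1:  |r_n(x)| <= x^(n+1)/(n+1).
for x in (0.3, 0.9, 1.0):
    assert abs(log_remainder(x, n)) <= x ** (n + 1) / (n + 1) + 1e-15

# For -1 < x <= 0:  |r_n(x)| <= |x|^(n+1) / ((1+x)(n+1)).
for x in (-0.3, -0.7):
    assert abs(log_remainder(x, n)) <= abs(x) ** (n + 1) / ((1 + x) * (n + 1)) + 1e-15

print("both estimates hold")
```

Note that at $x=1$ the first bound becomes $\frac{1}{n+1}$, the familiar (and slow) alternating-series rate for the convergence to $\log 2$.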
<s>Lagrange's form of the error term can be used to do this. The estimates, which follow from Taylor's theorem, are also found on Wikipedia. In this case, if $0\lt r\lt 1$, then $|f^{(k+1)}(x)|\leq \frac{1}{(1-r)^{k+1}}$ whenever $x\geq-r$, so you have the estimate $|r_k(x)|\leq \frac{r^{k+1}}{(1-r)^{k+1}}\frac{1}{(k+1)!}$ for all $x$ in $(-r,r)$, which you can show goes to $0$ (because $(k+1)!$ grows faster than the exponential $\left(\frac{r}{1-r}\right)^{k+1}$), thus showing that the series converges uniformly to $\log(1+x)$ on $(-r,r)$. Since $r$ was arbitrary, this shows that the series converges on $(-1,1)$, and the convergence is uniform on compact subintervals.</s>
This is notationally a little different from the most common version of the Lagrange form of the remainder. For one thing, $R_n(x)$ is usually the remainder when we truncate immediately just after the $(x-a)^n$ term. In this version, $R_n(x)$ seems to denote the remainder when we truncate just after the $(x-a)^{n-1}$ term. No problem, but it may lead to confusion if one looks at other sources.
You ask about the $\vartheta$. Writing a bare $\vartheta$ is convenient, but potentially misleading: $\vartheta$ is actually a function of $n$ and $x$. Typically, however, we know little about $\vartheta(n,x)$ beyond the fact that it lies between $0$ and $1$. Note that to say that $\vartheta$ is between $0$ and $1$ is precisely the same as saying that $a+\vartheta(x-a)$ is between $a$ and $x$. A more common version uses $f^{(n)}(\xi)$, where $\xi=\xi(n,x)$ is between $a$ and $x$.
As to the intuition, I will not say anything except to note that in the case $n=1$ (so we are truncating at the constant term) the error is $(x-a)f'(\xi)$ for some $\xi$ between $a$ and $x$. This is just the Mean Value Theorem. The Lagrange formula for the remainder is an extended version of the Mean Value Theorem, providing, for $n\gt 1$, a refined estimate of $f(x)$ that takes higher derivatives into account.
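To see the $n=1$ case concretely, here is a small numeric sketch (my own illustration, not from the text above): taking $f(x)=e^x$ and $a=0$, the error formula $e^x-1=x\,e^{\xi}$ can be solved explicitly for $\xi$, and the solution always lands strictly between $0$ and $x$:

```python
import math

# n = 1 case of the Lagrange remainder for f(x) = e^x, a = 0:
# e^x - f(0) = (x - 0) f'(xi), i.e. e^x - 1 = x * e^xi.
# Solving for xi gives xi = log((e^x - 1) / x).
for x in (0.5, 2.0, -1.0):
    xi = math.log((math.exp(x) - 1) / x)
    # the Mean Value Theorem promises xi strictly between 0 and x
    assert min(0, x) < xi < max(0, x)

print("xi lies between 0 and x in every case")
```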
As I mentioned in a comment, in the discussion of $e^x$ they are taking $a=0$, so they are using the Maclaurin polynomials for $e^x$. This accounts for the seeming disappearance of $a$.
The limit assertion is less precise than it ought to be. Here is what should be said. If $x$ is positive, then $e^{\vartheta x}\le e^x$, since $\vartheta\lt 1$. If $x\lt 0$, then $e^{\vartheta x}\le 1$.
So if we let $m_x=\max(e^x,1)$, we have
$$|R_n(x)|\le m_x \frac{|x|^n}{n!}.$$
Finally, let $n\to\infty$. For any fixed $x$, we have $\displaystyle\lim_{n\to\infty}\frac{|x|^n}{n!}=0$.
So for any fixed $x$, the error $R_n(x)$ approaches $0$ as $n\to\infty$. Put in other terms, this says that the Maclaurin series for $e^x$ converges to $e^x$ for all $x$.
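A short numerical check of this bound (the helper names are mine; $R_n$ follows the convention above, truncating just after the $x^{n-1}$ term):

```python
import math

def maclaurin_exp(x, n):
    """Maclaurin polynomial of e^x truncated just after the x^(n-1) term."""
    return sum(x ** k / math.factorial(k) for k in range(n))

def bound(x, n):
    """m_x * |x|^n / n!  with  m_x = max(e^x, 1)."""
    return max(math.exp(x), 1.0) * abs(x) ** n / math.factorial(n)

for x in (3.0, -2.5):
    for n in (5, 10, 20):
        R = math.exp(x) - maclaurin_exp(x, n)
        assert abs(R) <= bound(x, n)

# for fixed x, the bound itself tends to 0 as n grows
assert bound(3.0, 60) < 1e-30

print("remainder bound verified")
```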
Best Answer
We have $f(x) = (1-x)^{-\alpha} $ where $\alpha = 1/2$.
Since $f^{(N+1)}(t)=\alpha(\alpha+1)\cdots(\alpha+N)(1-t)^{-\alpha-N-1}$, the Lagrange form of the remainder gives, for some $c$ between $0$ and $x$,
$$|E_{N}| = \left| \frac{\alpha(\alpha+1) \cdots (\alpha + N)\,x^{N+1}}{(N+1)!}(1-c)^{-\alpha - N -1}\right|\\ = \alpha|x||1-c|^{-\alpha-1}\left| \frac{(\alpha+1) \cdots (\alpha + N)}{(N+1)!}\left(\frac{x}{1-c}\right)^N\right|.$$
Consider
$$a_Nz^N = \frac{(\alpha+1) \cdots (\alpha + N)}{(N+1)!}\left(\frac{x}{1-c}\right)^N, \qquad z = \frac{x}{1-c}.$$
Since $0 < c < x < 1/2$, we have $1-c > 1/2 > x$, hence $|z| < 1$, and
$$\lim_{N\to \infty} \frac{|a_{N+1}z^{N+1}|}{|a_N z^N|} = \lim_{N\to \infty} \frac{\alpha + N + 1}{N+2}\,|z| = |z| < 1. $$
By the ratio test $\sum a_Nz^N$ converges and
$$\lim_{N \to \infty}|a_N z^N| = 0.$$
Hence $|E_{N}| \to 0$, and the Taylor series for $f$ converges to $f(x)$ for $0 < x < 1/2$.
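As a sanity check (my own sketch), one can compute partial sums of the Taylor series of $(1-x)^{-1/2}$, whose coefficient of $x^n$ is $\alpha(\alpha+1)\cdots(\alpha+n-1)/n!$, and watch the error shrink for $0 < x < 1/2$:

```python
ALPHA = 0.5  # f(x) = (1 - x)^(-1/2)

def binom_partial_sum(x, N):
    """Sum of the first N+1 Taylor terms of (1-x)^(-ALPHA) about 0.

    Coefficient of x^n is ALPHA*(ALPHA+1)*...*(ALPHA+n-1)/n!,
    built up by the recurrence c_{n+1} = c_n * (ALPHA + n) / (n + 1).
    """
    total, coeff = 0.0, 1.0
    for n in range(N + 1):
        total += coeff * x ** n
        coeff *= (ALPHA + n) / (n + 1)
    return total

x = 0.4  # any point with 0 < x < 1/2
exact = (1 - x) ** (-ALPHA)
errors = [abs(exact - binom_partial_sum(x, N)) for N in (5, 10, 20, 40)]

# the error |E_N| shrinks as N grows, down to roughly machine precision
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))
assert errors[-1] < 1e-12

print(errors)
```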