Suppose that we have started with $w$ white balls and $b$ black balls. Then
\begin{align*}
\mathbb{P}(w_n = w+k)
&= \binom{n}{k} \frac{w(w+1)\dots(w+k-1)b(b+1)\dots(b+n-k-1)}{(w+b)(w+b+1)\dots(w+b+n-1)} \\
&= \frac{1}{B(w, b)} \binom{n}{k} \frac{\Gamma(w+k)\Gamma(b+n-k)}{\Gamma(w+b+n)} \\
&= \frac{1}{B(w, b)} \frac{k^{w-1} (n-k)^{b-1}}{n^{w+b-1}} \frac{E_k(w)E_{n-k}(b)}{E_n(b+w)} ,
\end{align*}
where $\Gamma(\cdot)$ is the gamma function (so that each rising factorial above can be written as $a(a+1)\dots(a+m-1) = \Gamma(a+m)/\Gamma(a)$), $B(\alpha, \beta) = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}$ is the beta function, and
$$ E_n(z) := \frac{\Gamma(n+z)}{n!n^{z-1}}. $$
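As a quick numerical sanity check, the product form and the Gamma-function form of this pmf can be compared directly; the sketch below (the values $n=20$, $w=2$, $b=3$ and the function names are arbitrary illustrations) also confirms that the pmf sums to $1$.

```python
from math import comb, exp, lgamma

def pmf_product(n, k, w, b):
    """First line of the display: C(n,k) times rising factorials."""
    num = 1.0
    for i in range(k):
        num *= w + i          # w (w+1) ... (w+k-1)
    for i in range(n - k):
        num *= b + i          # b (b+1) ... (b+n-k-1)
    den = 1.0
    for i in range(n):
        den *= w + b + i      # (w+b) (w+b+1) ... (w+b+n-1)
    return comb(n, k) * num / den

def pmf_gamma(n, k, w, b):
    """Second line: (1/B(w,b)) C(n,k) Gamma(w+k) Gamma(b+n-k) / Gamma(w+b+n),
    evaluated in log-space via lgamma for numerical stability."""
    log_beta = lgamma(w) + lgamma(b) - lgamma(w + b)
    log_p = lgamma(w + k) + lgamma(b + n - k) - lgamma(w + b + n) - log_beta
    return comb(n, k) * exp(log_p)

n, w, b = 20, 2, 3
assert all(abs(pmf_product(n, k, w, b) - pmf_gamma(n, k, w, b)) < 1e-12
           for k in range(n + 1))
assert abs(sum(pmf_gamma(n, k, w, b) for k in range(n + 1)) - 1.0) < 1e-12
```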
Note that $E_n(z) \to 1$ as $n\to\infty$ (a consequence of Stirling's formula). So, if we write $p_k = k/n$, then the m.g.f. of the white-ball proportion $X_n = \frac{w_n}{w+b+n}$ is explicitly given by
\begin{align*}
\mathbb{E}[e^{\lambda X_n}]
= \frac{1}{B(w, b)} \sum_{k=0}^{n} \exp\biggl( \lambda \frac{p_k + w/n}{1 + (w+b)/n} \biggr) p_k^{w-1}(1 - p_k)^{b-1} \frac{1}{n} \cdot \frac{E_k(w)E_{n-k}(b)}{E_n(b+w)}.
\end{align*}
Letting $n \to \infty$, the $E$-factors tend to $1$ and the sum is a Riemann sum, so it converges to
\begin{align*}
\mathbb{E}[e^{\lambda X_{\infty}}]
= \frac{1}{B(w, b)} \int_{0}^{1} e^{\lambda p} p^{w-1}(1 - p)^{b-1} \, \mathrm{d}p.
\end{align*}
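This convergence can also be checked numerically. The sketch below (illustrative choices $\lambda = 1$, $w = 2$, $b = 3$, $n = 5000$; function names are mine) evaluates the exact finite-$n$ sum in log-space and compares it with a midpoint-rule quadrature of the limiting integral:

```python
from math import exp, lgamma

def mgf_finite(lam, n, w, b):
    """The exact finite-n m.g.f. sum displayed above, with
    X_n = (w + k)/(w + b + n); all factors are kept in log-space."""
    log_beta = lgamma(w) + lgamma(b) - lgamma(w + b)
    total = 0.0
    for k in range(n + 1):
        log_pk = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)  # log C(n,k)
                  + lgamma(w + k) + lgamma(b + n - k)
                  - lgamma(w + b + n) - log_beta)
        total += exp(log_pk + lam * (w + k) / (w + b + n))
    return total

def mgf_beta(lam, w, b, m=20_000):
    """Midpoint-rule approximation of
    (1/B(w,b)) * int_0^1 e^{lam p} p^(w-1) (1-p)^(b-1) dp."""
    beta = exp(lgamma(w) + lgamma(b) - lgamma(w + b))
    h = 1.0 / m
    total = 0.0
    for i in range(m):
        p = (i + 0.5) * h
        total += exp(lam * p) * p ** (w - 1) * (1 - p) ** (b - 1)
    return total * h / beta

print(mgf_finite(1.0, 5000, 2, 3))  # the two values should nearly agree
print(mgf_beta(1.0, 2, 3))
```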
From this integral representation of the m.g.f., we read off that the distribution of $X_{\infty}$ has the density
$$ f(p) = \frac{1}{B(w, b)} p^{w-1}(1 - p)^{b-1} \mathbf{1}_{(0,1)}(p), $$
proving that the limit distribution is $\operatorname{Beta}(w, b)$.
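And indeed, a short Monte Carlo sketch (parameters are illustrative only) shows the simulated white-ball fraction matching the first two moments of $\operatorname{Beta}(w, b)$:

```python
import random

def polya_fraction(w, b, n, rng):
    """Run one Pólya urn for n draws, starting from w white and b black
    balls; each draw adds one extra ball of the drawn color. Returns the
    final fraction of white balls."""
    white, black = w, b
    for _ in range(n):
        if rng.random() < white / (white + black):
            white += 1
        else:
            black += 1
    return white / (white + black)

rng = random.Random(0)
w, b, n, trials = 2, 3, 500, 5000
samples = [polya_fraction(w, b, n, rng) for _ in range(trials)]

mean = sum(samples) / trials
var = sum((x - mean) ** 2 for x in samples) / trials
print(f"empirical mean {mean:.3f}  vs  Beta mean {w / (w + b):.3f}")
print(f"empirical var  {var:.4f}  vs  Beta var  "
      f"{w * b / ((w + b) ** 2 * (w + b + 1)):.4f}")
```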
This looks right; indeed,
$$ X_{n+1} = X_n + R_{n+1},$$
where $R_{i}$ denotes the indicator variable that takes value $1$ if the color of the $i$-th ball extracted is red, and $0$ if it is green. By definition, the urn contains $X_n$ red and $n+2-X_n$ green balls after $n$ extractions. Hence the conditional probability, given $X_n$, of drawing a red ball on the $(n+1)$-th extraction (which is exactly the conditional expectation of $R_{n+1}$ given $X_n$ that we need) is $$\frac{X_n}{n+2}=Y_n.$$
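This is exactly the one-step computation behind the martingale property of $Y_n$ used further below. A short exact verification in Python (a sketch; rational arithmetic avoids any floating-point tolerance):

```python
from fractions import Fraction

# One-step check that Y_n = X_n / (n + 2) is a martingale: for every
# admissible red count r after n draws, the next draw is red with
# probability r/(n+2), so
#   E[Y_{n+1} | X_n = r] = (r/(n+2)) (r+1)/(n+3) + (1 - r/(n+2)) r/(n+3),
# which should collapse back to r/(n+2) = Y_n.
for n in range(50):
    for r in range(1, n + 2):  # after n draws, X_n ranges over 1, ..., n+1
        p_red = Fraction(r, n + 2)
        e_next = p_red * Fraction(r + 1, n + 3) + (1 - p_red) * Fraction(r, n + 3)
        assert e_next == Fraction(r, n + 2)
```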
We also observe that
$$ X_n = 1+\sum_{i=1}^n R_i. $$
Taking expectations, we get
$$ \mathbf{E}\left[ X_n\right] = 1+\sum_{i=1}^n \mathbf{E}\left[ R_i\right].$$
As all $R_i$ have the same distribution as $R_1$, we get:
$$ \mathbf{E}\left[ R_i\right] = \mathbf{E}\left[ R_1\right] =\frac{1}{2},$$
for all $i\in \{1,\ldots , n\}$.
Our indicator variables have the same distribution because the sequence $R_1,\ldots, R_n$ is exchangeable: its joint distribution
\begin{align*}
\mathbf{P}\left(R_1=c_1,\ldots, R_n=c_n\right)
&= \mathbf{P}\left(R_1=c_1\right)\mathbf{P}\left(R_2=c_2 \mid R_1=c_1\right) \cdots \mathbf{P}\left(R_n=c_n \mid R_1=c_1,\ldots, R_{n-1}=c_{n-1}\right) \\
&= \frac{c!\,(n-c)!}{(n+1)!}
\end{align*}
depends on $c_1,\ldots,c_n$ only through the number of red balls $c = c_1+\ldots + c_n$.
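The formula is easy to confirm exhaustively for small $n$; the sketch below (with the illustrative choice $n = 6$, i.e. only $2^6$ sequences) computes each sequence probability by the chain rule in exact arithmetic:

```python
from fractions import Fraction
from itertools import product
from math import factorial

def sequence_prob(colors):
    """Chain-rule probability of a draw sequence (1 = red, 0 = green),
    starting from 1 red and 1 green ball and adding one ball of the
    drawn color after each extraction."""
    red, green, p = 1, 1, Fraction(1)
    for c in colors:
        p *= Fraction(red if c else green, red + green)
        red, green = red + c, green + (1 - c)
    return p

n = 6
for colors in product((0, 1), repeat=n):
    c = sum(colors)  # number of red draws
    assert sequence_prob(colors) == Fraction(
        factorial(c) * factorial(n - c), factorial(n + 1))
```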
To conclude, we have $\mathbf{E}[X_n] = 1 + n/2 = (n+2)/2$, so
$$\mathbf{E}[Y_n] = \frac{\mathbf{E}[X_n]}{n+2} = \frac{1}{2}.$$
This can also be seen directly: we already know that $Y_n$ is a martingale, so
$$\mathbf{E}[Y_n]=\mathbf{E}[Y_{n-1}]=\ldots = \mathbf{E}[Y_1]=1/2.$$
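A quick Monte Carlo sketch (the sizes $n = 100$ and $20{,}000$ trials are illustrative) agrees:

```python
import random

def y_n(n, rng):
    """Simulate Y_n = X_n / (n + 2) for the urn started with 1 red, 1 green."""
    red, total = 1, 2
    for _ in range(n):
        if rng.random() < red / total:
            red += 1
        total += 1
    return red / total

rng = random.Random(1)
n, trials = 100, 20_000
print(sum(y_n(n, rng) for _ in range(trials)) / trials)  # approximately 0.5
```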
Best Answer
Wikipedia states a different version of Azuma's inequality: suppose $(X_n)_n$ is a martingale and $|X_k - X_{k-1}| \le c_k$ almost surely. Then for all $N \in \mathbb{N}$ and all $\varepsilon > 0$, $$P(|X_N - X_0| \ge \varepsilon) \le 2 \exp \left( \frac{-\varepsilon^2}{2 \sum_{k=1}^N c_k^2 }\right).$$
Are you sure you don't have access to a result like that? Anyway, I'll show how to proceed assuming Wikipedia's version, and you can decide whether it fits what you're allowed to assume.
In this particular martingale (the $X_k$ here is the proportion process, written $Y_k$ earlier in the thread), suppose $X_{k-1} = \frac{r}{k+1}$ for some $1 \le r \le k$. Then $X_{k}$ could be $\frac{r+1}{k+2}$ or $\frac{r}{k+2}$, depending on what color ball we pull on turn $k$. Thus we have $$\begin{align} |X_{k} - X_{k-1}| &\le \max\left \{ \frac{r+1}{k+2} - \frac{r}{k+1}, \frac{r}{k+1} - \frac{r}{k+2} \right\} \\ &= \max \left \{ \frac{k+1 - r}{(k+1)(k+2)}, \frac{r}{(k+1)(k+2)} \right \} \\ &\le \frac{k+1}{(k+1)(k+2)} \\ &= \frac{1}{k+2}. \end{align}$$ Thus we can choose $c_k = \frac{1}{k+2}$, which gives $$\sum_{k=1}^\infty c_k^2 = \sum_{j=3}^\infty \frac{1}{j^2} = \zeta(2) - 1 - \frac 1 4 = \frac{\pi^2}{6} - \frac{5}{4},$$
and so, for any $N$, using $\sum_{k=1}^N c_k^2 \le \sum_{k=1}^\infty c_k^2$ (which can only weaken the bound), we have $$\begin{align} P\left(\left|X_N - \frac 1 2\right| \ge \varepsilon \right) &\le 2 \exp \left( \frac{-\varepsilon^2}{2 \sum_{k=1}^N c_k^2 }\right) \\ &\le 2 \exp \left( \frac{-\varepsilon^2}{\frac{\pi^2}{3} - \frac{5}{2} }\right) \\ &= 2 \exp \left( \frac{-6 \varepsilon^2}{2 \pi^2 - 15 }\right). \end{align}$$
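As a numerical sketch (the $\varepsilon$ values are arbitrary), one can check the series value and evaluate the final bound; the output also shows that the bound exceeds $1$ for the $\varepsilon$ shown (in fact for every $\varepsilon \le \frac12$), so while valid it is rather loose for this particular martingale:

```python
import math

# Partial sums of sum_{k>=1} c_k^2 = sum_{k>=1} 1/(k+2)^2 should approach
# zeta(2) - 1 - 1/4 = pi^2/6 - 5/4.
target = math.pi ** 2 / 6 - 5 / 4
partial = sum(1.0 / (k + 2) ** 2 for k in range(1, 100_001))
print(f"partial sum {partial:.6f}  vs  pi^2/6 - 5/4 = {target:.6f}")

# Evaluate the uniform-in-N bound 2 exp(-6 eps^2 / (2 pi^2 - 15)).
for eps in (0.1, 0.25, 0.5):
    bound = 2 * math.exp(-6 * eps ** 2 / (2 * math.pi ** 2 - 15))
    print(f"eps = {eps}: bound {bound:.3f}")
```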
The final bound works out so neatly that I kinda suspect you're meant to have access to something more like the result I'm quoting. But anyway I'll leave the problem of fully understanding what assumptions are legal up to you.