You leave it to be inferred, based only on convention, that $\alpha$, rather than $\lambda$, is the shape parameter. There is also the question of whether $\lambda$ is the rate (intensity) parameter, so that the distribution is
$$
\frac{1}{\Gamma(\alpha)}\cdot (\lambda x)^{\alpha-1} e^{-\lambda x}\,(\lambda\,dx)\text{ for }x>0
$$
or its reciprocal, the scale parameter, so that the distribution is
$$
\frac{1}{\Gamma(\alpha)}\cdot (x/\lambda)^{\alpha-1} e^{-x/\lambda}\,(dx/\lambda)\text{ for }x>0.
$$
I'm going to guess that you mean the first of these two alternatives.
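If it helps to see the two conventions side by side, here is a minimal sketch assuming SciPy is available (its `gamma` is parameterized by shape and scale, so a rate $\lambda$ enters as `scale=1/lam`); the values `alpha = 2.5` and `lam = 3.0` are arbitrary illustrations:

```python
# Contrast the rate and scale conventions for the Gamma distribution.
from scipy.stats import gamma

alpha, lam = 2.5, 3.0

rate_param = gamma(a=alpha, scale=1/lam)   # first alternative: lambda is a rate
scale_param = gamma(a=alpha, scale=lam)    # second alternative: lambda is a scale

print(rate_param.mean())   # alpha / lambda = 0.8333...
print(scale_param.mean())  # alpha * lambda = 7.5
```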
Once you understand the integral that defines the Gamma function, you're almost done. You have
$$
\begin{align}
\mathbb E(X^r) & = \int_0^\infty x^r \frac{1}{\Gamma(\alpha)}\cdot (\lambda x)^{\alpha-1} e^{-\lambda x}\,(\lambda\,dx) \\[12pt]
& = \frac{1}{\Gamma(\alpha)} \cdot\frac{1}{\lambda^r} \int_0^\infty (\lambda x)^{r+\alpha-1} e^{-\lambda x}\,(\lambda\,dx) \\[12pt]
& = \frac{1}{\Gamma(\alpha)}\cdot\frac{1}{\lambda^r} \int_0^\infty u^{r+\alpha-1} e^{-u}\,du \\[12pt]
& = \frac{1}{\Gamma(\alpha)}\cdot\frac{1}{\lambda^r}\cdot\Gamma(r+\alpha). \tag1
\end{align}
$$
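As a sanity check of $(1)$ under the rate convention, one can compare a numerically integrated moment against the closed form; this is a sketch assuming SciPy, with arbitrary parameter values and a deliberately non-integer $r$:

```python
# Numerical check of (1): E[X^r] = Gamma(r + alpha) / (Gamma(alpha) * lambda^r).
import numpy as np
from scipy import integrate
from scipy.special import gamma as G

alpha, lam, r = 2.5, 3.0, 1.7   # arbitrary values; r need not be an integer here

def integrand(x):
    # x^r times the Gamma(alpha, rate=lam) density
    return x**r * lam**alpha * x**(alpha - 1) * np.exp(-lam * x) / G(alpha)

numeric, _ = integrate.quad(integrand, 0, np.inf)
closed_form = G(r + alpha) / (G(alpha) * lam**r)
print(numeric, closed_form)  # the two should agree to quadrature accuracy
```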
This bears simplification. Assuming $r$ is a positive integer, repeated application of the identity $\Gamma(z) = (z-1)\Gamma(z-1)$ gives
$$
\begin{align}
\Gamma(r+\alpha) & = (r+\alpha-1)\Gamma(r+\alpha-1)= (r+\alpha-1)(r+\alpha-2)\Gamma(r+\alpha-2)=\cdots \\[12pt]
& \cdots=(r+\alpha-1)(r+\alpha-2)(r+\alpha-3)\cdots(r+\alpha-r)\Gamma(\alpha),
\end{align}
$$
where the last factor $r+\alpha-r$ is just $\alpha$.
Then the fraction in $(1)$ reduces to
$$
\mathbb E(X^r) = \frac{(r+\alpha-1)(r+\alpha-2)\cdots\alpha}{\lambda^r}.
$$
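To see the telescoping concretely, here is a small check (standard library only; the values of `alpha` and `r` are arbitrary) that the product of the $r$ descending factors times $\Gamma(\alpha)$ recovers $\Gamma(r+\alpha)$:

```python
# Check: Gamma(r + alpha) == (r+alpha-1)(r+alpha-2)...(alpha) * Gamma(alpha)
# for a positive integer r.
import math

alpha, r = 2.5, 4

product = 1.0
for k in range(1, r + 1):
    product *= (r + alpha - k)   # factors r+alpha-1 down to alpha

print(product * math.gamma(alpha))  # should match the line below
print(math.gamma(r + alpha))
```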
Plugging in the definitions: $X_1 \sim \operatorname{Gamma}(a+b,\,1)$ means its density is
$$f_{X_1}(x_1) = \frac1{ \Gamma(a+b)}\, x_1^{a+b-1} e^{-x_1} \qquad \text{for}~~ 0 < x_1 < \infty $$
The density of $X_2 \sim \operatorname{Beta}(a,\,b)$ is
$$f_{X_2}(x_2) = \frac{ \Gamma(a+b) }{ \Gamma(a) \Gamma(b) }\, x_2^{a-1} (1 - x_2)^{b-1} \qquad \text{for}~~ 0<x_2<1$$
The fact that $X_1 \perp X_2$ means their joint density is simply the product
$$f_{X_1X_2}(x_1,\, x_2) = \frac{ x_1^{a+b-1} e^{-x_1} \, x_2^{a-1} (1 - x_2)^{b-1} }{ \Gamma(a) \Gamma(b) } \qquad \text{for}~~ \begin{cases}
0<x_1<\infty \\
0<x_2<1 \end{cases} \tag{2}\label{joint density}$$
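As a quick pointwise sanity check of Eq. $(2)$, it should equal the product of the two marginal densities; this is a sketch assuming SciPy, with arbitrary shape parameters and an arbitrary point in the support:

```python
# Pointwise check: the joint density in Eq. (2) equals the product of the
# Gamma(a+b, 1) pdf of X1 and the Beta(a, b) pdf of X2.
import numpy as np
from scipy.special import gamma as G
from scipy.stats import gamma, beta

a, b = 2.0, 3.5    # arbitrary shape parameters
x1, x2 = 1.3, 0.4  # arbitrary point with 0 < x1, 0 < x2 < 1

joint = x1**(a+b-1) * np.exp(-x1) * x2**(a-1) * (1-x2)**(b-1) / (G(a) * G(b))
print(joint)
print(gamma(a=a+b).pdf(x1) * beta(a, b).pdf(x2))  # should match
```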
The two-dimensional transformation is
$$\begin{cases}
Y_1 = X_1 X_2 \\[1.5ex]
Y_2 = X_1 (1 - X_2)
\end{cases} \Longleftrightarrow \begin{cases}
X_1 = Y_1 + Y_2 \\[2ex]
X_2 = \dfrac{ Y_1 }{ Y_1 + Y_2}
\end{cases} \qquad \text{where}~~ \begin{cases}
0<y_1<\infty \\
0<y_2<\infty \end{cases}$$
with the Jacobian (of the inverse mapping) as
$$J = \left| \begin{matrix} \dfrac{ \partial x_1}{ \partial y_1} & \dfrac{ \partial x_1}{ \partial y_2} \\
\dfrac{ \partial x_2}{ \partial y_1} & \dfrac{ \partial x_2}{ \partial y_2}\end{matrix} \right| = \left| \begin{matrix} 1 & 1 \\
\dfrac{ y_2 }{ (y_1 +y_2)^2 } & \dfrac{ -y_1 }{ (y_1 +y_2)^2 } \end{matrix} \right| = \frac{-1}{ y_1 + y_2 }$$
The transformed joint density for $Y_1$ and $Y_2$ is
\begin{align}
f_{Y_1Y_2}( y_1 ,~y_2 ) &= |J| \cdot f_{X_1X_2}( x_1,\, x_2)\Bigg|_{\substack{x_1 = y_1+y_2 \\ x_2 = \frac{y_1}{y_1 + y_2}}} \qquad \text{plugging in Eq. (\ref{joint density})}\\
&= \frac1{ y_1 + y_2} \cdot \frac{ (y_1 + y_2)^{a+b-1} e^{-(y_1 + y_2)} }{ \Gamma(a) \Gamma(b) }\, \left(\frac{y_1}{ y_1 + y_2}\right)^{a-1} \left(\frac{y_2}{ y_1 + y_2} \right)^{b-1} \\
&= \frac1{ \Gamma(a) \Gamma(b) }\, y_1^{a-1} y_2^{b-1} e^{-(y_1 + y_2)} \qquad \text{for}~~0<y_1<\infty ,~0<y_2<\infty
\end{align}
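The algebra in the last line can be spot-checked numerically; here is a sketch assuming SciPy, at an arbitrary point $(y_1, y_2)$ with the same arbitrary $a, b$ as above:

```python
# Spot-check: |J| * f_{X1 X2} evaluated at the inverse map equals the
# product of the Gamma(a,1) and Gamma(b,1) densities at (y1, y2).
import numpy as np
from scipy.special import gamma as G
from scipy.stats import gamma

a, b = 2.0, 3.5
y1, y2 = 0.8, 1.9   # arbitrary point with y1, y2 > 0

x1, x2 = y1 + y2, y1 / (y1 + y2)
joint_x = x1**(a+b-1) * np.exp(-x1) * x2**(a-1) * (1-x2)**(b-1) / (G(a) * G(b))
lhs = joint_x / (y1 + y2)                       # |J| = 1/(y1 + y2)
rhs = gamma(a=a).pdf(y1) * gamma(a=b).pdf(y2)   # factorized form
print(lhs, rhs)  # should agree
```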
The marginal density of $Y_1$ can be obtained from the joint as
\begin{align}
f_{Y_1}(y_1) &= \int_{y_2 = 0}^{\infty} f_{Y_1Y_2}( y_1 ,~y_2 ) \,\mathrm{d}y_2 \\
&= \frac1{\Gamma(a)} y_1^{a-1} e^{-y_1} \int_{y_2 = 0}^{\infty} \frac1{\Gamma(b)} y_2^{b-1} e^{-y_2} \,\mathrm{d}y_2 \qquad \small\begin{aligned}[c]
&\text{the integral is the Gamma}(b,1) \\
&\text{density, so it equals }1\end{aligned} \\
&= \frac1{\Gamma(a)} y_1^{a-1} e^{-y_1}
\end{align}
Thus one identifies the distribution of $Y_1$ as $\operatorname{Gamma}(a,1)$.
Similarly, or by noting the symmetry in the joint $f_{Y_1Y_2}( y_1 ,~y_2 )$, $Y_2$ follows $\operatorname{Gamma}(b,1)$.
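To make the whole result tangible, here is a small Monte Carlo sketch (assuming NumPy and SciPy; the sample size and parameters are arbitrary) that simulates $X_1, X_2$, applies the transformation, and compares the empirical marginals against $\operatorname{Gamma}(a,1)$ and $\operatorname{Gamma}(b,1)$ via Kolmogorov–Smirnov tests:

```python
# Monte Carlo check: Y1 = X1*X2 ~ Gamma(a,1) and Y2 = X1*(1-X2) ~ Gamma(b,1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a, b, n = 2.0, 3.5, 100_000

x1 = rng.gamma(shape=a + b, scale=1.0, size=n)  # X1 ~ Gamma(a+b, 1)
x2 = rng.beta(a, b, size=n)                     # X2 ~ Beta(a, b), independent

y1, y2 = x1 * x2, x1 * (1 - x2)

print(stats.kstest(y1, 'gamma', args=(a,)))  # large p-values expected
print(stats.kstest(y2, 'gamma', args=(b,)))  # if the claimed marginals hold
```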
There is no such thing as an alpha function, and there is no such thing as an alpha distribution. That we have beta and gamma distributions is mere coincidence, much as Gamow playfully inserted Bethe's name into the Alpher–Bethe–Gamow paper, which he had actually written only with Alpher.
However, there are the Lévy alpha-stable distributions, a family with four parameters, one of which, $\alpha$, is the stability parameter, arguably the most important of the four.
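If you want to experiment with that family, SciPy ships an implementation; a minimal sketch, with arbitrary parameter values:

```python
# Sample from a Levy alpha-stable law; alpha=2 recovers the Gaussian and
# alpha=1 (with beta=0) the Cauchy distribution.
from scipy.stats import levy_stable

alpha, beta = 1.5, 0.0   # stability and skewness parameters
samples = levy_stable.rvs(alpha, beta, size=5, random_state=0)
print(samples)
```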