Plugging in the definitions: since $X_1 \sim \operatorname{Gamma}(a+b,1)$, its density is
$$f_{X_1}(x_1) = \frac1{ \Gamma(a+b)}\, x_1^{a+b-1} e^{-x_1} \qquad \text{for}~~ 0 < x_1 < \infty $$
The density of $X_2$ is
$$f_{X_2}(x_2) = \frac{ \Gamma(a+b) }{ \Gamma(a) \Gamma(b) }\, x_2^{a-1} (1 - x_2)^{b-1} \qquad \text{for}~~ 0<x_2<1$$
Since $X_1 \perp X_2$, their joint density is just the product of the marginals:
$$f_{X_1X_2}(x_1,\, x_2) = \frac{ x_1^{a+b-1} e^{-x_1} \, x_2^{a-1} (1 - x_2)^{b-1} }{ \Gamma(a) \Gamma(b) } \qquad \text{for}~~ \begin{cases}
0<x_1<\infty \\
0<x_2<1 \end{cases} \tag{1}\label{joint density}$$
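As a quick numerical sanity check (not part of the proof), Eq. (1) should integrate to one over its support. A minimal sketch in Python, with $a=2$, $b=3$ chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma

a, b = 2.0, 3.0  # arbitrary illustrative values

def joint(x2, x1):
    # joint density from Eq. (1); dblquad passes the inner variable (x2) first
    return (x1**(a + b - 1) * np.exp(-x1)
            * x2**(a - 1) * (1 - x2)**(b - 1)) / (gamma(a) * gamma(b))

total, _ = dblquad(joint, 0, np.inf, 0, 1)  # x1 over (0, inf), x2 over (0, 1)
print(total)  # ~ 1.0
```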
The two-dimensional transformation is
$$\begin{cases}
Y_1 = X_1 X_2 \\[1.5ex]
Y_2 = X_1 (1 - X_2)
\end{cases} \Longleftrightarrow \begin{cases}
X_1 = Y_1 + Y_2 \\[2ex]
X_2 = \dfrac{ Y_1 }{ Y_1 + Y_2}
\end{cases} \qquad \text{where}~~ \begin{cases}
0<y_1<\infty \\
0<y_2<\infty \end{cases}$$
with the Jacobian (of the inverse mapping) as
$$J = \left| \begin{matrix} \dfrac{ \partial x_1}{ \partial y_1} & \dfrac{ \partial x_1}{ \partial y_2} \\
\dfrac{ \partial x_2}{ \partial y_1} & \dfrac{ \partial x_2}{ \partial y_2}\end{matrix} \right| = \left| \begin{matrix} 1 & 1 \\
\dfrac{ y_2 }{ (y_1 +y_2)^2 } & \dfrac{ -y_1 }{ (y_1 +y_2)^2 } \end{matrix} \right| = \frac{-1}{ y_1 + y_2 }$$
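Both the inverse mapping and this Jacobian can be double-checked symbolically; a minimal sketch with sympy (the round trip at the end confirms that the forward map applied to the inverse recovers $(y_1, y_2)$):

```python
import sympy as sp

y1, y2 = sp.symbols("y1 y2", positive=True)
x1 = y1 + y2            # inverse mapping: X1 = Y1 + Y2
x2 = y1 / (y1 + y2)     # inverse mapping: X2 = Y1 / (Y1 + Y2)

# Jacobian of (x1, x2) with respect to (y1, y2)
J = sp.Matrix([x1, x2]).jacobian([y1, y2])
print(sp.simplify(J.det()))                              # -1/(y1 + y2)

# round trip: Y1 = X1*X2 and Y2 = X1*(1 - X2) are recovered
print(sp.simplify(x1 * x2), sp.simplify(x1 * (1 - x2)))  # y1  y2
```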
The transformed joint density for $Y_1$ and $Y_2$ is
\begin{align}
f_{Y_1Y_2}( y_1 ,~y_2 ) &= |J| \cdot f_{X_1X_2}( x_1,\, x_2)\Bigg|_{\substack{x_1 = y_1+y_2 \\ x_2 = \frac{y_1}{y_1 + y_2}}} \qquad \text{(plugging in Eq.~(\ref{joint density}))}\\
&= \frac1{ y_1 + y_2} \cdot \frac{ (y_1 + y_2)^{a+b-1} e^{-(y_1 + y_2)} }{ \Gamma(a) \Gamma(b) }\, \left(\frac{y_1}{ y_1 + y_2}\right)^{a-1} \left(\frac{y_2}{ y_1 + y_2} \right)^{b-1} \\
&= \frac1{ \Gamma(a) \Gamma(b) }\, y_1^{a-1} y_2^{b-1} e^{-(y_1 + y_2)} \qquad \text{for}~~0<y_1<\infty ,~0<y_2<\infty
\end{align}
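To guard against algebra slips, the simplification in the last line can be spot-checked numerically; a sketch comparing $|J| \cdot f_{X_1X_2}$ against the simplified form at an arbitrary point, again with $a=2$, $b=3$ picked for illustration:

```python
import numpy as np
from scipy.special import gamma

a, b = 2.0, 3.0       # illustrative parameters
y1, y2 = 0.7, 1.9     # arbitrary test point

x1, x2 = y1 + y2, y1 / (y1 + y2)  # inverse mapping
# |J| * f_{X1X2}(x1, x2), before simplification
lhs = (1 / (y1 + y2)) * (x1**(a + b - 1) * np.exp(-x1)
                         * x2**(a - 1) * (1 - x2)**(b - 1)) / (gamma(a) * gamma(b))
# simplified joint density of (Y1, Y2)
rhs = y1**(a - 1) * y2**(b - 1) * np.exp(-(y1 + y2)) / (gamma(a) * gamma(b))
print(np.isclose(lhs, rhs))  # True
```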
The marginal density of $Y_1$ can be obtained from the joint as
\begin{align}
f_{Y_1}(y_1) &= \int_{y_2 = 0}^{\infty} f_{Y_1Y_2}( y_1 ,~y_2 ) \,\mathrm{d}y_2 \\
&= \frac1{\Gamma(a)} y_1^{a-1} e^{-y_1} \int_{y_2 = 0}^{\infty} \frac1{\Gamma(b)} y_2^{b-1} e^{-y_2} \,\mathrm{d}y_2 \qquad \small\begin{aligned}[c]
&\text{the integrand is the $\operatorname{Gamma}(b,1)$} \\
&\text{density, so the integral is one}\end{aligned} \\
&= \frac1{\Gamma(a)} y_1^{a-1} e^{-y_1}
\end{align}
Thus one identifies the distribution of $Y_1$ as $\operatorname{Gamma}(a,1)$.
Similarly, or by noting the symmetry of the joint $f_{Y_1Y_2}( y_1 ,~y_2 )$, we have $Y_2 \sim \operatorname{Gamma}(b,1)$.
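Both conclusions (the marginals, and independence, since the joint factorizes) can also be checked by simulation; a minimal sketch with $a = 2$, $b = 3$ chosen for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a, b, n = 2.0, 3.0, 100_000

x1 = rng.gamma(shape=a + b, scale=1.0, size=n)  # X1 ~ Gamma(a+b, 1)
x2 = rng.beta(a, b, size=n)                     # X2 ~ Beta(a, b), independent
y1, y2 = x1 * x2, x1 * (1 - x2)

print(stats.kstest(y1, "gamma", args=(a,)))  # consistent with Gamma(a, 1)
print(stats.kstest(y2, "gamma", args=(b,)))  # consistent with Gamma(b, 1)
print(np.corrcoef(y1, y2)[0, 1])             # ~ 0 (necessary for independence)
```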
Smiley is right. Instead, go for finding, e.g.,
$$P(X_1<X_2)=1-P(X_1\geq X_2)=1-\int_0^\infty P(X_1\geq x)\,f_{X_2}(x) \,\mathrm{d}x=1-\int_0^\infty e^{-x}f_{X_2}(x) \,\mathrm{d}x$$
(and leave $X_2-X_1$ for what it is!)
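This identity can be cross-checked numerically. Note the $e^{-x}$ term presumes $P(X_1\geq x)=e^{-x}$, i.e. $X_1 \sim \operatorname{Exp}(1)$; the sketch below additionally takes $X_2 \sim \operatorname{Gamma}(2,1)$ purely for illustration, since this excerpt does not fix $f_{X_2}$:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(0)
n = 1_000_000
x1 = rng.exponential(size=n)           # X1 ~ Exp(1), as the e^{-x} term implies
x2 = rng.gamma(shape=2.0, size=n)      # X2 ~ Gamma(2, 1), an assumed example

mc = np.mean(x1 < x2)  # direct Monte Carlo estimate of P(X1 < X2)
integral, _ = quad(lambda x: np.exp(-x) * stats.gamma.pdf(x, 2.0), 0, np.inf)
print(mc, 1 - integral)  # the two estimates should agree (both ~ 0.75 here)
```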
Best Answer
Standard proof:
Let $Y_1 = X_1 + X_2 \sim \operatorname{Gamma}(\alpha_1+\alpha_2,\beta)$ and $Y_2=\frac{X_1}{X_1+X_2} \sim \operatorname{Beta}(\alpha_1, \alpha_2)$.
Then
$X_1 = Y_1\cdot Y_2 = v_1(Y_1,Y_2)$,
$X_2 = Y_1 - X_1 = Y_1 - Y_1\cdot Y_2 = Y_1(1-Y_2) = v_2(Y_1,Y_2)$.
By the change-of-variables theorem, we have that $$ f_Y(y_1,y_2) = |J_v| f_X\circ v(y_1,y_2) = |J_v| f_{X_1}(y_1\cdot y_2) f_{X_2}(y_1\cdot(1-y_2)) $$
Computing the Jacobian is straightforward:
$$ |J_v| = \begin{vmatrix} \frac{\partial v_1}{\partial y_1} & \frac{\partial v_1}{\partial y_2} \\ \frac{\partial v_2}{\partial y_1} & \frac{\partial v_2}{\partial y_2} \end{vmatrix} = \begin{vmatrix} y_2 & y_1 \\ 1-y_2 & -y_1 \end{vmatrix} = |-y_1 y_2 - y_1(1-y_2)| = |-y_1| \stackrel{(1)}{=} y_1 $$
(1) since the support of a Gamma is positive
so we have that
\begin{align}
f_Y(y_1,y_2) &= y_1\, f_{X_1}(y_1\cdot y_2)\, f_{X_2}(y_1\cdot(1-y_2)) \\
&= y_1\, \frac{1}{\Gamma(\alpha_1)} \frac{1}{\beta^{\alpha_1}} (y_1\cdot y_2)^{\alpha_1-1} e^{-y_1\cdot y_2 / \beta}\, \frac{1}{\Gamma(\alpha_2)} \frac{1}{\beta^{\alpha_2}} (y_1\cdot(1-y_2))^{\alpha_2-1} e^{- (y_1\cdot(1-y_2)) / \beta} \\
&\stackrel{(2)}{\propto} y_1^{1 + (\alpha_1 - 1) + (\alpha_2 - 1)}\, y_2^{\alpha_1-1} (1-y_2)^{\alpha_2-1}\, e^{-\frac{y_1y_2}{\beta} - \frac{y_1(1-y_2)}{\beta}} \\
&= \left(y_1^{\alpha_1 + \alpha_2 - 1} e^{-\frac{y_1}{\beta}}\right) \left(y_2^{\alpha_1-1} (1-y_2)^{\alpha_2-1}\right) \\
&\propto f_{Y_1}(y_1)\, f_{Y_2}(y_2)
\end{align}
(2) we can drop the constants: every density must integrate to one, so the normalizing constants will coincide at the end.
So $f_Y(y_1,y_2) = f_{Y_1}(y_1)\, f_{Y_2}(y_2)$, i.e., $Y_1$ and $Y_2$ are independent.
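This result, too, is easy to check by simulation; a minimal sketch, with $\alpha_1 = 2$, $\alpha_2 = 3$, $\beta = 1.5$ chosen arbitrarily for illustration (note $\beta$ is the scale parameter here, matching the $e^{-x/\beta}$ densities above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a1, a2, beta, n = 2.0, 3.0, 1.5, 100_000

x1 = rng.gamma(shape=a1, scale=beta, size=n)  # X1 ~ Gamma(alpha1, beta)
x2 = rng.gamma(shape=a2, scale=beta, size=n)  # X2 ~ Gamma(alpha2, beta)
y1, y2 = x1 + x2, x1 / (x1 + x2)

print(stats.kstest(y1, "gamma", args=(a1 + a2, 0, beta)))  # Gamma(a1+a2, beta)
print(stats.kstest(y2, "beta", args=(a1, a2)))             # Beta(a1, a2)
print(np.corrcoef(y1, y2)[0, 1])                           # ~ 0
```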