If $X_i \sim \operatorname{Gamma}(\alpha,\beta)$ independently for $i = 1, \ldots, n$, where $\alpha$ is the shape and $\beta$ the scale parameter, then
$$ \mathbb{E}\left[ X_i \right] = \alpha \beta \quad \quad \mbox{and} \quad \quad \mathbb{V}\mbox{ar}\left[ X_i \right] = \alpha \beta^2 $$
Since the sum of $n$ i.i.d. $\operatorname{Gamma}(\alpha, \beta)$ variables is $\operatorname{Gamma}(n\alpha, \beta)$, and rescaling by $1/n$ divides the scale parameter by $n$,
$$ \bar{X} \sim \operatorname{Gamma}\left(n \alpha, \beta/n \right), $$
which means
$$ \mathbb{E}\left[ \bar{X}\right] = \alpha\beta \quad \quad \mbox{and} \quad \quad \mathbb{V}\mbox{ar}\left[\bar{X}\right] = \alpha \beta^2/n $$
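As a quick numerical sanity check (not part of the argument), here is a minimal Monte Carlo sketch in Python; the values $\alpha = 2$, $\beta = 3$, $n = 5$ are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, n = 2.0, 3.0, 5      # illustrative shape, scale, sample size
reps = 200_000                    # Monte Carlo replications

# Rows are independent samples X_1, ..., X_n; numpy's gamma takes shape and scale.
X = rng.gamma(shape=alpha, scale=beta, size=(reps, n))
Xbar = X.mean(axis=1)

print(Xbar.mean(), alpha * beta)           # both close to 6
print(Xbar.var(), alpha * beta**2 / n)     # both close to 3.6
```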
Now suppose that, conditional on $X_i$,
$$ Y_i | X_i \sim \operatorname{Gamma}\left(\alpha, \beta X_i \right), $$
where $\beta X_i$ is again a scale parameter, so that
$$ \mathbb{E}\left[ Y_i | X_i \right] =\alpha \beta X_i \quad \quad \mbox{and} \quad \quad \mathbb{V}\mbox{ar}\left[Y_i | X_i \right] = \alpha (\beta X_i )^2 $$
From the law of total expectation we have
\begin{equation}
\begin{split}
\mathbb{E}\left[\frac{\bar{Y}}{\bar{X}}\right]&=
\mathbb{E}\left[ \mathbb{E}\left[ \frac{\bar{Y}}{\bar{X}} \,\middle|\, X_1, \ldots, X_n \right] \right] \\
&=
\mathbb{E}\left[ \frac{1}{\bar{X}} \frac{1}{n} \sum_{i=1}^n\mathbb{E} [ Y_i \big| X_1, \ldots, X_n ] \right] \\
&=
\mathbb{E}\left[ \frac{1}{\bar{X}} \frac{1}{n} \sum_{i=1}^n\mathbb{E}[ Y_i \big| X_i ] \right] \\
& = \mathbb{E}\left[ \frac{1}{\bar{X}} \frac{1}{n} \sum_{i=1}^n \alpha \beta X_i \right] \\
& = \alpha \beta \mathbb{E}\left[ \frac{1}{\bar{X}} \frac{1}{n} \sum_{i=1}^n X_i \right] \\
& = \alpha \beta \mathbb{E}\left[ \frac{1}{\bar{X}} \bar{X} \right] \\
& = \alpha \beta \mathbb{E}\left[ 1 \right] \\
& = \alpha \beta
\end{split}
\end{equation}
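This can be checked empirically by simulating the hierarchical model; a sketch under the same illustrative parameter choices as above:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, n = 2.0, 3.0, 5
reps = 200_000

X = rng.gamma(shape=alpha, scale=beta, size=(reps, n))
# Conditional on X_i, draw Y_i with shape alpha and scale beta * X_i.
Y = rng.gamma(shape=alpha, scale=beta * X)

ratio = Y.mean(axis=1) / X.mean(axis=1)
print(ratio.mean(), alpha * beta)   # both close to 6
```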
From the law of total variance we have
\begin{equation*}
\begin{split}
\mathbb{V}\mbox{ar}\left[\frac{\bar{Y}}{\bar{X}}\right] &=
\mathbb{V}\mbox{ar}\left[ \mathbb{E}\left[ \frac{\bar{Y}}{\bar{X}} \,\middle|\, X_1, \ldots, X_n \right] \right] + \mathbb{E}\left[ \mathbb{V}\mbox{ar} \left[ \frac{\bar{Y}}{\bar{X}} \,\middle|\, X_1, \ldots, X_n \right] \right] \\
&=
\mathbb{V}\mbox{ar}\left[ \frac{1}{\bar{X} } \frac{1}{n} \mathbb{E}\left[ \sum_{i=1}^n Y_i \,\middle|\, X_1, \ldots, X_n \right] \right] + \mathbb{E}\left[ \frac{1}{\bar{X}^2 } \frac{1}{n^2} \mathbb{V}\mbox{ar} \left[ \sum_{i=1}^n Y_i \,\middle|\, X_1, \ldots, X_n \right] \right] \\
&=
\mathbb{V}\mbox{ar}\left[ \frac{1}{\bar{X} } \frac{1}{n} \sum_{i=1}^n \mathbb{E}\left[ Y_i\big| X_i\right] \right] + \mathbb{E}\left[ \frac{1}{\bar{X}^2 } \frac{1}{n^2} \sum_{i=1}^n \mathbb{V}\mbox{ar} [ Y_i \big| X_i] \right] \\
&= \mathbb{V}\mbox{ar}\left[ \frac{1}{\bar{X} } \frac{1}{n} \sum_{i=1}^n \alpha \beta X_i \right] +\mathbb{E}\left[ \frac{1}{\bar{X}^2 } \frac{1}{n^2} \sum_{i=1}^n \alpha (\beta X_i )^2 \right] \\
&= \alpha^2 \beta^2 \mathbb{V}\mbox{ar}\left[ \frac{1}{\bar{X} } \bar{X} \right] +\mathbb{E}\left[ \frac{n^2}{ (\sum_{i=1}^n X_i)^2 } \frac{\alpha \beta^2}{n^2} \sum_{i=1}^n X_i^2 \right] \\
&= \alpha^2 \beta^2 \mathbb{V}\mbox{ar}\left[ 1 \right] + \alpha \beta^2 \mathbb{E}\left[ \frac{1}{ (\sum_{i=1}^n X_i)^2 } \sum_{i=1}^n X_i^2 \right] \\
&= \alpha \beta^2 \mathbb{E}\left[ \frac{ \sum_{i=1}^n X_i^2 }{ (\sum_{i=1}^n X_i)^2 } \right]
\end{split}
\end{equation*}
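Both sides of this last identity can be estimated from one simulation run; again a sketch, with all parameter values purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, n = 2.0, 3.0, 5
reps = 500_000

X = rng.gamma(shape=alpha, scale=beta, size=(reps, n))
Y = rng.gamma(shape=alpha, scale=beta * X)

lhs = (Y.mean(axis=1) / X.mean(axis=1)).var()                          # Var(Ybar / Xbar)
rhs = alpha * beta**2 * ((X**2).sum(axis=1) / X.sum(axis=1)**2).mean()
print(lhs, rhs)   # the two estimates agree up to Monte Carlo error
```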
Suppose now that $(X_1, \dots, X_n) \sim \text{Dir}(\alpha_1, \dots, \alpha_n)$. Then the distribution of $(X_1, X_2)$ is given by $\text{Dir}(\alpha_1, \alpha_2, \sum_{i = 3}^{n} \alpha_i)$ (proof below), so the joint density of $(X_1, X_2)$ is
$$f_{X_1, X_2}(x_1, x_2) = \frac{\Gamma(\sum_{i = 1}^{n}\alpha_i)}{\Gamma(\alpha_{1})\Gamma(\alpha_{2})\Gamma(\sum_{i = 3}^{n}\alpha_i)}x_1^{\alpha_1 - 1}x_2^{\alpha_2 - 1}(1 - x_1 - x_2)^{(\sum_{i = 3}^n \alpha_i) - 1},$$
where $x_1, x_2 > 0$ and $x_1 + x_2 < 1$.
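This marginalization can be checked by simulation, since numpy samples the Dirichlet directly; the parameter vector below is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(3)
a = np.array([1.5, 2.0, 0.7, 3.0])   # illustrative alpha_1, ..., alpha_n with n = 4
reps = 200_000

full = rng.dirichlet(a, size=reps)                               # (X_1, ..., X_n)
collapsed = rng.dirichlet([a[0], a[1], a[2:].sum()], size=reps)  # claimed marginal

# The first two coordinates should have matching moments under both schemes.
print(full[:, :2].mean(axis=0), collapsed[:, :2].mean(axis=0))
print(full[:, :2].var(axis=0), collapsed[:, :2].var(axis=0))
```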
Now we do a transformation of variables: Let $Z = X_1 + X_2$ and $X = X_1$ (I'm using $Z, X$ instead of $Z_1, X_1$ because, later on, the subscripts look messy otherwise). Then
$$f_{X, Z}(x, z) = f_{X_1, X_2}(x_1(x, z), x_2(x, z))\left|\det\left(\frac{d(x_1, x_2)}{d(x, z)} \right) \right|.$$
Here $x_1(x, z) = x$ and $x_2(x, z) = z - x$; the Jacobian determinant is $1$, and so the joint density is
$$f_{X, Z}(x, z) = \frac{\Gamma(\sum_{i = 1}^{n}\alpha_i)}{\Gamma(\alpha_{1})\Gamma(\alpha_{2})\Gamma(\sum_{i = 3}^{n}\alpha_i)}x^{\alpha_1 - 1}(z - x)^{\alpha_2 - 1}(1 - z)^{(\sum_{i = 3}^n \alpha_i) - 1},$$
where $x > 0$, $z > x$ and $z < 1$.
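The Jacobian computation is easy to confirm symbolically; a minimal sketch with sympy, using the inverse map $x_1 = x$, $x_2 = z - x$:

```python
import sympy as sp

x, z = sp.symbols('x z', positive=True)
x1, x2 = x, z - x                  # inverse of the map Z = X_1 + X_2, X = X_1

J = sp.Matrix([x1, x2]).jacobian([x, z])
print(J)           # Matrix([[1, 0], [-1, 1]])
print(J.det())     # 1
```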
The marginal distribution of $Z$ is $\text{Dir}(\alpha_1 + \alpha_2, \sum_{i = 3}^{n} \alpha_i)$, i.e. $\text{Beta}(\alpha_1 + \alpha_2, \sum_{i = 3}^{n} \alpha_i)$, so the conditional density of $X$ given $Z$ is
$$f_{X \mid Z}(x \mid z) = \frac{f_{X, Z}(x, z)}{f_{Z}(z)} = \frac{\frac{\Gamma(\sum_{i = 1}^{n}\alpha_i)}{\Gamma(\alpha_{1})\Gamma(\alpha_{2})\Gamma(\sum_{i = 3}^{n}\alpha_i)}x^{\alpha_1 - 1}(z - x)^{\alpha_2 - 1}(1 - z)^{(\sum_{i = 3}^n \alpha_i) - 1}}{\frac{\Gamma(\sum_{i = 1}^{n}\alpha_i)}{\Gamma(\alpha_{1} + \alpha_{2})\Gamma(\sum_{i = 3}^{n}\alpha_i)}z^{\alpha_1 + \alpha_2 - 1}(1 - z)^{(\sum_{i = 3}^n \alpha_i) - 1}},$$
which simplifies to
$$f_{X \mid Z}(x \mid z) = \frac{1}{z}\frac{\Gamma(\alpha_1 + \alpha_2)}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\left(\frac{x}{z}\right)^{\alpha_1 - 1}\left(1 - \frac{x}{z}\right)^{\alpha_2 - 1}.$$
That is, conditional on $Z = z$, the ratio $X/z$ follows a $\text{Beta}(\alpha_1, \alpha_2)$ distribution, whatever the value of $z$.
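A quick empirical check of this conditional Beta law (illustrative parameters again):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
a = np.array([1.5, 2.0, 0.7, 3.0])   # same illustrative parameters as before
samples = rng.dirichlet(a, size=200_000)

ratio = samples[:, 0] / (samples[:, 0] + samples[:, 1])   # X / Z
print(ratio.mean(), stats.beta.mean(a[0], a[1]))   # both close to the Beta(a1, a2) mean
print(ratio.var(), stats.beta.var(a[0], a[1]))     # and variance
```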
Proof that the distribution of $(X_1, X_2)$ is given by $\text{Dir}(\alpha_1, \alpha_2, \sum_{i = 3}^{n} \alpha_i)$:
Let $I(x, k, n) = \{(x_k, \dots, x_n): x_k, \dots, x_n > 0, \sum_{i = k}^n x_i = x\}$. We know that $X_1 \sim \text{Beta}(\alpha_1, \sum_{i = 2}^{n} \alpha_i)$. Therefore
$$\int_{I(1 - x_1, 2, n)} f_{X_1, \dots, X_n}(x_1, \dots, x_n) dx_2\cdots dx_n = \frac{\Gamma(\sum_{i = 1}^{n}\alpha_i)}{\Gamma(\alpha_{1})\Gamma(\sum_{i = 2}^{n}\alpha_i)}x_1^{\alpha_1 - 1}(1 - x_1)^{(\sum_{i = 2}^n \alpha_i) - 1}.$$
Hence
$$\int_{I(1 - x_1, 2, n)} x_2^{\alpha_2 - 1}\cdots x_n^{\alpha_n - 1} dx_2\cdots dx_n = \frac{\Gamma(\alpha_2)\cdots\Gamma(\alpha_n)}{\Gamma(\sum_{i = 2}^n \alpha_i)}(1 - x_1)^{(\sum_{i = 2}^n \alpha_i) - 1}.$$
Therefore
$$f_{X_1, X_2}(x_1, x_2) = \frac{\Gamma(\sum_{i = 1}^n \alpha_i)}{\Gamma(\alpha_1)\cdots\Gamma(\alpha_n)}x_1^{\alpha_1 - 1}x_2^{\alpha_2 - 1} \int_{I(1 - x_1 - x_2, 3, n)} x_3^{\alpha_3 - 1}\cdots x_n^{\alpha_n - 1} dx_3\cdots dx_n$$
$$= \frac{\Gamma(\sum_{i = 1}^{n}\alpha_i)}{\Gamma(\alpha_{1})\Gamma(\alpha_2)\Gamma(\sum_{i = 3}^{n}\alpha_i)}x_1^{\alpha_1 - 1}x_2^{\alpha_2 - 1}(1 - x_1 - x_2)^{(\sum_{i = 3}^n \alpha_i) - 1}.$$
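As a spot check of the integral identity in the "Hence" step: in the smallest nontrivial case $n = 3$, substituting $x_2 = (1 - x_1)t$ pulls out the factor $(1 - x_1)^{\alpha_2 + \alpha_3 - 1}$ and leaves a standard Beta integral, which sympy evaluates symbolically:

```python
import sympy as sp

a2, a3, t = sp.symbols('alpha_2 alpha_3 t', positive=True)

# After substituting x_2 = (1 - x_1) t, the inner integral reduces to the
# standard Beta integral below, times (1 - x_1)^(alpha_2 + alpha_3 - 1).
lhs = sp.integrate(t**(a2 - 1) * (1 - t)**(a3 - 1), (t, 0, 1))
rhs = sp.gamma(a2) * sp.gamma(a3) / sp.gamma(a2 + a3)
print(sp.simplify(sp.expand_func(lhs) - rhs))   # 0
```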
Best Answer
Your mistake is conflating $X_1 + X_2$ with $2X_1$.
In $X_1 + X_2$, the two random variables are independent, but in $2X_1$ you are taking the sum of two copies of $X_1$ which can be viewed as "completely dependent."
Side note: another simple way to see that the two are different: $\text{Var}(X_1+X_2)=\text{Var}(X_1) + \text{Var}(X_2) = 2 \text{Var}(X_1)$ by independence, but $\text{Var}(2X_1) = 4\text{Var}(X_1)$.
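A two-line numerical illustration of the side note (any distribution works; gamma draws with arbitrary illustrative parameters are used here):

```python
import numpy as np

rng = np.random.default_rng(5)
x1 = rng.gamma(shape=2.0, scale=3.0, size=500_000)
x2 = rng.gamma(shape=2.0, scale=3.0, size=500_000)   # an independent copy

print((x1 + x2).var(), 2 * x1.var())   # sum of independent copies: about 2 Var(X_1)
print((2 * x1).var(), 4 * x1.var())    # doubling one copy: about 4 Var(X_1)
```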