If $X_i \sim \operatorname{Gamma}(\alpha,\beta)$, where $\alpha$ is the shape parameter and $\beta$ is the scale parameter, then
$$ \mathbb{E}\left[ X_i \right] = \alpha \beta \quad \quad \mbox{and} \quad \quad \mathbb{V}\mbox{ar}\left[ X_i \right] = \alpha \beta^2 $$
From the properties of the gamma distribution
$$ \bar{X} \sim \operatorname{Gamma}\left(n \alpha, \beta/n \right) $$
which means
$$ \mathbb{E}\left[ \bar{X}\right] = \alpha\beta \quad \quad \mbox{and} \quad \quad \mathbb{V}\mbox{ar}\left[\bar{X}\right] = \alpha \beta^2/n $$
Then for
$$ Y_i | X_i \sim \operatorname{Gamma}\left(\alpha, \beta X_i \right) $$
$$ \mathbb{E}\left[ Y_i | X_i \right] =\alpha \beta X_i \quad \quad \mbox{and} \quad \quad \mathbb{V}\mbox{ar}\left[Y_i | X_i \right] = \alpha (\beta X_i )^2 $$
From the law of total expectation we have
\begin{equation}
\begin{split}
\mathbb{E}\left[\frac{\bar{Y}}{\bar{X}}\right]&=
\mathbb{E}\left[ \mathbb{E}\left[ \frac{\bar{Y}}{\bar{X}} \,\middle|\, X_1, \ldots, X_n \right] \right] \\
&=
\mathbb{E}\left[ \frac{1}{\bar{X}} \frac{1}{n} \sum_{i=1}^n\mathbb{E} [ Y_i \big| X_1, \ldots, X_n ] \right] \\
&=
\mathbb{E}\left[ \frac{1}{\bar{X}} \frac{1}{n} \sum_{i=1}^n\mathbb{E}[ Y_i \big| X_i ] \right] \\
& = \mathbb{E}\left[ \frac{1}{\bar{X}} \frac{1}{n} \sum_{i=1}^n \alpha \beta X_i \right] \\
& = \alpha \beta \mathbb{E}\left[ \frac{1}{\bar{X}} \frac{1}{n} \sum_{i=1}^n X_i \right] \\
& = \alpha \beta \mathbb{E}\left[ \frac{1}{\bar{X}} \bar{X} \right] \\
& = \alpha \beta \mathbb{E}\left[ 1 \right] \\
& = \alpha \beta
\end{split}
\end{equation}
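The identity $\mathbb{E}[\bar{Y}/\bar{X}] = \alpha\beta$ is easy to check by simulation. A minimal Monte Carlo sketch, using only the standard library (the values $\alpha = 2$, $\beta = 3$, $n = 5$ are illustrative choices, not from the derivation above):

```python
import random
from statistics import mean

random.seed(0)
alpha, beta, n = 2.0, 3.0, 5   # illustrative shape, scale, and sample size
reps = 100_000

ratios = []
for _ in range(reps):
    # X_i ~ Gamma(alpha, scale=beta); gammavariate takes (shape, scale)
    xs = [random.gammavariate(alpha, beta) for _ in range(n)]
    # Y_i | X_i ~ Gamma(alpha, scale=beta * X_i)
    ys = [random.gammavariate(alpha, beta * x) for x in xs]
    ratios.append(mean(ys) / mean(xs))

print(mean(ratios))   # close to alpha * beta = 6.0
```

The Monte Carlo average of $\bar{Y}/\bar{X}$ settles near $\alpha\beta$, as the tower-property argument predicts.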
From the law of total variance we have
\begin{equation*}
\begin{split}
\mathbb{V}\mbox{ar}\left[\frac{\bar{Y}}{\bar{X}}\right] &=
\mathbb{V}\mbox{ar}\left[ \mathbb{E}\left[ \frac{\bar{Y}}{\bar{X}} \,\middle|\, X_1, \ldots, X_n \right] \right] + \mathbb{E}\left[ \mathbb{V}\mbox{ar} \left[ \frac{\bar{Y}}{\bar{X}} \,\middle|\, X_1, \ldots, X_n \right] \right] \\
&=
\mathbb{V}\mbox{ar}\left[ \frac{1}{\bar{X} } \frac{1}{n} \mathbb{E}\left[ \sum_{i=1}^n Y_i \,\middle|\, X_1, \ldots, X_n \right] \right] + \mathbb{E}\left[ \frac{1}{\bar{X}^2 } \frac{1}{n^2} \mathbb{V}\mbox{ar} \left[ \sum_{i=1}^n Y_i \,\middle|\, X_1, \ldots, X_n \right] \right] \\
&=
\mathbb{V}\mbox{ar}\left[ \frac{1}{\bar{X} } \frac{1}{n} \sum_{i=1}^n \mathbb{E}\left[ Y_i\big| X_i\right] \right] + \mathbb{E}\left[ \frac{1}{\bar{X}^2 } \frac{1}{n^2} \sum_{i=1}^n \mathbb{V}\mbox{ar} [ Y_i \big| X_i] \right] \\
&= \mathbb{V}\mbox{ar}\left[ \frac{1}{\bar{X} } \frac{1}{n} \sum_{i=1}^n \alpha \beta X_i \right] +\mathbb{E}\left[ \frac{1}{\bar{X}^2 } \frac{1}{n^2} \sum_{i=1}^n \alpha (\beta X_i )^2 \right] \\
&= \alpha^2 \beta^2 \mathbb{V}\mbox{ar}\left[ \frac{1}{\bar{X} } \bar{X} \right] +\mathbb{E}\left[ \frac{n^2}{ (\sum_{i=1}^n X_i)^2 } \frac{\alpha \beta^2}{n^2} \sum_{i=1}^n X_i^2 \right] \\
&= \alpha^2 \beta^2 \mathbb{V}\mbox{ar}\left[ 1 \right] + \alpha \beta^2 \mathbb{E}\left[ \frac{1}{ (\sum_{i=1}^n X_i)^2 } \sum_{i=1}^n X_i^2 \right] \\
&= \alpha \beta^2 \mathbb{E}\left[ \frac{ \sum_{i=1}^n X_i^2 }{ \left(\sum_{i=1}^n X_i\right)^2 } \right] \\
&= \alpha \beta^2 \, \frac{\alpha + 1}{n\alpha + 1},
\end{split}
\end{equation*}
where the last step uses the fact that $W_i = X_i / \sum_{j=1}^n X_j$ follows a $\operatorname{Dirichlet}(\alpha, \ldots, \alpha)$ distribution, so that $\mathbb{E}\left[W_i^2\right] = \mathbb{V}\mbox{ar}\left[W_i\right] + 1/n^2 = (\alpha+1)/\left(n(n\alpha+1)\right)$ for each $i$.
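The remaining expectation has a closed form: assuming the standard representation $X_i/\sum_j X_j \sim \operatorname{Dirichlet}(\alpha,\ldots,\alpha)$ for iid gammas with a common scale, $\mathbb{E}\big[\sum_i X_i^2 / (\sum_i X_i)^2\big] = (\alpha+1)/(n\alpha+1)$. A minimal Monte Carlo sketch to check this (with illustrative values $\alpha = 2$, $\beta = 3$, $n = 5$):

```python
import random

random.seed(1)
alpha, beta, n = 2.0, 3.0, 5   # illustrative shape, scale, and sample size
reps = 100_000

# Estimate E[ sum X_i^2 / (sum X_i)^2 ] for iid Gamma(alpha, scale=beta) draws.
acc = 0.0
for _ in range(reps):
    xs = [random.gammavariate(alpha, beta) for _ in range(n)]
    s = sum(xs)
    acc += sum(x * x for x in xs) / (s * s)

estimate = acc / reps
closed_form = (alpha + 1) / (n * alpha + 1)   # from the Dirichlet representation
print(estimate, closed_form)                  # both close to 3/11 ~ 0.2727
```

With these values the variance of the ratio is $\alpha\beta^2(\alpha+1)/(n\alpha+1) = 2 \cdot 9 \cdot 3/11$.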
Don't make this so hard for yourself. Simply compute the kernels rather than explicitly integrating. I will change your notation because $\bar X$ is typically used for the sample mean $$\bar X = \frac{1}{n} \sum_{i=1}^n X_i$$ rather than the sample total. Then
$$p(\lambda \mid X, \alpha, \beta) \propto \lambda^n e^{-\lambda n \bar X} \lambda^{\alpha - 1} e^{-\beta \lambda} = \lambda^{n + \alpha - 1} e^{-(n\bar X + \beta)\lambda}$$ which is the kernel of a gamma density with posterior shape hyperparameter $\alpha^* = n + \alpha$ and rate hyperparameter $\beta^* = n \bar X + \beta$, which agrees with your computation (keeping in mind my $\bar X$ differs from yours by a factor of $1/n$). In performing the computation, we discarded any factors that are not functions of $\lambda$. You inadvertently did this as well: your computation produced an unnormalized posterior that does not integrate to unity; i.e., your expression for $p(\lambda \mid X, \alpha, \beta)$ is not a proper density, because the constant factors (with respect to $\lambda$) it carries are not the normalizing factor required for a gamma density with shape $\alpha^*$ and rate $\beta^*$.
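This conjugate update is easy to sanity-check numerically: normalize the likelihood-times-prior on a grid and compare the resulting posterior mean with $\alpha^*/\beta^* = (n+\alpha)/(n\bar X + \beta)$. A sketch with hypothetical data and hyperparameters (all values are made up for illustration):

```python
import math

alpha, beta = 3.0, 2.0            # hypothetical prior shape and rate
data = [0.8, 1.5, 0.3, 2.2]       # hypothetical exponential observations
n, total = len(data), sum(data)   # total = n * xbar

# Unnormalized posterior: exponential likelihood times gamma prior kernel
def unnorm(lam):
    return lam**n * math.exp(-lam * total) * lam**(alpha - 1) * math.exp(-beta * lam)

# Normalize numerically on a grid and compute the posterior mean,
# then compare against the conjugate result (n + alpha) / (n*xbar + beta).
h = 1e-4
grid = [h * k for k in range(1, 200_000)]
w = [unnorm(lam) for lam in grid]
z = sum(w) * h
post_mean = sum(lam * wi for lam, wi in zip(grid, w)) * h / z
print(post_mean, (n + alpha) / (total + beta))   # both close to 7/6.8
```

The grid-based posterior mean matches $\alpha^*/\beta^*$, confirming the shape and rate read off from the kernel.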
Next, if we know that the original gamma prior has a normalizing factor of $\beta^\alpha/\Gamma(\alpha)$, since $$p(\lambda \mid \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)} K(\lambda \mid \alpha, \beta)$$ where $K$ is the kernel, and the posterior is gamma with hyperparameters $\alpha^*$, $\beta^*$, it immediately follows that the marginal likelihood is the ratio of the prior normalizing factor to the posterior normalizing factor; i.e., $$p(X \mid \alpha, \beta) = \frac{\beta^\alpha/\Gamma(\alpha)}{(\beta^*)^{\alpha^*}/\Gamma(\alpha^*)} = \frac{\beta^\alpha \Gamma(n+\alpha)}{(n\bar X + \beta)^{n + \alpha} \Gamma(\alpha)},$$ because of Bayes' rule: $$p(\lambda \mid X, \alpha, \beta) = \frac{p(X \mid \lambda)p(\lambda \mid \alpha,\beta)}{p(X \mid \alpha, \beta)}.$$ No integration is required. It is important to note that $p(X \mid \alpha, \beta)$ is multivariate with respect to the sample $X = (X_1, \ldots, X_n)$, and thus is not itself gamma distributed.
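The ratio-of-normalizers formula can likewise be verified by Monte Carlo, since the marginal likelihood is the prior average of the likelihood, $p(X \mid \alpha, \beta) = \mathbb{E}_{\lambda \sim \text{prior}}[p(X \mid \lambda)]$. A sketch with hypothetical exponential data and hyperparameters (all values invented for illustration):

```python
import math, random

random.seed(2)
alpha, beta = 3.0, 2.0          # hypothetical prior shape and RATE
data = [0.8, 1.5, 0.3, 2.2]     # hypothetical exponential observations
n, total = len(data), sum(data) # total = n * xbar

# Closed form: ratio of prior to posterior normalizing factors
closed = (beta**alpha * math.gamma(n + alpha)) / (
    (total + beta)**(n + alpha) * math.gamma(alpha))

# Monte Carlo: average the exponential likelihood over prior draws of lambda
reps = 200_000
acc = 0.0
for _ in range(reps):
    lam = random.gammavariate(alpha, 1.0 / beta)   # gammavariate takes a SCALE, so scale = 1/rate
    acc += lam**n * math.exp(-lam * total)
print(acc / reps, closed)   # the two values agree
```

No numerical integration is needed; averaging the likelihood over prior draws reproduces the closed-form marginal likelihood.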
For the posterior predictive distribution, we apply the same principles as described above. First, by Bayes' rule, $$p(x \mid X , \alpha, \beta) = \frac{p(x \mid \lambda) p(\lambda \mid X, \alpha, \beta)}{p(\lambda \mid X, x, \alpha, \beta)}$$ where the denominator is the posterior given the sample $X$ and the new observation $x$, and the numerator is the likelihood; thus the RHS is again the ratio of normalizing factors, but this time the normalizing factors correspond to the posterior density in the numerator, and the posterior plus new observation in the denominator: $$p(x \mid X, \alpha, \beta) = \frac{(\beta^*)^{\alpha^*}/\Gamma(\alpha^*)}{(\beta')^{\alpha'}/\Gamma(\alpha')},$$ where $$\alpha' = \alpha^* + 1 = n+\alpha + 1,$$ and $$\beta' = \beta^* + x = n\bar X + \beta + x.$$ Note that the posterior predictive is a univariate density in the new observation $x$, hence it is instructive to consider its kernel: $$p(x \mid X, \alpha, \beta) \propto \frac{1}{(\beta')^{\alpha'}} = (n \bar X + \beta + x)^{-\alpha'}$$ which is proportional to a Pareto (Type II) density with minimum value parameter $0$, scale parameter $\beta^* = n \bar X + \beta$ and shape parameter $\alpha^* = n + \alpha$.
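The claimed Lomax (Pareto Type II) form of the posterior predictive can also be checked numerically, since $p(x \mid X, \alpha, \beta) = \mathbb{E}\left[\lambda e^{-\lambda x}\right]$ with $\lambda$ drawn from the posterior. A sketch with hypothetical posterior hyperparameters (the specific numbers are illustrative):

```python
import math, random

random.seed(3)
a_star, b_star = 7.0, 6.8   # hypothetical posterior shape and rate
x_new = 1.0                 # a hypothetical new observation

# Closed-form predictive density at x_new: a_star * b_star^a_star / (b_star + x)^(a_star + 1),
# i.e., a Lomax density with shape a_star and scale b_star
closed = a_star * b_star**a_star / (b_star + x_new)**(a_star + 1.0)

# Monte Carlo: average the exponential density over posterior draws of lambda
reps = 200_000
acc = 0.0
for _ in range(reps):
    lam = random.gammavariate(a_star, 1.0 / b_star)  # scale = 1/rate
    acc += lam * math.exp(-lam * x_new)
print(acc / reps, closed)   # the two values agree
```

Averaging $\lambda e^{-\lambda x}$ over posterior draws reproduces the Lomax density, consistent with the kernel argument above.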
Best Answer
The posterior is the following measure multiplied by a normalizing constant: $$ \overbrace{(\ell^{x_1} e^{-\ell}) \cdots (\ell^{x_n} e^{-\ell})}^\text{likelihood} {} \cdot {} \overbrace{ (\beta\ell)^{\alpha-1} e^{-\beta\ell} (\beta\,d\ell) }^\text{gamma prior} $$ I've omitted the $x_i!$ factors in the denominators since they don't depend on $\ell,$ and similarly the $\Gamma(\alpha)$ in the denominator of the prior. The above is proportional, as a function of $\ell,$ to $$ \propto \quad \ell^{x_1+\cdots +x_n +\alpha-1} e^{-(n+\beta)\ell} \, d\ell. \tag 1 $$ This is a gamma distribution, with $x_1+\cdots+x_n+\alpha$ appearing in the posterior where $\alpha$ appeared in the prior, and with $n+\beta$ appearing in the posterior where $\beta$ appeared in the prior.
The expression you're trying to simplify looks as if it would appear in an attempt to find the normalizing constant. I wouldn't worry about that until after you've reached line $(1)$ above. In line $(1),$ you've got a gamma distribution but the normalizing constant is not specified. But it is a known function of the two parameters $x_1+\cdots+x_n+\alpha$ and $n+\beta.$
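A quick way to confirm the parameters read off from line $(1)$ is to check that the unnormalized posterior is a constant multiple of a $\operatorname{Gamma}(x_1+\cdots+x_n+\alpha,\; n+\beta)$ density, i.e., that their ratio does not depend on $\ell$. A sketch with hypothetical counts and hyperparameters (all values invented for illustration):

```python
import math

alpha, beta = 2.0, 1.5              # hypothetical prior shape and rate
data = [3, 0, 2, 5, 1]              # hypothetical Poisson counts
n, s = len(data), sum(data)

def unnorm_posterior(ell):
    # likelihood * prior kernel, as in line (1): ell^(s + alpha - 1) * e^{-(n + beta) ell}
    return ell**(s + alpha - 1) * math.exp(-(n + beta) * ell)

def gamma_pdf(ell, shape, rate):
    return rate**shape * ell**(shape - 1) * math.exp(-rate * ell) / math.gamma(shape)

# If the posterior really is Gamma(s + alpha, n + beta), this ratio
# is the same constant (the normalizing constant) at every ell.
ratios = [unnorm_posterior(e) / gamma_pdf(e, s + alpha, n + beta)
          for e in (0.5, 1.0, 2.0, 3.0)]
print(ratios)   # all four values equal
```

The constant ratio at every test point confirms the posterior shape $x_1+\cdots+x_n+\alpha$ and rate $n+\beta$; its common value is exactly the normalizing constant the answer says you need not compute up front.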