Let $S_i = X_1 + \ldots + X_i$.
$$F_{(S_1, ..., S_n)}(x_1, \ldots, x_n) = \\
\int_{-\infty}^{x_1} f_{X_1}(\tau_1)
\int_{-\infty}^{x_2 - \tau_1} f_{X_2}(\tau_2) \cdots
\int_{-\infty}^{x_n - \tau_1 - ... - \tau_{n - 1}} f_{X_n}(\tau_n)
\, d\tau_n \cdots d\tau_1, \\
\frac {\partial^n} {\partial x_n \cdots \partial x_1}
F_{(S_1, ..., S_n)}(x_1, \ldots, x_n) = \\
f_{X_1}(x_1) \frac {\partial^{n - 1}} {\partial x_n \cdots \partial x_2}
\int_{-\infty}^{x_2 - x_1} f_{X_2}(\tau_2) \cdots
\int_{-\infty}^{x_n - x_1 - \tau_2 - ... - \tau_{n - 1}} f_{X_n}(\tau_n)
\, d\tau_n \cdots d\tau_2 = \ldots = \\
f_{X_1}(x_1) f_{X_2}(x_2 - x_1) \cdots f_{X_n}(x_n - x_{n - 1}).$$
For $f_{X_i}(x) = \lambda e^{-\lambda x} [0 < x]$, this gives
$$f_{(S_1, ..., S_n)}(x_1, \ldots, x_n) =
\lambda^n e^{-\lambda x_n} [0 < x_1 < \ldots < x_n].$$
Next,
$$f_{S_n}(x) =
\mathcal L^{-1} {\left[ \left(
\frac \lambda {p + \lambda} \right)^{\!n} \right]} =
\lambda^n e^{-\lambda x} \mathcal L^{-1}[p^{-n}] =
\frac {\lambda^n} {(n - 1)!} x^{n - 1} e^{-\lambda x} \, [0 < x], \\
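This Gamma law for $S_n$ is easy to verify by simulation. A minimal sketch (the values of $n$, $\lambda$ below are arbitrary illustrative choices):

```python
import numpy as np
from scipy import stats

# Monte Carlo check: the sum of n i.i.d. Exponential(lambda) variables
# should follow Gamma(n, rate=lambda), i.e. scipy's gamma distribution
# with shape a=n and scale=1/lambda.
rng = np.random.default_rng(0)
n, lam, N = 5, 2.0, 100_000

samples = rng.exponential(scale=1 / lam, size=(N, n)).sum(axis=1)
ks = stats.kstest(samples, stats.gamma(a=n, scale=1 / lam).cdf)
print(ks.statistic)  # small: empirical and theoretical cdfs agree
```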
f_{(S_1, ..., S_{n - 1}) \mid S_n = t}(x_1, \ldots, x_{n - 1}) =
\frac {f_{(S_1, ..., S_n)}(x_1, \ldots, x_{n - 1}, t)} {f_{S_n}(t)} = \\
\frac {(n - 1)!} {t^{n - 1}} \, [0 < x_1 < \ldots < x_{n - 1} < t],$$
which is the same as the pdf of the order statistic $(Y_1^*, \ldots, Y_{n - 1}^*)$ of $n - 1$ i.i.d. uniform variables on $(0, t)$.
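Equivalently, the normalized partial sums $S_i / S_n$ behave like the order statistics of $n - 1$ i.i.d. $\mathrm{Uniform}(0, 1)$ variables; in particular the first marginal $S_1 / S_n$ should be $\mathrm{Beta}(1, n - 1)$. A quick simulation confirms this (parameter values are arbitrary):

```python
import numpy as np
from scipy import stats

# Conditionally on S_n, the vector (S_1/S_n, ..., S_{n-1}/S_n) has the law
# of n-1 sorted Uniform(0,1) variables; we check the first marginal,
# S_1/S_n ~ Beta(1, n-1), with a Kolmogorov-Smirnov test.
rng = np.random.default_rng(1)
n, lam, N = 6, 1.5, 100_000

gaps = rng.exponential(scale=1 / lam, size=(N, n))
s = gaps.cumsum(axis=1)        # partial sums S_1, ..., S_n per row
ratio = s[:, 0] / s[:, -1]     # S_1 / S_n

ks = stats.kstest(ratio, stats.beta(1, n - 1).cdf)
print(ks.statistic)
```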
To solve your problem, recall that the posterior is
$$\pi(\theta|\mathbf{x})\propto \pi(\theta)\cdot p(\mathbf{x}|\theta).$$
So first derive the likelihood $p(\mathbf{x}|\theta)$, then multiply the (given) prior by the likelihood; you will find (the kernel of) another Gamma density with different (posterior) parameters.
Hint: when doing the calculations, discard any quantity not depending on $\theta$: from a Bayesian point of view these are constants, absorbed into the normalizing constant.
(1) likelihood
Simply multiply the $n$ densities, obtaining
$$ p(\mathbf{x}|\theta)\propto \theta^n e^{-\theta \Sigma_i|x_i|}$$
(note that the factor $2$ in the denominator was dropped, since it does not depend on the parameter).
(2) Prior
$$\pi(\theta)\propto \theta^{\alpha-1} e^{-\beta\theta}$$
(same observation as in (1))
(3) Posterior
$$\pi(\theta|\mathbf{x})\propto \theta^{(\alpha+n)-1}\cdot e^{-\theta(\beta+\Sigma_i|x_i|)} $$
We immediately recognize the posterior as a Gamma distribution with updated parameters:
$$\theta|\mathbf{x}\sim \text{Gamma}(\alpha+n;\beta+\Sigma_i|x_i|).$$
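The conjugate update derived above can be packaged in a few lines (the function name and the numbers below are illustrative, not part of the original derivation):

```python
def gamma_posterior(alpha, beta, xs):
    """Return the posterior (shape, rate) given a Gamma(alpha, beta) prior
    on theta and observations xs with likelihood theta/2 * exp(-theta*|x|):
    posterior shape = alpha + n, posterior rate = beta + sum(|x_i|)."""
    n = len(xs)
    return alpha + n, beta + sum(abs(x) for x in xs)

# Hypothetical prior and data, purely for illustration:
a_post, b_post = gamma_posterior(alpha=2.0, beta=1.0, xs=[0.3, -1.2, 0.7])
print(a_post, b_post)  # shape 2 + 3 = 5, rate 1 + 2.2 = 3.2
```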
Very easy: if (as it is) $Y=\Sigma_i X_i\sim \text{Gamma}(n;\theta)$, then $\frac{1}{Y}\sim \text{Inverse Gamma}(n;\theta)$;
thus the law of $\frac{n}{Y}$ can be derived immediately by a simple scale transformation.
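As a sanity check, one can simulate this (parameter values are arbitrary). If $Y \sim \text{Gamma}(n;\theta)$ (rate parametrization), then $\frac{1}{Y}$ is Inverse-Gamma with shape $n$ and scale $\theta$, and scaling by $n$ gives $\frac{n}{Y} \sim \text{Inverse-Gamma}(n;\, n\theta)$:

```python
import numpy as np
from scipy import stats

# If Y ~ Gamma(shape=n, rate=theta), then 1/Y ~ Inverse-Gamma(n, scale=theta),
# and n/Y is the same law rescaled: Inverse-Gamma(n, scale=n*theta).
rng = np.random.default_rng(2)
n, theta, N = 7, 0.5, 100_000

y = rng.gamma(shape=n, scale=1 / theta, size=N)  # rate theta = scale 1/theta
ks = stats.kstest(n / y, stats.invgamma(n, scale=n * theta).cdf)
print(ks.statistic)
```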