Define a sequence of prior distributions, $\pi_n = Ga(\lambda|a_n,b_n)$ (shape $a_n$, rate $b_n$), with $a_n = \alpha/n$ and $b_n = \beta/n$.
For a Poisson observation $x$, the posterior is $Ga(\lambda|a_n+x,\,b_n+1)$, so the Bayes estimator under squared error loss is the posterior mean, $\delta_n = \dfrac{a_n+x}{b_n+1}$, and the integrated risk is
\begin{gather}
r_n = \int^\infty_0 (\lambda-\delta_n)^2 Poi(x|\lambda)Ga(\lambda|a_n,b_n)d\lambda
\end{gather}
which, writing $E_n$ and $V_n$ for the posterior mean and variance (so $E_n\lambda = \delta_n$ and $V_n\lambda = \delta_n/(b_n+1)$), reduces to
\begin{align}
r_n &= \delta_n^2 - 2\delta_n E_n\lambda + E_n\lambda^2 \\
&= \delta_n^2 - 2\delta_n E_n\lambda + V_n\lambda + (E_n\lambda)^2 \\
&= \delta_n^2 - 2\delta_n^2 + \frac{\delta_n}{b_n+1} + \delta_n^2 \\
&= \frac{\delta_n}{b_n+1}.
\end{align}
The limiting Bayes estimator is
\begin{gather}
\delta = \lim_{n\rightarrow\infty}\delta_n=\lim_{n\rightarrow\infty}\frac{a_n+x}{b_n+1} = x
\end{gather}
and the limiting integrated risk is
\begin{gather}
r = \lim_{n\rightarrow\infty}\frac{\delta_n}{b_n+1}=\lim_{n\rightarrow\infty}\frac{a_n+x}{(b_n+1)^2}=x.
\end{gather}
The limiting rule $\delta$ is Bayes with respect to the improper prior $\pi_\infty$. Because the integrated risk $r$ is constant for all $\lambda$, the estimator $\delta = x$ is minimax.
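As a numerical sanity check, here is a short Python sketch (the hyperparameters $\alpha=2$, $\beta=3$ and the observation $x=5$ are arbitrary choices of mine, not part of the derivation) confirming that the Bayes estimator and its posterior expected loss both tend to $x$:

```python
# Check the Gamma-Poisson limiting-Bayes computation: for a Gamma(a_n, b_n)
# prior (shape/rate) and an observed Poisson count x, the posterior is
# Gamma(a_n + x, b_n + 1); the Bayes estimator is its mean and the posterior
# expected squared-error loss is its variance.
from scipy.stats import gamma

alpha, beta, x = 2.0, 3.0, 5  # arbitrary hyperparameters and observation

for n in (1, 10, 100, 1000):
    a_n, b_n = alpha / n, beta / n
    post = gamma(a=a_n + x, scale=1.0 / (b_n + 1))  # posterior Gamma(a_n+x, b_n+1)
    delta_n = post.mean()                           # (a_n + x) / (b_n + 1)
    risk_n = post.var()                             # delta_n / (b_n + 1)
    print(f"n={n:5d}  delta_n={delta_n:.4f}  risk_n={risk_n:.4f}")
# Both columns approach x = 5 as n grows, matching delta = x and r = x.
```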
To find a Bayesian estimator for a given loss function, we need to minimize the posterior expected loss $E[\operatorname{loss}(\theta, \hat{\theta})\mid x]$, i.e., solve the optimization problem $\min\limits_{\hat{\theta}}\int\limits_{0}^{\infty}\operatorname{loss}(\theta,\hat{\theta})\,p(\theta|x)\,d\theta$.
(1) For the squared error loss $\operatorname{loss}(\hat{\theta}, \theta)=(\theta-\hat{\theta})^2$, we have to solve $\min\limits_{\hat{\theta}}\int\limits_{0}^{\infty}(\theta-\hat{\theta})^2\,p(\theta|x)\,d\theta$.
First we need to compute the full posterior, given
the likelihood $L(\theta|x)=p(x|\theta)=\prod\limits_{i=1}^{n}\dfrac{1}{\Gamma(1)\theta}e^{-x_i/\theta}=\dfrac{1}{\theta^n}e^{-\sum\limits_{i=1}^{n}x_i/{\theta}}=\dfrac{e^{-n\bar{x}/\theta}}{\theta^n}$, where $\bar{x}=\dfrac{\sum\limits_{i=1}^{n}x_i}{n}$.
The Fisher information is $I(\theta)=-E[l^{\prime\prime}(\theta)]=-E\left[-2\dfrac{\sum\limits_{i=1}^{n}{x_i}}{\theta^3}+\dfrac{n}{\theta^2}\right]=\dfrac{n}{\theta^2}$, using $E[X_i]=\theta$,
where we have the loglikelihood $l=-\dfrac{n\bar{x}}{\theta}-n\ln\theta$.
Hence, the prior is $p(\theta)=\dfrac{\sqrt{n}}{\theta}$ (Jeffreys prior: the square root of the Fisher information $I(\theta)$).
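For anyone who wants to verify the information calculation, here is a minimal symbolic sketch with sympy (the variable names are my own, not part of the original derivation):

```python
# Symbolic check that the Fisher information of the exponential scale model
# x ~ (1/theta) exp(-x/theta) is n/theta^2, so Jeffreys' prior is ~ 1/theta.
import sympy as sp

theta, x = sp.symbols("theta x", positive=True)
n = sp.Symbol("n", positive=True)

l1 = sp.log(sp.exp(-x / theta) / theta)  # log-likelihood of one observation
score2 = sp.diff(l1, theta, 2)           # second derivative in theta

# l'' is linear in x, so taking the expectation is just substituting E[X] = theta
I1 = -sp.simplify(score2.subs(x, theta))
print(sp.simplify(n * I1))  # -> n/theta**2
```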
$\therefore p(x)$
$=\int\limits_{0}^{\infty}p(x|\theta)p(\theta)d\theta=\sqrt{n}\int\limits_{0}^{\infty}\dfrac{e^{-n\bar{x}/\theta}}{\theta^{n+1}}d\theta$
$=\dfrac{\sqrt{n}}{(n\bar{x})^n}\int\limits_{0}^{\infty}e^{-y}y^{n-1}dy$, with the substitution $y=\dfrac{n\bar{x}}{\theta}$
$\implies p(x)=\dfrac{\Gamma(n)\sqrt{n}}{(n\bar{x})^n}$
By Bayes' theorem, the posterior is $p(\theta|x)=\dfrac{p(x|\theta)p(\theta)}{p(x)}=\dfrac{(n\bar{x})^ne^{-n\bar{x}/\theta}}{\Gamma(n)\theta^{n+1}}$, an inverse-gamma density with shape $n$ and scale $n\bar{x}$.
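As a quick check that this really is the inverse-gamma $IG(n, n\bar{x})$ density, here is a short numerical sketch (the values $n=8$, $\bar{x}=2.5$ are hypothetical):

```python
# Confirm that the posterior density integrates to one and coincides with
# scipy's inverse-gamma IG(n, n*xbar).
import numpy as np
from scipy.stats import invgamma
from scipy.integrate import quad
from math import gamma as gamma_fn

n, xbar = 8, 2.5  # hypothetical sample size and sample mean

def posterior(theta):
    return (n * xbar) ** n * np.exp(-n * xbar / theta) / (gamma_fn(n) * theta ** (n + 1))

total, _ = quad(posterior, 0, np.inf)
print(total)  # ~1.0: the posterior is properly normalized
print(posterior(2.0), invgamma(a=n, scale=n * xbar).pdf(2.0))  # identical values
```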
At the minimum, we have $\dfrac{\partial}{\partial \hat{\theta}}\int\limits_{0}^{\infty}(\theta-\hat{\theta})^2p(\theta|x)d\theta=\int\limits_{0}^{\infty}-2(\theta-\hat{\theta})p(\theta|x)d\theta=0$
$\implies \hat{\theta}_{SEL}=\int\limits_{0}^{\infty}\theta p(\theta|x)d\theta=\int\limits_{0}^{\infty}\dfrac{(n\bar{x}/\theta)^ne^{-n\bar{x}/\theta}}{\Gamma(n)}d\theta=\dfrac{n\bar{x}}{\Gamma(n)}\int\limits_{0}^{\infty}e^{-y}y^{n-2}dy=n\bar{x}\dfrac{\Gamma(n-1)}{\Gamma(n)}$
$\implies \hat{\theta}_{SEL}=\dfrac{n\bar{x}}{n-1}$, since $\Gamma(n)=(n-1)\Gamma(n-1)$ (assuming $n>1$, so the integral converges).
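A minimal simulation sketch (with made-up data from an exponential with $\theta=2$; the seed and sample size are arbitrary) confirming that the posterior mean matches $n\bar{x}/(n-1)$:

```python
# The SEL Bayes estimator is the mean of the IG(n, n*xbar) posterior,
# which should equal n*xbar/(n-1).
import numpy as np
from scipy.stats import invgamma

rng = np.random.default_rng(0)
theta_true, n = 2.0, 50
x = rng.exponential(scale=theta_true, size=n)
xbar = x.mean()

posterior = invgamma(a=n, scale=n * xbar)    # IG(n, n*xbar)
print(posterior.mean(), n * xbar / (n - 1))  # identical: n*xbar/(n-1)
```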
Please double-check the above calculation in case I have made a mistake somewhere, and point it out if so.
We can follow exactly the same steps to compute the Bayesian estimator for (2), with the different loss function $\operatorname{loss}(\hat{\theta},\theta)=(\hat{\theta}/\theta-1)^2$.
Best Answer
Part 1
You want to minimize the posterior expectation of the loss.
For the squared loss, you want to minimize the posterior expectation $E\left[(\hat\theta - \theta)^2 \mid x \right]$, which is achieved by the posterior mean.
Part 2
For the generalized loss $(\hat\theta/\theta - 1)^2 = (\hat\theta - \theta)^2/\theta^2$, you want to minimize $E\left[(\hat\theta - \theta)^2 / \theta^2 \mid x\right]$. This you can compute by considering the integration formula for expectations under gamma densities; the device is easiest to see for a single factor of $1/x$ under a generic $Gamma(\alpha,\beta)$ density:
$$\begin{aligned} E\left[ (a-x)^2/x \right] &= \int_0^\infty \frac{(a-x)^2}{x} \frac{\beta^\alpha}{\Gamma(\alpha)}x^{\alpha-1}e^{-\beta x}\, dx \\ &= \int_0^\infty (a-x)^2 \frac{\beta^\alpha}{\Gamma(\alpha)}x^{\alpha-2}e^{-\beta x}\, dx \\ &= \beta\, \frac{\Gamma(\alpha-1)}{\Gamma(\alpha)} \int_0^\infty (a-x)^2 \frac{\beta^{\alpha-1}}{\Gamma(\alpha-1)}x^{\alpha-2}e^{-\beta x}\, dx \end{aligned}$$
and so you can compute the expectation $E\left[ (a-x)^2/x \right]$ under a gamma density as the expectation $E\left[ (a-x)^2 \right]$ under another gamma density whose parameter $\alpha$ is one less (multiplied by the constant $\beta\, \Gamma(\alpha-1)/\Gamma(\alpha) = \beta/(\alpha-1)$). So, exactly as for the squared loss above, the minimizer is the mean of the gamma distribution with $\alpha$ one less, which happens to be the mode of the original posterior. For the loss here, which carries a factor of $1/\theta^2$, apply the same reduction twice: the estimate is the mean of the gamma distribution with $\alpha$ two less, i.e. $(\alpha-2)/\beta$.
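To illustrate the $\alpha$-shift device numerically, here is a short Monte Carlo sketch (the parameters $\alpha=6$, $\beta=2$ and the seed are arbitrary choices) checking both the one-factor and two-factor cases:

```python
# For X ~ Gamma(alpha, rate=beta), the minimizer of E[(a-X)^2 / X^k] should
# be (alpha - k)/beta, i.e. the mean of the gamma with alpha lowered by k.
import numpy as np
from scipy.optimize import minimize_scalar

alpha, beta = 6.0, 2.0
rng = np.random.default_rng(1)
X = rng.gamma(shape=alpha, scale=1.0 / beta, size=1_000_000)

for k in (1, 2):  # divide the squared loss by X once, then twice
    loss = lambda a, k=k: np.mean((a - X) ** 2 / X ** k)
    a_hat = minimize_scalar(loss, bounds=(0.0, 10.0), method="bounded").x
    print(f"k={k}: numerical argmin = {a_hat:.3f}, (alpha-k)/beta = {(alpha - k) / beta:.3f}")
```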