Well, if you find a statistic $T=t(S)$ that is a function of $S=\sum_{i=1}^{n} X_i$ and which is an unbiased estimator of $\lambda$, you've got your UMVUE by the Lehmann-Scheffé theorem, since $S$ is a complete sufficient statistic for $\lambda$.
For $T$ to be an unbiased estimator we need $E[t(S)]=\lambda$. For $X\sim \text{Exp}(\lambda)$, parameterized so that $E[X]=\lambda$, we have in general $E[\bar X]=E[X]$.
So $\bar X$ is an unbiased estimator of $\lambda$, and because $\bar X$ is a one-to-one function of $S$ (namely $\bar X=S/n$), we're done.
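As a quick sanity check (not part of the argument), a short simulation under the parameterization above, where $E[X]=\lambda$; the values of $\lambda$, $n$, and the replication count are arbitrary choices for illustration:

```python
import numpy as np

# Monte Carlo check: with X_i ~ Exp (mean lam), the sample mean
# should be (approximately) unbiased for lam.
rng = np.random.default_rng(0)
lam = 2.0            # true mean, E[X] = lam (assumed parameterization)
n, reps = 10, 100_000

samples = rng.exponential(scale=lam, size=(reps, n))
xbar = samples.mean(axis=1)   # one value of the estimator per replication
print(xbar.mean())            # ≈ lam
```

The average of the replicated estimates settles near $\lambda$, consistent with unbiasedness.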
Suppose $\theta$ is the unknown quantity of interest. A necessary and sufficient condition for an unbiased estimator (assuming one exists) of some parametric function $g(\theta)$ to be the UMVUE is that it be uncorrelated with every unbiased estimator of zero (assuming, of course, that the estimator has finite second moment). We can use this result to prove uniqueness of the UMVUE whenever it exists.
If possible, suppose $T_1$ and $T_2$ are both UMVUEs of $g(\theta)$.
Then $T_1-T_2$ is an unbiased estimator of zero, so that by the result above we have
$$\operatorname{Cov}_{\theta}(T_1,T_1-T_2)=0\quad,\,\forall\,\theta$$
Or, $$\operatorname{Var}_{\theta}(T_1)=\operatorname{Cov}_{\theta}(T_1,T_2)\quad,\,\forall\,\theta$$
Therefore, $$\operatorname{Corr}_{\theta}(T_1,T_2)=\frac{\operatorname{Cov}_{\theta}(T_1,T_2)}{\sqrt{\operatorname{Var}_{\theta}(T_1)}\sqrt{\operatorname{Var}_{\theta}(T_2)}}=\sqrt\frac{\operatorname{Var}_{\theta}(T_1)}{\operatorname{Var}_{\theta}(T_2)}\quad,\,\forall\,\theta$$
Since $T_1$ and $T_2$ have the same variance by assumption, the correlation between $T_1$ and $T_2$ is exactly $1$. In other words, $T_1$ and $T_2$ are linearly related, i.e. for some constants $a$ and $b\ne 0$, $$T_1=a+bT_2 \quad,\text{ a.e. }$$
Taking the variance of both sides of the above equation gives $b^2=1$, so $b=1$ ($b=-1$ is ruled out because taking expectations would then give $T_1=2g(\theta)-T_2$ a.e., which cannot hold since $T_1,T_2$ do not depend on $\theta$). So $T_1=a+T_2$ a.e., and taking expectations gives $a=0$. Thus, $$T_1=T_2\quad,\text{ a.e. }$$
Assuming independence between the $X_i$ and the $Y_j$, the expectation
$$\mathbb{E}\left[\frac{\overline{X}_n}{\overline{Y}_m} \right]$$
(I use $n,m$ instead of $n_1,n_2$ to simplify the notation)
is easy to compute:
$$\mathbb{E}\left[\frac{\overline{X}_n}{\overline{Y}_m} \right]=\frac{m}{n}\,\mathbb{E}\left[\frac{\sum_i X_i}{\sum_j Y_j} \right]=\frac{m}{n}\,\mathbb{E}\left[\sum_i X_i\cdot\frac{1}{\sum_j Y_j} \right]=\frac{m}{n}\,\mathbb{E}\left[\sum_i X_i\right]\cdot\mathbb{E}\left[\frac{1}{\sum_j Y_j} \right],$$
where the last equality uses the independence of the two sums,
and, as is known,
$$\frac{1}{\sum_j Y_j}\sim \text{Inverse Gamma},$$
since the sum of i.i.d. Gamma (in particular, exponential) random variables is again Gamma distributed, and the reciprocal of a Gamma random variable follows an Inverse Gamma distribution, whose mean is available in closed form.
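A numerical illustration of this computation, under the (hypothetical) assumption that the $X_i$ are i.i.d. exponential with mean $\lambda_X$ and the $Y_j$ are i.i.d. exponential with mean $\lambda_Y$: then $\sum_j Y_j \sim \text{Gamma}(m, \text{scale}=\lambda_Y)$, so $\mathbb{E}[1/\sum_j Y_j] = 1/(\lambda_Y(m-1))$ for $m>1$, and the expectation above reduces to $(\lambda_X/\lambda_Y)\cdot m/(m-1)$. All parameter values below are arbitrary choices for illustration:

```python
import numpy as np

# Monte Carlo check of E[Xbar_n / Ybar_m] = (lam_x/lam_y) * m/(m-1),
# assuming X_i ~ Exp(mean lam_x), Y_j ~ Exp(mean lam_y), all independent.
rng = np.random.default_rng(1)
lam_x, lam_y = 1.5, 3.0
n, m, reps = 5, 8, 200_000

x = rng.exponential(scale=lam_x, size=(reps, n)).mean(axis=1)  # Xbar_n
y = rng.exponential(scale=lam_y, size=(reps, m)).mean(axis=1)  # Ybar_m

print((x / y).mean())                     # simulated expectation
print((lam_x / lam_y) * m / (m - 1))      # closed-form value, valid for m > 1
```

Note that the ratio is biased for $\lambda_X/\lambda_Y$ by the factor $m/(m-1)$, which vanishes only as $m\to\infty$.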