You have a $U(-\theta,\theta)$ population where $\theta\in\mathbb R^+$.
The joint density of the sample $\mathbf X=(X_1,X_2,\ldots,X_n)$ is
\begin{align}
f_{\theta}(\mathbf x)&=\frac{1}{(2\theta)^n}\mathbf1_{-\theta < x_1, \ldots, x_n < \theta}
\\&=\frac{1}{(2\theta)^n}\mathbf1_{0<|x_1|,\ldots,|x_n|<\theta}
\\&=\frac{1}{(2\theta)^n}\mathbf1_{\max_{1\le i\le n}|x_i|<\theta}
\end{align}
It is clear from the Factorization theorem that a sufficient statistic for $\theta$ is $$T(\mathbf X)=\max_{1\le i\le n}|X_i|$$
One could verify that $|X_i|\sim U(0,\theta)$, so that $P_\theta(T\le t)=\prod_{i=1}^n P_\theta(|X_i|\le t)=(t/\theta)^n$ for $0<t<\theta$, and hence the density of $T$ is $$g_{\theta}(t)=\frac{n}{\theta^n}t^{n-1}\mathbf1_{0<t<\theta}$$
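As a quick numerical sanity check of this density (a sketch, not part of the argument; the values of $n$, $\theta$ and the seed below are arbitrary choices of mine), one can compare the empirical CDF of $T$ against $(t/\theta)^n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta, reps = 5, 2.0, 100_000

# Simulate reps samples of size n from U(-theta, theta) and form T = max|X_i|
x = rng.uniform(-theta, theta, size=(reps, n))
t = np.abs(x).max(axis=1)

for q in (0.5, 1.0, 1.5):
    empirical = (t <= q).mean()        # simulated P(T <= q)
    exact = (q / theta) ** n           # CDF implied by g_theta
    print(f"t = {q}: empirical {empirical:.4f} vs exact {exact:.4f}")
```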
That $T$ is a complete statistic for $\theta$ is well-known.
We simply have to find unbiased estimators of the given parametric functions of $\theta$ based on the complete sufficient statistic; by the Lehmann-Scheffé theorem, these are the UMVUEs.
As the support of the complete sufficient statistic here depends on the parameter $\theta$, unbiased estimators can be directly obtained through differentiation.
Let $h_1(T)$ and $h_2(T)$ be unbiased estimators of $\theta/(1+\theta)$ and $e^{\theta}/\theta$ respectively, based on the complete sufficient statistic $T$.
That is, for all $\theta>0$,
\begin{align}
\qquad\quad\frac{n}{\theta^n}\int_0^{\theta}h_1(t)t^{n-1}\,dt&=\frac{\theta}{1+\theta}
\\\implies \int_0^{\theta}h_1(t)t^{n-1}\,dt &= \frac{\theta^{n+1}}{n(1+\theta)}
\end{align}
Differentiating both sides wrt $\theta$,
\begin{align}
h_1(\theta)\theta^{n-1}&=\frac{\theta^n(n\theta+n+1)}{n(1+\theta)^2}
\\\implies h_1(\theta) &=\frac{\theta(n\theta+n+1)}{n(1+\theta)^2}
\end{align}
Hence, $$h_1(T)=\frac{T(nT+n+1)}{n(1+T)^2}$$
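A minimal Monte Carlo check of unbiasedness for this $h_1$ (my own sketch; the constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta, reps = 5, 0.7, 500_000

# T = max|X_i| from U(-theta, theta) samples
t = np.abs(rng.uniform(-theta, theta, size=(reps, n))).max(axis=1)
h1 = t * (n * t + n + 1) / (n * (1 + t) ** 2)

print(h1.mean(), theta / (1 + theta))  # should agree up to Monte Carlo error
```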
Similarly for the second problem, for all $\theta>0$,
\begin{align}
\qquad\quad\frac{n}{\theta^n}\int_0^{\theta}h_2(t)t^{n-1}\,dt&=\frac{e^\theta}{\theta}
\\\implies \int_0^{\theta}h_2(t)t^{n-1}\,dt &= \frac{\theta^{n-1} e^\theta}{n}
\end{align}
Differentiating both sides wrt $\theta$ yields
\begin{align}
h_2(\theta)\theta^{n-1}&=\frac{e^{\theta}\theta^{n-2}(\theta+n-1)}{n}
\\\implies h_2(\theta) &=\frac{e^{\theta}(\theta+n-1)}{n\theta}
\end{align}
So, $$h_2(T)=\frac{e^{T}(T+n-1)}{nT}$$
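The same kind of check works for $h_2$ (again a sketch with arbitrary constants):

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta, reps = 5, 1.3, 500_000

t = np.abs(rng.uniform(-theta, theta, size=(reps, n))).max(axis=1)
h2 = np.exp(t) * (t + n - 1) / (n * t)

print(h2.mean(), np.exp(theta) / theta)  # should agree up to Monte Carlo error
```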
In my initial answer, the following calculation for the UMVUE was rather unnecessary and complicated. Had the support not depended on the parameter, I might have tried this approach. I am keeping this part in the answer as the somewhat faulty argument can likely be salvaged on further consideration (for $\theta>1$ the series used below diverges on the event $T<1$, so taking expectations term by term is only a formal step):
For $k> -n$, we have
\begin{align}
E_\theta(T^k)&=\frac{n}{\theta^n}\int_0^\theta t^{k+n-1}\,dt\\[8pt]
& = \frac{n\theta^k}{n+k}
\end{align}
This suggests that an unbiased estimator of $\theta^k$ based on $T$ is $$\left(\frac{n+k}{n}\right)T^k$$
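A quick numerical check of this for a few values of $k$ (a sketch; the constants are mine):

```python
import numpy as np

rng = np.random.default_rng(3)
n, theta, reps = 5, 2.0, 500_000
t = np.abs(rng.uniform(-theta, theta, size=(reps, n))).max(axis=1)

for k in (-2, 1, 3):                     # any k > -n works
    estimate = ((n + k) / n) * t ** k    # proposed unbiased estimator of theta^k
    print(k, estimate.mean(), theta ** k)
```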
For the first problem, one could write
\begin{align}
\frac{\theta}{1+\theta}&=
\begin{cases}\left(1+\frac{1}{\theta}\right)^{-1}=1-\frac{1}{\theta}+\frac{1}{\theta^2}-\frac{1}{\theta^3}+\cdots&,\text{ if }\theta>1\\\\\theta(1-\theta+\theta^2-\theta^3+\cdots)&,\text{ if }0<\theta<1\end{cases}
\end{align}
For $0<\theta<1$, we have
$$E_{\theta}\left[\left(\frac{n+1}{n}\right)T-\left(\frac{n+2}{n}\right)T^2+\cdots\right]=\theta-\theta^2+\theta^3-\cdots$$
Or, $$E_{\theta}\left[\sum_{k=1}^\infty(-1)^{k-1}\left(\frac{n+k}{n}\right)T^k\right]=\frac{\theta}{1+\theta}$$
For $\theta>1$,
$$E_{\theta}\left[1-\left(\frac{n-1}{n}\right)\frac{1}{T}+\left(\frac{n-2}{n}\right)\frac{1}{T^2}-\cdots\right]=1-\frac{1}{\theta}+\frac{1}{\theta^2}-\cdots$$
That is, $$E_{\theta}\left[\sum_{k=0}^\infty\left(\frac{n-k}{n}\right)\frac{(-1)^k}{T^k}\right]=\frac{\theta}{1+\theta}$$
Hence by the Lehmann-Scheffé theorem, the UMVUE of $\theta/(1+\theta)$ is
\begin{align}
h_1(T)&=\begin{cases}\displaystyle\sum_{k=1}^\infty(-1)^{k-1}\left(\frac{n+k}{n}\right)T^k&,\text{ if }0<\theta<1\\\\\displaystyle\sum_{k=0}^\infty\left(\frac{n-k}{n}\right)\frac{(-1)^k}{T^k}&,\text{ if }\theta\ge1 \end{cases}
\end{align}
Summing each series (using $\sum_{k\ge 1}kx^k=x/(1-x)^2$ with $x=-T$ and $x=-1/T$ respectively) yields the same closed form in both cases, so for all $\theta>0$,
$$h_1(T)=\frac{T(nT+n+1)}{n(1+T)^2}$$
This matches the estimator obtained earlier by differentiation, and verifying unbiasedness numerically for some values of $n$ confirms it.
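Such a verification can be sketched as follows (a minimal Monte Carlo check; $n$, the seed, and the grid of $\theta$ values on both sides of $1$ are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 5, 500_000

for theta in (0.3, 0.6, 1.5, 4.0):       # values below and above 1
    t = np.abs(rng.uniform(-theta, theta, size=(reps, n))).max(axis=1)
    h1 = t * (n * t + n + 1) / (n * (1 + t) ** 2)
    print(theta, h1.mean(), theta / (1 + theta))  # close for every theta
```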
For the second problem, we can use the power series expansion of $e^\theta$ to obtain
$$E_{\theta}\left[\sum_{k=-1}^{\infty}\left(\frac{n+k}{n}\right)\frac{T^k}{(k+1)!}\right]=\sum_{j=0}^\infty \frac{\theta^{j-1}}{j!}=\frac{e^{\theta}}{\theta}$$
So the UMVUE of $e^{\theta}/\theta$ is
\begin{align}
h_2(T)&=\sum_{k=-1}^{\infty}\left(\frac{n+k}{n}\right)\frac{T^k}{(k+1)!}
\\\\&=\frac{e^T(n-1+T)}{nT}
\end{align}
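This is the same $h_2$ as found by differentiation. One can also confirm numerically that the series sums to this closed form (a small sketch; $n$, $T$ and the truncation point are arbitrary):

```python
import math

n, T, K = 5, 1.7, 60   # truncate the series at k = K

series = sum((n + k) / n * T ** k / math.factorial(k + 1) for k in range(-1, K + 1))
closed = math.exp(T) * (n - 1 + T) / (n * T)
print(series, closed)  # agree to floating-point precision
```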
Best Answer
A sufficient statistic for $(\theta,\gamma)$ as seen here is $\left(\prod\limits_{i=1}^n X_i,X_{(n)}\right)$ or equivalently $\left(\sum\limits_{i=1}^n \ln X_i,\ln X_{(n)}\right)$. This is again equivalent to $\boldsymbol T=\left(\sum\limits_{i=1}^n (\ln X_{(n)}-\ln X_i),\ln X_{(n)}\right)$ as they are all one-to-one functions of each other (in the sense that no information about the unknown parameter is lost going from one to the other).
If you change variables to $Y_i=\ln\left(\frac1{X_i}\right)=-\ln X_i$, it turns out that $Y_i$ has density
\begin{align}
f_{Y_i}(y)&=f_{X_i}(e^{-y})\left|\frac{\mathrm d}{\mathrm dy}e^{-y}\right|
\\&=\frac{\theta e^{-\theta y}}{\gamma^{\theta}}\mathbf1_{y>\ln(1/\gamma)}
\\&=\theta\exp\left\{-\theta\left(y+\ln \gamma\right)\right\}\mathbf1_{y>-\ln\gamma}&;\,\small \theta,\gamma>0
\end{align}
This is a two-parameter exponential distribution with location $-\ln \gamma$ and scale $1/\theta$. In other words, this means $Y_i+\ln \gamma=\ln\left(\frac{\gamma}{X_i}\right)$ is exponential with mean $1/\theta$.
Noting that $Y_{(1)}=-\ln X_{(n)}$, the statistic $\boldsymbol T$ can be written as $$\boldsymbol T=\left(\sum_{i=1}^n (Y_i-Y_{(1)}),- Y_{(1)}\right)=(U,V) $$
That $\boldsymbol T=(U,V)$ is a complete statistic can be seen by comparing to this problem, since we know that $Y_1,\ldots,Y_n$ are i.i.d. $\text{Exp}\left(-\ln \gamma,\frac1{\theta}\right)$. You can see here that $U=\sum\limits_{i=1}^n (Y_i-Y_{(1)})$ has a certain Gamma distribution (this is the distribution you are asked for). To be precise, this can also be written as $2\theta U\sim \chi^2_{2(n-1)}$, as argued here. As $U$ is a function of a complete sufficient statistic, an unbiased estimator of $1/\theta$ based on $U$ is the UMVUE by the Lehmann-Scheffé theorem. This can also be done without the distribution of $U$, since one can find $E[U]=\sum\limits_{i=1}^n E\left[ Y_i\right]-nE\left[Y_{(1)}\right]$ directly in terms of $1/\theta$: as $E[Y_i]=-\ln\gamma+\frac1{\theta}$ and $E[Y_{(1)}]=-\ln\gamma+\frac1{n\theta}$, we get $E[U]=\frac{n-1}{\theta}$, so that $\frac{U}{n-1}$ is the UMVUE of $1/\theta$.
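A small simulation of this last claim (a sketch; it only uses the fact derived above that $Y_i+\ln\gamma$ is exponential with mean $1/\theta$, and the constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, theta, gamma, reps = 6, 2.5, 0.8, 500_000

# Y_i = -ln(gamma) + Exp(mean 1/theta), as derived above
y = -np.log(gamma) + rng.exponential(scale=1 / theta, size=(reps, n))
u = y.sum(axis=1) - n * y.min(axis=1)     # U = sum_i (Y_i - Y_(1))

print((u / (n - 1)).mean(), 1 / theta)    # U/(n-1) is unbiased for 1/theta
```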