Your reasoning is mostly correct.
The joint density of the sample $(X_1,X_2,\ldots,X_n)$ is
\begin{align}
f_{\theta}(x_1,x_2,\ldots,x_n)&=\frac{\theta^n}{\left(\prod_{i=1}^n (1+x_i)\right)^{1+\theta}}\mathbf1_{x_1,x_2,\ldots,x_n>0}\,,\qquad\theta>0
\\ \implies \ln f_{\theta}(x_1,x_2,\ldots,x_n)&=n\ln(\theta)-(1+\theta)\sum_{i=1}^n\ln(1+x_i)+\ln(\mathbf1_{\min_{1\le i\le n} x_i>0})
\\ \implies \frac{\partial}{\partial \theta}\ln f_{\theta}(x_1,x_2,\ldots,x_n)&=\frac{n}{\theta}-\sum_{i=1}^n\ln(1+x_i)
\\ &=-n\left(\frac{\sum_{i=1}^n\ln(1+x_i)}{n}-\frac{1}{\theta}\right)
\end{align}
Thus we have expressed the score function in the form
$$\frac{\partial}{\partial \theta}\ln f_{\theta}(x_1,x_2,\ldots,x_n)=k(\theta)\left(T(x_1,x_2,\ldots,x_n)-\frac{1}{\theta}\right)\tag{1}$$
which is precisely the condition for equality in the Cramér–Rao inequality.
It is not difficult to verify that $$E(T)=\frac{1}{n}\sum_{i=1}^n\underbrace{E(\ln(1+X_i))}_{=1/\theta}=\frac{1}{\theta}\tag{2}$$
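To spell out the verification: substituting $u=\ln(1+x)$ (so that $dx=e^u\,du$) shows that $\ln(1+X_i)$ has an exponential distribution with rate $\theta$, from which the expectation is immediate:
$$E(\ln(1+X_i))=\int_0^\infty \ln(1+x)\,\frac{\theta}{(1+x)^{1+\theta}}\,dx=\int_0^\infty u\,\theta e^{-\theta u}\,du=\frac{1}{\theta}.$$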
From $(1)$ and $(2)$ we can conclude that
- The statistic $T(X_1,X_2,\ldots,X_n)$ is an unbiased estimator of $1/\theta$.
- $T$ satisfies the equality condition of the Cramér-Rao inequality.
These two facts together imply that $T$ is the UMVUE of $1/\theta$.
The second bullet actually tells us that the variance of $T$ attains the Cramér–Rao lower bound for $1/\theta$.
Indeed, as you have shown,
$$E_{\theta}\left[\frac{\partial^2}{\partial\theta^2}\ln f_{\theta}(X_1)\right]=-\frac{1}{\theta^2}$$
This implies that the information function for the whole sample is $$I(\theta)=-nE_{\theta}\left[\frac{\partial^2}{\partial\theta^2}\ln f_{\theta}(X_1)\right]=\frac{n}{\theta^2}$$
So the Cramér-Rao lower bound for $1/\theta$ and hence the variance of the UMVUE is
$$\operatorname{Var}(T)=\frac{\left[\frac{d}{d\theta}\left(\frac{1}{\theta}\right)\right]^2}{I(\theta)}=\frac{1}{n\theta^2}$$
Here we have exploited a corollary of the Cramér–Rao inequality, which says that for a family of distributions $f$ parametrised by $\theta$ (assuming the regularity conditions of the CR inequality hold), if a statistic $T$ is unbiased for $g(\theta)$ for some function $g$ and satisfies the condition for equality in the CR inequality, namely $$\frac{\partial}{\partial\theta}\ln f_{\theta}(x)=k(\theta)\left(T(x)-g(\theta)\right),$$ then $T$ must be the UMVUE of $g(\theta)$. Since equality in the CR inequality is not always attainable, this argument does not work in every problem.
Alternatively, using the Lehmann–Scheffé theorem you could say that $T=\frac{1}{n}\sum_{i=1}^{n} \ln(1+X_i)$ is the UMVUE of $1/\theta$, as it is unbiased for $1/\theta$ and is a complete sufficient statistic for the family of distributions. That $T$ is complete sufficient is clear from the structure of the joint density of the sample as a one-parameter exponential family. But the variance of $T$ might be a little tricky to find directly.
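The claims above are easy to check numerically. Here is a minimal Monte Carlo sketch (not part of the original argument): it samples from the density $\theta/(1+x)^{1+\theta}$ by inverse-CDF sampling and confirms that $T$ is unbiased for $1/\theta$ with variance close to the Cramér–Rao bound $1/(n\theta^2)$.

```python
import numpy as np

# Inverse-CDF sampling: F(x) = 1 - (1+x)^(-theta), so with U ~ Uniform(0,1)
# the draw X = U**(-1/theta) - 1 has density theta/(1+x)^(1+theta) on x > 0.
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 50, 200_000

u = rng.uniform(size=(reps, n))
x = u ** (-1.0 / theta) - 1.0

# T = mean of ln(1 + X_i) for each simulated sample of size n
T = np.log1p(x).mean(axis=1)

print(T.mean(), 1 / theta)           # sample mean of T vs. 1/theta
print(T.var(), 1 / (n * theta**2))   # sample variance of T vs. CR bound
```

Both printed pairs should agree to within Monte Carlo error, consistent with $T$ being unbiased and attaining the Cramér–Rao lower bound.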
The Poisson distribution is a one-parameter exponential family distribution, with natural sufficient statistic given by the sample total $T(\mathbf{x}) = \sum_{i=1}^n x_i$. The canonical form is:
$$p(\mathbf{x}|\theta) = \exp \Big( \ln (\theta) T(\mathbf{x}) - n\theta \Big) \cdot h(\mathbf{x}) \quad \quad \quad h(\mathbf{x}) = \prod_{i=1}^n \frac{1}{x_i!} $$
From this form it is easy to establish that $T$ is a complete sufficient statistic for the parameter $\theta$. The Lehmann–Scheffé theorem then implies that for any $g(\theta)$ there is only one unbiased estimator of this quantity that is a function of $T$, and this is the UMVUE of $g(\theta)$. One way to find this estimator (the method you are using) is via the Rao-Blackwell theorem --- start with an arbitrary unbiased estimator of $g(\theta)$ and then condition on the complete sufficient statistic to get the unique unbiased estimator that is a function of $T$.
Using Rao-Blackwell to find the UMVUE: In your case you want to find the UMVUE of:
$$g(\theta) \equiv \theta \exp (-\theta).$$
Using the initial estimator $\hat{g}_*(\mathbf{X}) \equiv \mathbb{I}(X_1=1)$ you can confirm that,
$$\mathbb{E}(\hat{g}_*(\mathbf{X})) = \mathbb{E}(\mathbb{I}(X_1=1)) = \mathbb{P}(X_1=1) = \theta \exp(-\theta) = g(\theta),$$
so this is indeed an unbiased estimator. Hence, the unique UMVUE obtained from the Rao-Blackwell technique is:
$$\begin{equation} \begin{aligned}
\hat{g}(\mathbf{X})
&\equiv \mathbb{E}(\mathbb{I}(X_1=1) | T(\mathbf{X}) = t) \\[6pt]
&= \mathbb{P}(X_1=1 | T(\mathbf{X}) = t) \\[6pt]
&= \mathbb{P} \Big( X_1=1 \Big| \sum_{i=1}^n X_i = t \Big) \\[6pt]
&= \frac{\mathbb{P} \Big( X_1=1 \Big) \mathbb{P} \Big( \sum_{i=2}^n X_i = t-1 \Big)}{\mathbb{P} \Big( \sum_{i=1}^n X_i = t \Big)} \\[6pt]
&= \frac{\text{Pois}(1| \theta) \cdot \text{Pois}(t-1| (n-1)\theta)}{\text{Pois}(t| n\theta)} \\[6pt]
&= \frac{t!}{(t-1)!} \cdot \frac{ \theta \exp(-\theta) \cdot ((n-1) \theta)^{t-1} \exp(-(n-1)\theta)}{(n \theta)^t \exp(-n\theta)} \\[6pt]
&= t \cdot \frac{ (n-1)^{t-1}}{n^t} \\[6pt]
&= \frac{t}{n} \Big( 1- \frac{1}{n} \Big)^{t-1} \\[6pt]
\end{aligned} \end{equation}$$
Your answer has a slight error where you have conflated the sample mean and the sample total, but most of your working is correct. As $n \rightarrow \infty$ we have $(1-\tfrac{1}{n})^n \rightarrow \exp(-1)$ and $t/n \rightarrow \theta$, so taking these asymptotic results together we can also confirm consistency of the estimator:
$$\hat{g}(\mathbf{X}) = \frac{t}{n} \Big[ \Big( 1- \frac{1}{n} \Big)^n \Big] ^{\frac{t}{n} - \frac{1}{n}} \rightarrow \theta [ \exp (-1) ]^\theta = \theta \exp (-\theta) = g(\theta).$$
This latter demonstration is heuristic, but it gives a nice check on the working. It is interesting here that you get an estimator that is a finite approximation to the exponential function of interest.
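As a complement to that heuristic check, here is a small Monte Carlo sketch (not part of the original working) confirming that $\hat{g}(T) = \tfrac{T}{n}(1-\tfrac{1}{n})^{T-1}$ with $T = \sum_i X_i$ is unbiased for $g(\theta) = \theta e^{-\theta}$.

```python
import numpy as np

# Simulate many Poisson samples and average the Rao-Blackwellised
# estimator g_hat(T) = (T/n) * (1 - 1/n)**(T - 1), T = sum of the sample.
rng = np.random.default_rng(0)
theta, n, reps = 1.5, 10, 500_000

T = rng.poisson(theta, size=(reps, n)).sum(axis=1)
g_hat = (T / n) * (1 - 1 / n) ** (T - 1)

print(g_hat.mean(), theta * np.exp(-theta))  # should agree closely
```

The two printed values should match to within Monte Carlo error, as unbiasedness requires.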
This question is now old enough to give a full succinct solution confirming your calculations. Using standard notation for order statistics, the likelihood function here is:
$$\begin{aligned} L_\mathbf{x}(\theta) &= \prod_{i=1}^n f_X(x_i|\theta) \\[6pt] &= \prod_{i=1}^n \frac{\theta}{x_i^2} \cdot \mathbb{I}(x_i \geqslant \theta) \\[6pt] &\propto \prod_{i=1}^n \theta \cdot \mathbb{I}(x_i \geqslant \theta) \\[12pt] &= \theta^n \cdot \mathbb{I}(0 < \theta \leqslant x_{(1)}). \\[6pt] \end{aligned}$$
This function is strictly increasing over the range $0 < \theta \leqslant x_{(1)}$, so the MLE is:
$$\hat{\theta} = x_{(1)}.$$
Mean-squared-error of MLE: Rather than deriving the distribution of the estimator, it is quicker in this case to derive the distribution of the estimation error. Define the estimation error as $T \equiv \hat{\theta} - \theta$ and note that it has distribution function:
$$\begin{aligned} F_T(t) \equiv \mathbb{P}(\hat{\theta} - \theta \leqslant t) &= 1-\mathbb{P}(\hat{\theta} > \theta + t) \\[6pt] &= 1-\prod_{i=1}^n \mathbb{P}(X_i > \theta + t) \\[6pt] &= 1-(1-F_X(\theta + t))^n \\[6pt] &= \begin{cases} 0 & & \text{for } t < 0, \\[6pt] 1 - \Big( \frac{\theta}{\theta + t} \Big)^n & & \text{for } t \geqslant 0. \\[6pt] \end{cases} \end{aligned}$$
Thus, the density has support over $t \geqslant 0$, where we have:
$$\begin{aligned} f_T(t) \equiv \frac{d F_T}{dt}(t) &= - n \Big( - \frac{\theta}{(\theta + t)^2} \Big) \Big( \frac{\theta}{\theta + t} \Big)^{n-1} \\[6pt] &= \frac{n \theta^n}{(\theta + t)^{n+1}}. \\[6pt] \end{aligned}$$
Assuming that $n>2$, the mean-squared error of the estimator is therefore given by:
$$\begin{aligned} \text{MSE}(\hat{\theta}) = \mathbb{E}(T^2) &= \int \limits_0^\infty t^2 \frac{n \theta^n}{(\theta + t)^{n+1}} \ dt \\[6pt] &= n \theta^n \int \limits_0^\infty \frac{t^2}{(\theta + t)^{n+1}} \ dt \\[6pt] &= n \theta^n \int \limits_\theta^\infty \frac{(r-\theta)^2}{r^{n+1}} \ dr \\[6pt] &= n \theta^n \int \limits_\theta^\infty \Big[ r^{-(n-1)} - 2 \theta r^{-n} + \theta^2 r^{-(n+1)} \Big] \ dr \\[6pt] &= n \theta^n \Bigg[ -\frac{r^{-(n-2)}}{n-2} + \frac{2 \theta r^{-(n-1)}}{n-1} - \frac{\theta^2 r^{-n}}{n} \Bigg]_{r = \theta}^{r \rightarrow \infty} \\[6pt] &= n \theta^n \Bigg[ \frac{\theta^{-(n-2)}}{n-2} - \frac{2 \theta^{-(n-2)}}{n-1} + \frac{\theta^{-(n-2)}}{n} \Bigg] \\[6pt] &= n \theta^2 \Bigg[ \frac{1}{n-2} - \frac{2}{n-1} + \frac{1}{n} \Bigg] \\[6pt] &= \theta^2 \cdot \frac{n(n-1) - 2n(n-2) + (n-1)(n-2)}{(n-1)(n-2)} \\[6pt] &= \theta^2 \cdot \frac{n^2 - n - 2n^2 + 4n + n^2 - 3n + 2}{(n-1)(n-2)} \\[6pt] &= \frac{2\theta^2}{(n-1)(n-2)}. \\[6pt] \end{aligned}$$
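The MSE formula can be checked with a short Monte Carlo sketch (not part of the derivation above): since $F_X(x) = 1 - \theta/x$ for $x \geqslant \theta$, inverse-CDF sampling gives $X = \theta/U$ with $U \sim \text{Uniform}(0,1)$.

```python
import numpy as np

# Simulate samples from f(x) = theta/x^2 on x >= theta via X = theta / U,
# take the MLE theta_hat = X_(1), and estimate its mean-squared error.
rng = np.random.default_rng(0)
theta, n, reps = 3.0, 10, 500_000

x = theta / rng.uniform(size=(reps, n))
theta_hat = x.min(axis=1)
mse = np.mean((theta_hat - theta) ** 2)

print(mse, 2 * theta**2 / ((n - 1) * (n - 2)))  # should agree closely
```

The simulated MSE should match the closed form $2\theta^2/((n-1)(n-2))$ to within Monte Carlo error.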