I think you are missing an $n$ in the maximum likelihood estimator: it is the reciprocal of the sample mean, i.e. $\hat{\theta} = \frac{n}{\sum_{i=1}^n x_i}$. In point (1) the density you found is not correct either; you need $\theta^n$ instead of $\theta$ as the factor multiplying the exponential. The MLE is biased precisely because it is a reciprocal of a sum, so your calculation of $E[\hat{\theta}]$ is not correct: observe that in general
\begin{equation}
E\left[\frac{1}{X} \right ] \neq \frac{1}{E[X]}
\end{equation}
Instead, one can aim at estimating $\beta := \frac{1}{\theta}$ instead of $\theta$. This amounts to a re-parameterization of the distribution. In this case the MLE of $\beta$ is given by $\frac{1}{n} \sum_{i=1}^n x_i$. And because $E[x] = \frac{1}{\theta} = \beta$, and using the linearity of the expectation operator (i.e. $E[\sum x_i] = \sum E[x_i]$), it is clear that such an MLE is unbiased for $\beta$.
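For reference, here is a sketch of the likelihood computation behind the remarks above, assuming the rate parameterization $f(x \mid \theta) = \theta e^{-\theta x}$, $x > 0$ (which appears to be the one used in the exercise):
\begin{equation}
L(\theta) = \prod_{i=1}^n \theta e^{-\theta x_i} = \theta^n e^{-\theta \sum_{i=1}^n x_i}, \qquad
\ell(\theta) = n \ln\theta - \theta \sum_{i=1}^n x_i,
\end{equation}
and setting $\ell'(\theta) = \frac{n}{\theta} - \sum_{i=1}^n x_i = 0$ gives $\hat{\theta} = \frac{n}{\sum_{i=1}^n x_i}$; by invariance, the MLE of $\beta = 1/\theta$ is then $\hat{\beta} = 1/\hat{\theta} = \frac{1}{n}\sum_{i=1}^n x_i$.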
Now observe that an exponential distribution is a special case of a Gamma distribution. We have
\begin{equation}
x_i \sim \text{Gamma}(1, \theta)
\end{equation}
and due to the independence and using the properties of the Gamma distribution it holds that
\begin{equation}
\sum_{i=1}^n x_i \sim \text{Gamma}(n, \theta)
\end{equation}
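If this property of the Gamma family is not already available to you, it can be verified quickly with moment generating functions:
\begin{equation}
M_{x_i}(t) = \frac{\theta}{\theta - t}, \qquad
M_{\sum_i x_i}(t) = \prod_{i=1}^n M_{x_i}(t) = \left(\frac{\theta}{\theta - t}\right)^n, \qquad t < \theta,
\end{equation}
which is the MGF of a $\text{Gamma}(n, \theta)$ (shape $n$, rate $\theta$) random variable.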
Now, the reciprocal of a gamma random variable follows an inverse-gamma distribution. Therefore,
\begin{equation}
\frac{1}{\sum_{i=1}^n x_i} \sim \text{Inv-Gamma}(n, \theta)
\end{equation}
whose expectation is given by
\begin{equation}
E\left[\frac{1}{\sum_{i=1}^n x_i}\right] = \frac{\theta}{n-1}
\end{equation}
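If you prefer not to invoke the inverse-gamma distribution by name, this expectation can also be computed directly from the $\text{Gamma}(n, \theta)$ density of $S := \sum_{i=1}^n x_i$:
\begin{equation}
E\left[\frac{1}{S}\right]
= \int_0^\infty \frac{1}{s}\,\frac{\theta^n}{\Gamma(n)}\, s^{n-1} e^{-\theta s}\, ds
= \frac{\theta^n}{\Gamma(n)} \cdot \frac{\Gamma(n-1)}{\theta^{n-1}}
= \frac{\theta}{n-1}, \qquad n > 1.
\end{equation}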
Using this result, you can find the bias in the MLE of $\theta$. It simply goes as follows:
\begin{equation}
E\left[\frac{n}{\sum_{i=1}^n x_i} \right] = \frac{n}{n-1}\theta
\end{equation}
hence the bias is
\begin{equation}
E\left[\frac{n}{\sum_{i=1}^n x_i} \right] - \theta = \frac{n \theta}{n-1} - \theta = \frac{1}{n-1} \theta
\end{equation}
Therefore, if you multiply the MLE of $\theta$ by $\frac{n-1}{n}$ it becomes unbiased.
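If you want a quick numerical sanity check of this correction, here is a small Monte Carlo sketch in Python (the values of $\theta$, $n$ and the number of replications are arbitrary illustrative choices, not part of the exercise):

```python
import numpy as np

# Monte Carlo sketch: compare the raw MLE n / sum(x_i) with the
# bias-corrected estimator (n-1) / sum(x_i) for exponential data.
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000          # arbitrary illustrative values

# numpy parameterizes the exponential by its scale, i.e. 1/theta
x = rng.exponential(scale=1 / theta, size=(reps, n))

mle = n / x.sum(axis=1)                   # hat(theta) = n / sum x_i
corrected = (n - 1) / x.sum(axis=1)       # (n-1)/n times the MLE

print("mean of MLE:      ", mle.mean(), " theory: n/(n-1)*theta =", n / (n - 1) * theta)
print("mean of corrected:", corrected.mean(), " theory: theta =", theta)
```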
Point (3) asks you to find the distribution of the unbiased estimator, i.e. the distribution of the random variable
\begin{equation}
(n-1) \frac{1}{\sum_{i=1}^n x_i}
\end{equation}
which we already computed above (up to the scale factor $n-1$), and to use it to compute the variance of the unbiased estimator and show that it is not equal to the CRLB.
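In case it is useful for that last step, the same kind of integral as above gives $E\left[1/S^2\right] = \frac{\theta^2}{(n-1)(n-2)}$ for $S = \sum_{i=1}^n x_i$ and $n > 2$, so that
\begin{equation}
\operatorname{Var}\left(\frac{n-1}{S}\right)
= (n-1)^2\left(\frac{\theta^2}{(n-1)(n-2)} - \frac{\theta^2}{(n-1)^2}\right)
= \frac{\theta^2}{n-2},
\end{equation}
which is strictly larger than the CRLB $\theta^2/n$ (the Fisher information of a single exponential observation is $1/\theta^2$).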
The pmf of the sample $X_1,X_2,\ldots,X_n$ is
$$f_p(x_1,\ldots,x_n)=p^n (1-p)^{\sum_{i=1}^n x_i-n}\,\mathbf1_{x_1,\ldots,x_n\in\{1,2,\ldots\}}$$
Therefore, we have
\begin{align}
\frac{\partial}{\partial p}\ln f_p(x_1,\ldots,x_n)&=\frac{n}{p}-\frac{1}{1-p}\left(\sum_{i=1}^n x_i-n\right)
\\&=\frac{n}{p(1-p)}-\frac{1}{1-p}\sum_{i=1}^n x_i
\\&=-\frac{n}{1-p}\left(\frac{1}{n}\sum_{i=1}^n x_i-\frac{1}{p}\right)
\end{align}
Thus the score function can be expressed in the form $$\frac{\partial}{\partial p}\ln f_p(x_1,\ldots,x_n)=k(p)\left(T(x_1,\ldots,x_n)-g(p)\right)\tag{*}$$
for some statistic $T$ and some parametric function $g(p)$. The condition $(*)$ is the equality condition of the Cramér-Rao inequality, which directly shows that the variance of $T=\frac{1}{n}\sum\limits_{i=1}^n X_i$ attains the Cramér-Rao lower bound for $g(p)=1/p$. And since $T$ is unbiased for $1/p$, this also proves that $T$ is the uniformly minimum variance unbiased estimator of $1/p$.
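As a direct check of the attainment claim (not strictly needed, since $(*)$ already gives the conclusion), the Fisher information of a single observation is
\begin{equation}
I_1(p) = E\left[-\frac{\partial^2}{\partial p^2}\ln f_p(X_1)\right]
= \frac{1}{p^2} + \frac{E[X_1]-1}{(1-p)^2}
= \frac{1}{p^2} + \frac{1/p - 1}{(1-p)^2}
= \frac{1}{p^2(1-p)},
\end{equation}
so the CRLB for estimating $g(p)=1/p$ is
\begin{equation}
\frac{\left(g'(p)\right)^2}{n\, I_1(p)} = \frac{1/p^4}{n/\left(p^2(1-p)\right)} = \frac{1-p}{np^2},
\end{equation}
which is exactly $\operatorname{Var}(T) = \operatorname{Var}(X_1)/n = \frac{1-p}{np^2}$.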
This fact has more to do with properties of the canonical exponential family than with the geometric distribution in particular. Here is another example.
For this setting, $T = X_1 + \dots + X_n$ is a complete sufficient statistic. Hence, by the Lehmann-Scheffé theorem, if we can find a function of $T$ whose expectation is $p(1-p)$, it is a UMVUE.
In any setup, the sample variance $\frac{1}{n-1}\sum (X_i - \bar{X})^2$ is an unbiased estimate of the variance. Since $X^2 = X$ for Bernoulli random variables, \begin{align*} \frac{1}{n-1}\sum (X_i - \bar{X})^2 &= \frac{1}{n-1} \left(\sum X_i^2 - n \bar{X}^2 \right) \\ &= \frac{1}{n-1} \left(\sum X_i - n \bar{X}^2 \right) \\ &= \frac{T(n-T)}{n(n-1)}. \end{align*}
Hence $\frac{T(n-T)}{n(n-1)}$ is the UMVUE of the variance $p(1-p)$.
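For a direct check of the expectation (not needed once Lehmann-Scheffé is invoked, but reassuring): since $T \sim \text{Binomial}(n,p)$,
\begin{align*}
E\left[T(n-T)\right] &= nE[T] - E[T^2] = n^2 p - \left(np(1-p) + n^2 p^2\right) = n(n-1)p(1-p),
\end{align*}
so $E\left[\frac{T(n-T)}{n(n-1)}\right] = p(1-p)$, confirming unbiasedness.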