I will resist the urge to paste a link to lmgtfy.com, but googling for Cramer Rao uniform distribution will get you many solutions to this problem, as it is nearly the canonical example of the failure of the Cramer Rao bound.
For part a)
If you do not know how to formally compute the expectation of a uniform distribution, spend the weekend reviewing basic probability.
The Lehmann–Scheffé theorem says that the BUE is a function of the complete sufficient statistic. I'll let you verify that the statistic is complete.
For part b)
The Cramér-Rao lower bound can hold only when you can switch the order of differentiation and integration (among other regularity conditions):
$$\frac{\partial}{\partial\theta}\int T(x)f(x;\theta)dx = \int T(x)\frac{\partial}{\partial\theta}f(x;\theta)dx$$
Verify that this interchange fails here, and you will see why the Cramér-Rao bound does not hold.
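(To see the mechanism concretely, here is a worked instance of my own choosing: a single observation from $U(0,\theta)$ with $T(x)=x$, so $f(x;\theta)=1/\theta$ on $(0,\theta)$. Then
$$\frac{\partial}{\partial\theta}\int_0^\theta x\,\frac{1}{\theta}\,dx=\frac{\partial}{\partial\theta}\,\frac{\theta}{2}=\frac{1}{2},\qquad\text{whereas}\qquad\int_0^\theta x\,\frac{\partial}{\partial\theta}\frac{1}{\theta}\,dx=\int_0^\theta -\frac{x}{\theta^2}\,dx=-\frac{1}{2},$$
and the two sides disagree precisely because $\theta$ sits in the limits of integration.)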
Added, as I have heavy-duty procrastination syndrome:
An unbiased estimator of $\theta$ is an estimator (a function $T(x_1,\dots,x_n)$) that is unbiased ($E(T(x_1,\dots,x_n)|\theta) = \theta$). In the case at hand, let $T$ be a function of the sufficient statistic $\max(x_1,\dots,x_n)$ (it turns out that this will be the best unbiased estimator by the Lehmann–Scheffé theorem, but let us not worry about that now.)
Let $Y=\max(X_1,\dots,X_n)$ be our sufficient statistic. We know from our study of order statistics that the density of $Y$ is $f_Y(y)= ny^{n-1}/\theta^n$ for $0<y<\theta$. We can find that
$$E(Y|\theta)=\int_0^\theta y\frac{ny^{n-1}}{\theta^n}dy=\frac{n}{n+1}\theta.$$
So $\frac{n+1}{n}Y=\frac{n+1}{n}\max(X_1,\dots,X_n)$ is an unbiased estimator of $\theta$. Let us compute the variance of our estimator:
$$\text{var}\left(\frac{n+1}{n}Y\right)=\left(\frac{n+1}{n}\right)^2\left[E(Y^2)-\left(\frac{n}{n+1}\theta\right)^2\right],$$
and since $E(Y^2)=\int_0^\theta y^2\frac{ny^{n-1}}{\theta^n}dy=\frac{n}{n+2}\theta^2$, this works out to $\frac{1}{n(n+2)}\theta^2$.
But the (formally computed) Fisher information of a single observation from the uniform distribution is
$$E\left[\left(\frac{\partial}{\partial\theta}\log(1/\theta)\right)^2\right]=\frac{1}{\theta^2}.$$
So the Cramér-Rao theorem would say that the variance of any unbiased estimator based on $n$ observations must be at least $\theta^2/n$. But we have an estimator with variance $\frac{\theta^2}{n(n+2)}<\theta^2/n$. This is because we cannot switch integration and partial differentiation here (essentially because $\theta$ appears in the integration bounds, but I'll let you work that out.)
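If you want a numerical sanity check of these figures, here is a small simulation sketch (the particular values of $\theta$, $n$, and the number of replications are arbitrary choices of mine):

```python
# Monte Carlo sanity check (illustrative sketch; theta, n, reps are arbitrary):
# simulate T = (n+1)/n * max(X_1,...,X_n) for X_i ~ Uniform(0, theta) and
# compare its variance with the nominal Cramer-Rao bound theta^2 / n.
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 200_000

samples = rng.uniform(0.0, theta, size=(reps, n))
estimates = (n + 1) / n * samples.max(axis=1)

print("mean of estimator:     ", estimates.mean())          # ~ theta (unbiased)
print("variance of estimator: ", estimates.var())            # ~ theta^2 / (n(n+2))
print("theta^2 / (n(n+2)):    ", theta**2 / (n * (n + 2)))
print("nominal bound theta^2/n:", theta**2 / n)              # larger than the variance above
```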
Please note that if $T$ is an unbiased estimator of $\theta$, then $g(T)$ is not necessarily an unbiased estimator of $g(\theta)$, even if $g$ is bijective. Let's check whether your estimator $e^{-\bar X}$ is a biased estimator of $e^{-\theta}$.
\begin{align}
E\left[e^{-\bar X}\right] &= E\left[e^{-\frac{\sum X_i}{n}}\right] \\
&= \left(E\left[e^{-\frac{X_1}{n}}\right]\right)^n \\
&= \left(e^{\frac{1}{2n^2}} e^{-\frac{\theta}{n}} \right)^n \\
&= e^{\frac{1}{2n}} e^{-\theta} \neq e^{-\theta}
\end{align}
In the second line I used the fact that the $X_i$ are i.i.d. random variables, and I used Wolfram Alpha to compute the integral $E\left[e^{-\frac{X_1}{n}}\right] = \int_{\mathbb{R}} e^{-\frac{x}{n}} f(x)dx$, where $f$ is the pdf of a $N(\theta,1)$ distribution, although this integral is relatively straightforward (if rather tedious) to compute by hand: it is just the moment generating function of a $N(\theta,1)$ variable evaluated at $t=-\frac{1}{n}$.
As we can see, $e^{-\bar X}$ is indeed a biased estimator of $e^{-\theta}$, but the bias becomes smaller and smaller as $n \rightarrow \infty$. However, it's not difficult now to come up with an unbiased estimator of $e^{-\theta}$: Just multiply the original estimator by $e^{-\frac{1}{2n}}$.
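A quick Monte Carlo sketch makes both the bias and the correction visible (again, the values of $\theta$, $n$, and the number of replications below are arbitrary choices of mine):

```python
# Monte Carlo sketch (theta, n, reps are arbitrary choices): check the bias of
# exp(-xbar) as an estimator of exp(-theta) when X_i ~ N(theta, 1), and check
# that the corrected estimator exp(-1/(2n)) * exp(-xbar) is (nearly) unbiased.
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 1.0, 5, 500_000

xbar = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)

print("target exp(-theta):          ", np.exp(-theta))
print("mean of exp(-xbar):          ", np.exp(-xbar).mean())           # ~ exp(1/(2n) - theta)
print("predicted biased value:      ", np.exp(1 / (2 * n) - theta))
print("mean of corrected estimator: ", np.exp(-1 / (2 * n) - xbar).mean())  # ~ exp(-theta)
```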
Best Answer
For the first question, the best unbiased estimator is $\chi\left(\sum_i x_i = n\right)$ as you wrote, because the joint probability function for the $n$ observations is
$$ \mathbb{P}\left( X_1=x_1, \ldots, X_n=x_n \right)=p^{x_1}(1-p)^{1-x_1} \cdots p^{x_n} (1-p)^{1-x_n} = p^{\sum_i x_i} (1-p)^{n - \sum_i x_i}. $$ Thus it factors into $(p^n)^{\chi\left(\sum_i x_i = n\right)} \cdot \left( p^{\sum_i x_i} (1-p)^{n - \sum_i x_i} \right)^{1-\chi\left(\sum_i x_i = n\right)}$.
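Assuming the quantity being estimated in that question is $p^n$ (the probability that all $n$ trials succeed), unbiasedness of the indicator is immediate:
$$E\left[\chi\left(\sum_i x_i = n\right)\right]=\mathbb{P}\left(X_1=\cdots=X_n=1\right)=p^n.$$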
For the second question $\bar{x}=\frac{1}{n} \sum_{i=1}^n x_i$ is the BUE for $\mu$. The factor of the likelihood that depends on this statistic is $\exp\left(-\frac{n}{2} \left( \mu - \bar{x} \right)^2\right)$.
The variance of $\bar{x}$ is $\mathrm{Var}(\bar{x}) = \frac{1}{n^2} \sum_i \mathrm{Var}(x_i) = \frac{1}{n^2} \cdot n = \frac{1}{n}$, which equals $1/\mathcal{I}(\mu)$, since the Fisher information here is $\mathcal{I}(\mu) = n$; so $\bar{x}$ attains the Cramér-Rao bound.
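For completeness, the Fisher information can be computed directly from the $N(\mu,1)$ log-density, whose derivative in $\mu$ is $x-\mu$:
$$\mathcal{I}_1(\mu)=E\left[\left(\frac{\partial}{\partial\mu}\log f(X;\mu)\right)^2\right]=E\left[(X-\mu)^2\right]=1,\qquad \mathcal{I}(\mu)=n\,\mathcal{I}_1(\mu)=n,$$
so the bound attained by $\bar{x}$ is $1/\mathcal{I}(\mu)=1/n$.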
For the third question, the joint density for the sample is $$ f = 2^n \chi_{\theta-\frac{1}{4} \le \min(x_1,\ldots, x_n)}\, \chi_{\theta+\frac{1}{4} \ge \max(x_1,\ldots,x_n)} = 2^n \chi_{ \max(x_1,\ldots,x_n) -\frac{1}{4} \le \theta \le \min(x_1, \ldots,x_n) + \frac{1}{4} }. $$ Thus the likelihood depends on the sample only through the two-component statistic consisting of the minimum and maximum of the sample (suitably shifted), and $\theta$ can lie anywhere between the two endpoints above. The mean of these two endpoints, which is just the midrange $\frac{1}{2}\left(\min(x_i)+\max(x_i)\right)$, is a possible choice for the estimator.
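If it helps, here is a small simulation sketch (parameter values, sample size, and replication count are arbitrary choices of mine) comparing this midrange estimator with the sample mean:

```python
# Monte Carlo sketch (theta, n, reps are arbitrary choices): compare the
# midrange (min + max)/2 with the sample mean as estimators of theta when
# X_i ~ Uniform(theta - 1/4, theta + 1/4).
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 0.7, 20, 200_000

x = rng.uniform(theta - 0.25, theta + 0.25, size=(reps, n))
midrange = 0.5 * (x.min(axis=1) + x.max(axis=1))
sample_mean = x.mean(axis=1)

print("midrange:     mean %.5f   variance %.2e" % (midrange.mean(), midrange.var()))
print("sample mean:  mean %.5f   variance %.2e" % (sample_mean.mean(), sample_mean.var()))
# Both are (approximately) unbiased here, but the midrange has smaller variance;
# its variance shrinks like 1/n^2 rather than 1/n.
```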