For the first question, the best unbiased estimator is $\chi\left(\sum_i x_i = n\right)$, as you wrote, because the joint probability function of the $n$ observations is:
$$
\mathbb{P}\left( X_1=x_1, \ldots, X_n=x_n \right)=p^{x_1}(1-p)^{1-x_1} \cdots p^{x_n} (1-p)^{1-x_n} = p^{\sum_i x_i} (1-p)^{n - \sum_i x_i}
$$
Thus it factors into $(p^n)^{\chi\left(\sum_i x_i = n\right)} \cdot \left( p^{\sum_i x_i} (1-p)^{n - \sum_i x_i} \right)^{1-\chi\left(\sum_i x_i = n\right)}$. In particular, $\mathbb{E}\left[\chi\left(\sum_i x_i = n\right)\right] = \mathbb{P}\left(X_1=\cdots=X_n=1\right) = p^n$, so the indicator is an unbiased estimator of $p^n$.
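For a numerical sanity check, here is a minimal simulation sketch (assuming i.i.d. Bernoulli($p$) observations as above; the values of $n$, $p$, and the replication count are arbitrary illustrative choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 5, 0.7, 200_000  # illustrative values, not from the original

# reps Bernoulli(p) samples of size n each
samples = rng.binomial(1, p, size=(reps, n))
# indicator that every observation in the sample equals 1
estimates = (samples.sum(axis=1) == n).astype(float)

print(estimates.mean())  # Monte Carlo average of the estimator
print(p ** n)            # the target quantity p^n; the two should agree closely
```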
For the second question, $\bar{x}=\frac{1}{n} \sum_{i=1}^n x_i$ is the BUE for $\mu$. The factor of the likelihood that depends on this statistic is $\exp\left(-\frac{n}{2} \left( \mu - \bar{x} \right)^2 \right)$.
The variance of $\bar{x}$ is $\mathrm{Var}(\bar{x}) = \frac{1}{n^2} \sum_i \mathrm{Var}(x_i) = \frac{1}{n^2} \cdot n = \frac{1}{n}$, and the Fisher information is $\mathcal{I}(\mu) = n$, so $\bar{x}$ attains the Cramér–Rao bound $\frac{1}{\mathcal{I}(\mu)} = \frac{1}{n}$.
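A quick simulation of the variance claim (assuming unit-variance normal observations, which is what the likelihood factor above presumes; $\mu$, $n$, and the replication count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, n, reps = 2.0, 10, 200_000  # illustrative values, not from the original

# sample means of N(mu, 1) samples of size n
xbar = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)

print(xbar.var())  # should be close to 1/n
print(1.0 / n)     # Cramér–Rao bound 1/I(mu) = 1/n
```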
For the third question, the joint density for the sample:
$$
f = 2^n \chi_{\theta-\frac{1}{4} \le \min(x_1,\ldots, x_n)} \chi_{\theta+\frac{1}{4} \ge \max(x_1,\ldots,x_n)} = 2^n \chi_{ \max(x_1,\ldots,x_n) -\frac{1}{4} \le \theta \le \min(x_1, \ldots,x_n) + \frac{1}{4} }
$$
Thus the likelihood depends on the sample only through the two-component statistic consisting of the minimum and the maximum of the sample, suitably shifted, and $\theta$ can lie anywhere between the two bounds. The midpoint of these two values is a possible choice for the estimator.
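Here is a minimal sketch of that midrange-type estimator (assuming i.i.d. Uniform($\theta-\frac14$, $\theta+\frac14$) observations as above; $\theta$, $n$, and the replication count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 0.3, 7, 200_000  # illustrative values, not from the original

# samples from Uniform(theta - 1/4, theta + 1/4)
x = rng.uniform(theta - 0.25, theta + 0.25, size=(reps, n))
# midpoint of the sample minimum and maximum
midrange = (x.min(axis=1) + x.max(axis=1)) / 2

print(midrange.mean())  # close to theta, consistent with unbiasedness (by symmetry)
print(theta)
```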
Suppose we have a collection of samples $X_1,X_2,\dots,X_n$ that have been drawn from some fixed probability distribution (Poisson, exponential, normal, etc.) that depends on some unknown parameter $\mu$.
We often want to use the samples to estimate the value of $\mu$. To this end we come up with functions that take in $X_1,X_2,\dots,X_n$ and return an estimate of $\mu$. Any such function is called an estimator.
Now, some estimators are better or worse than others. For example,
$$f(X_1,X_2,\dots,X_n) = 0$$
is clearly a terrible estimator (unless it so happens that $\mu=0$), given that it doesn't use any information provided by the samples.
A basic criterion for an estimator to be any good is that it is unbiased, that is, that on average it gets the value of $\mu$ correct. Formally, an estimator $f$ is unbiased iff
$$E[f(X_1,X_2,\dots,X_n)] =\mu.$$
In your case, the estimator is the sample average, that is,
$$f(X_1,X_2,\dots,X_n)=\frac{1}{n}\sum_{i=1}^n X_i,$$
and it is unbiased since on average it guesses the unknown parameter, $\lambda$, correctly. An example of a biased estimator would be
$$f(X_1,X_2,\dots,X_n)=1+\frac{1}{n}\sum_{i=1}^n X_i,$$
since $E[f(X_1,X_2,\dots,X_n)] = 1+\lambda$. On average it gets the value of $\lambda$ wrong by $1$; it has a bias of $1$.
Returning to the sample average, suppose that the samples are drawn from any distribution (not necessarily Poisson) which has an expected value (or mean) of $\mu$. Then
$$E[f(X_1,X_2,\dots,X_n)] = E\left[\frac{1}{n}\sum_{i=1}^n X_i\right] = \frac{1}{n}\sum_{i=1}^n E[X_i] = \frac{1}{n}\sum_{i=1}^n \mu = \mu.$$
So in general, the sample average is an unbiased estimator of the expected value of the distribution from which the samples are drawn.
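As a concrete check in the Poisson case mentioned above (the values of $\lambda$, $n$, and the replication count below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, n, reps = 4.0, 8, 200_000  # illustrative values, not from the original

# sample averages of Poisson(lambda) samples of size n
xbar = rng.poisson(lam, size=(reps, n)).mean(axis=1)

print(xbar.mean())        # close to lambda: the sample average is unbiased
print((1 + xbar).mean())  # close to lambda + 1: the biased estimator from the example above
```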
Best Answer
Assuming the $X_i$ are independent,
we have
\begin{align}
E\left[ \bar{X}^2 \right] &= \frac{1}{n^2} E\left[ \left(\sum_{i=1}^n X_i\right)^2 \right]\\
&= \frac{1}{n^2} E\left[ \left(\sum_{i=1}^n X_i^2\right) + 2\sum_{i<j} X_i X_j \right]\\
&= \frac{1}{n^2}\left[ \left(\sum_{i=1}^n E[X_i^2]\right) + 2\sum_{i<j} E[X_i]\,E[X_j] \right]\\
&= \frac{1}{n^2}\left[ \left(\sum_{i=1}^n \left(Var[X_i] + E[X_i]^2\right)\right) + n(n-1)\,\theta^2 \right]\\
&= \frac{1}{n^2}\left[ \left(\sum_{i=1}^n (\theta + \theta^2)\right) + n(n-1)\,\theta^2 \right]\\
&= \frac{1}{n^2}\left(n\theta + n^2\theta^2\right)\\
&= \theta^2 + \frac{\theta}{n}
\end{align}
Hence $\bar{X}^2$ is a biased estimator of $\theta^2$, with bias $\frac{\theta}{n}$.
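A numerical illustration of that bias (assuming, as the computation above does via $Var[X_i]=E[X_i]=\theta$, i.i.d. Poisson($\theta$) observations; $\theta$, $n$, and the replication count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 3.0, 5, 400_000  # illustrative values, not from the original

# sample averages of Poisson(theta) samples of size n
xbar = rng.poisson(theta, size=(reps, n)).mean(axis=1)

print((xbar ** 2).mean())      # close to theta**2 + theta/n, not theta**2
print(theta ** 2 + theta / n)  # the value derived above
print(theta ** 2)              # the target; the gap of theta/n is the bias
```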
To make it unbiased,
note that we have
$$E\left[ \bar{X}^2- \frac{\theta}{n}\right] = \theta^2$$
If $U$ is an unbiased estimator for $\theta$, then
$$E\left[ \bar{X}^2- \frac{U}{n}\right] = \theta^2$$
I will leave the task of finding an unbiased estimator for $\theta$ as an exercise.