For the first question, the best unbiased estimator is $\chi\left(\sum_i x_i = n\right)$ as you wrote, because the joint probability function of the $n$ observations is:
$$
\mathbb{P}\left( X_1=x_1, \ldots, X_n=x_n \right)=p^{x_1}(1-p)^{1-x_1} \cdots p^{x_n} (1-p)^{1-x_n} = p^{\sum_i x_i} (1-p)^{n - \sum_i x_i}
$$
Thus it factors into $(p^n)^{\chi\left(\sum_i x_i = n\right)} \cdot \left( p^{\sum_i x_i} (1-p)^{n - \sum_i x_i} \right)^{1-\chi\left(\sum_i x_i = n\right)}$.
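As a quick sanity check (a minimal NumPy sketch of my own, not part of the argument; the values $n=5$, $p=0.7$ and the seed are arbitrary), the average of the indicator over many simulated samples should come out close to $p^n$:

```python
import numpy as np

# Monte Carlo check that the indicator 1{sum_i x_i = n} averages to p**n.
# n = 5, p = 0.7 and the seed are arbitrary illustration choices.
rng = np.random.default_rng(0)
n, p, reps = 5, 0.7, 200_000
samples = rng.binomial(1, p, size=(reps, n))          # reps Bernoulli(p) samples of size n
estimates = (samples.sum(axis=1) == n).astype(float)  # the indicator for each sample
print(estimates.mean(), p**n)                         # both should be ≈ 0.7**5 ≈ 0.168
```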
For the second question, $\bar{x}=\frac{1}{n} \sum_{i=1}^n x_i$ is the BUE for $\mu$. The factor of the likelihood that depends on $\mu$ involves only this statistic: $\exp\left(-\frac{n}{2} \left( \mu - \bar{x} \right)^2 \right)$.
The variance of $\bar{x}$ is $\mathrm{Var}(\bar{x}) = \frac{1}{n^2} \sum_i \mathrm{Var}(x_i) = \frac{1}{n^2} \cdot n = \frac{1}{n}$, and since $\bar{x}$ attains the Cramér–Rao bound, the Fisher information is $\mathcal{I}(\mu) = \frac{1}{\mathrm{Var}(\bar{x})} = n$.
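If you want to see $\mathrm{Var}(\bar{x}) = 1/n$ numerically, here is a hedged simulation sketch; it assumes the unit-variance normal model used above, and $n=20$, $\mu=3$ are arbitrary:

```python
import numpy as np

# Empirical check that Var(xbar) = 1/n when X_i ~ N(mu, 1).
rng = np.random.default_rng(1)
n, mu, reps = 20, 3.0, 200_000
xbar = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)  # reps sample means
print(xbar.var(), 1 / n)  # empirical variance of the sample mean vs. theoretical 1/n
```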
For the third question, the joint density of the sample is:
$$
f = 2^n \chi_{\theta-\frac{1}{4} \le \min(x_1,\ldots, x_n)} \chi_{\theta+\frac{1}{4} \ge \max(x_1,\ldots,x_n)} = 2^n \chi_{ \max(x_1,\ldots,x_n) -\frac{1}{4} \le \theta \le \min(x_1, \ldots,x_n) + \frac{1}{4} }
$$
Thus the likelihood depends on the sample only through the two-component statistic $\left(\min(x_1,\ldots,x_n),\, \max(x_1,\ldots,x_n)\right)$: shifted by $\pm\frac{1}{4}$, these two values bound the interval in which $\theta$ must lie, and $\theta$ can be anywhere in between. The midpoint of that interval, $\frac{1}{2}\left(\min_i x_i + \max_i x_i\right)$, is a natural choice of estimator.
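A small sketch of that midrange estimator (assuming the $\operatorname{Uniform}(\theta-\frac{1}{4}, \theta+\frac{1}{4})$ model above; $\theta = 2$ and $n = 50$ are arbitrary):

```python
import numpy as np

# Midrange estimator for Uniform(theta - 1/4, theta + 1/4):
# (min + max)/2 is the midpoint of the admissible interval for theta.
rng = np.random.default_rng(2)
theta, n = 2.0, 50
x = rng.uniform(theta - 0.25, theta + 0.25, size=n)
theta_hat = (x.min() + x.max()) / 2
print(theta_hat)  # should be close to theta = 2.0
```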
Quite simply, define $Y \sim \operatorname{Bernoulli}(1/n)$ and set $$\hat \theta_n = \bar X + nY;$$ then $$\operatorname{E}[\hat \theta_n] = \operatorname{E}[\bar X + nY] = \operatorname{E}[\bar X] + n \operatorname{E}[Y] = \theta + n(1/n) = \theta + 1.$$
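To see this expectation numerically, here is a sketch; it assumes $X_i \sim N(\theta, 1)$ as an example distribution with $\operatorname{E}[\bar X] = \theta$, and $\theta = 2$, $n = 100$ are arbitrary:

```python
import numpy as np

# Check that E[Xbar + n*Y] = theta + 1 when Y ~ Bernoulli(1/n).
rng = np.random.default_rng(3)
theta, n, reps = 2.0, 100, 100_000
xbar = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
y = rng.binomial(1, 1 / n, size=reps)
theta_hat = xbar + n * y
print(theta_hat.mean(), theta + 1)  # the bias is +1 for every n
```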
Best Answer
I can't comment (yet), so I'll add this as an answer.
I will assume that $\bar{X}_n =\frac{1}{n}\sum_{i=1}^n X_i$.
1) In this setting, consistency means that $\hat{\theta}_n\to \theta$ in probability. For a first hint, look at the weak law of large numbers: https://en.wikipedia.org/wiki/Law_of_large_numbers. Also note that it is easy to prove the following: if $(Z_n)$ and $(Y_n)$ are sequences of random variables that converge in probability to $Z$ and $Y$ respectively, then $(Z_n + Y_n)$ converges in probability to $Z+Y$. In your setting it should then be easy to show, directly from the definition, that the random variable:
$$W_n = \begin{cases} 0 & \text{with probability } 1 -1/n\\ n & \text{with probability } 1/n \end{cases}$$ converges to $0$ in probability. Together, these should allow you to answer the question.
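(For reference, the direct computation behind that last claim is one line: for any fixed $\epsilon > 0$ and all $n > \epsilon$, $\mathbb{P}(|W_n| > \epsilon) = \mathbb{P}(W_n = n) = \frac{1}{n} \to 0$.)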
2) Asymptotic unbiasedness requires that $\mathbb{E}(\hat{\theta}_n) - \theta \to 0$ as $n\to\infty$. Here, compute $\mathbb{E}(\hat{\theta}_n) - \theta$ and see what you can conclude about its limit as $n\to\infty$.
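If it helps to see both parts side by side, here is a hedged simulation sketch (assuming $X_i \sim N(\theta,1)$, so $\bar X \sim N(\theta, 1/n)$ can be sampled directly; $\theta = 2$ is arbitrary): as $n$ grows, $\hat{\theta}_n$ concentrates around $\theta$ even though its mean stays put.

```python
import numpy as np

# Consistency vs. asymptotic unbiasedness for theta_hat = Xbar + W_n.
rng = np.random.default_rng(4)
theta, reps = 2.0, 200_000
for n in (10, 100, 1000):
    xbar = rng.normal(theta, 1 / np.sqrt(n), size=reps)  # Xbar ~ N(theta, 1/n)
    w = n * rng.binomial(1, 1 / n, size=reps)            # the W_n defined above
    theta_hat = xbar + w
    # mean stays near theta + 1, yet P(|theta_hat - theta| < 0.1) -> 1
    print(n, theta_hat.mean(), np.mean(np.abs(theta_hat - theta) < 0.1))
```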