Given a sample $x\equiv \{x_i\}_{i=1}^n$, the likelihood is
$$
L(\theta\mid x)=\theta^{-n}1\{\theta\ge M(x),m(x)\ge 0\},
$$
where $M(x):=\lceil\max_{1\le i\le n}x_i\rceil$ and $m(x):=\min_{1\le i\le n}x_i$.
The indicator implies $\hat{\theta}_n(x)\ge M(x)$, since $L=0$ otherwise. On the other hand, the factor $\theta^{-n}$ is strictly decreasing in $\theta$, so any value larger than $M(x)$ decreases $L$ (assuming $m(x)>0$). Thus, $\hat{\theta}_n(x)= M(x)$.
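A quick numerical sketch of this argument, under the setting that motivates the ceiling in $M(x)$: $\theta$ ranges over the positive integers while each $X_i$ is continuous on $(0,\theta)$, so the likelihood $\theta^{-n}$ is maximized at the smallest admissible integer, $\lceil\max_i x_i\rceil$. The integer-parameter reading and the function name are assumptions of this sketch, not stated in the text above.

```python
import math

def mle_uniform_int(x):
    """MLE of an integer-valued theta when X_i ~ Uniform(0, theta):
    L(theta) = theta**(-n) * 1{theta >= max(x)} is decreasing in theta,
    so the smallest admissible integer, ceil(max(x)), maximizes it."""
    assert min(x) > 0, "likelihood is zero unless all observations are positive"
    return math.ceil(max(x))

print(mle_uniform_int([2.3, 0.7, 4.1]))  # -> 5
```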
Under certain regularity conditions (the standard ones for asymptotic normality of MLEs), maximum likelihood estimators are asymptotically normal. In particular, distributions that belong to a regular exponential family satisfy these conditions.
For $Y_i=\log X_i$, the joint density of $Y_1,\ldots,Y_n$ is
\begin{align}
f_{\theta}(y_1,\ldots,y_n)&=\frac{1}{(\sqrt{2\theta\pi})^n}\exp\left[-\frac{1}{2\theta}\sum_{i=1}^n (y_i-\theta)^2\right]
\\&=\frac{1}{(\sqrt{2\theta\pi})^n}\exp\left[-\frac{1}{2\theta}\sum_{i=1}^n y_i^2+\sum_{i=1}^n y_i-\frac{n\theta}{2}\right],\quad (y_1,\ldots,y_n)\in\mathbb R^n,\ \theta>0.
\end{align}
This shows that $f_{\theta}$ is a member of a regular one-parameter exponential family. So we can say that the MLE $\hat\theta$ of $\theta$ has an asymptotic normal distribution, given by
$$\sqrt n(\hat\theta-\theta)\stackrel{L}\longrightarrow N\left(0,\frac{1}{I_{Y_1}(\theta)}\right)\,,$$
where $I_{Y_1}(\theta)=-E_{\theta}\left[\frac{\partial^2}{\partial\theta^2}\ln f_{\theta}(Y_1)\right]$ is the information contained in $Y_1$.
A routine calculation gives $I_{Y_1}(\theta)$: differentiating the log-density twice,
$$\frac{\partial}{\partial\theta}\ln f_{\theta}(y)=-\frac{1}{2\theta}+\frac{y-\theta}{\theta}+\frac{(y-\theta)^2}{2\theta^2},\qquad
\frac{\partial^2}{\partial\theta^2}\ln f_{\theta}(y)=\frac{1}{2\theta^2}-\frac{y}{\theta^2}-\frac{y-\theta}{\theta^2}-\frac{(y-\theta)^2}{\theta^3},$$
and taking $-E_{\theta}[\,\cdot\,]$ with $E_{\theta}Y_1=\theta$ and $E_{\theta}(Y_1-\theta)^2=\theta$ yields $I_{Y_1}(\theta)=\frac{1}{\theta}+\frac{1}{2\theta^2}=\frac{2\theta+1}{2\theta^2}$. The limiting distribution therefore becomes
$$\sqrt n(\hat\theta-\theta)\stackrel{L}\longrightarrow N\left(0,\frac{2\theta^2}{2\theta+1}\right)\,.$$
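This limit can be checked by simulation. For $Y_i\sim N(\theta,\theta)$, setting the score to zero reduces to $\hat\theta^2+\hat\theta=\frac1n\sum_i Y_i^2$, so the MLE has a closed form as the positive root of that quadratic (this algebraic reduction is derived here, not stated above). A sketch comparing the empirical variance of $\sqrt n(\hat\theta-\theta)$ with $\frac{2\theta^2}{2\theta+1}$:

```python
import math
import random

def mle_theta(y):
    # Positive root of t**2 + t = mean(y_i**2), obtained by setting the
    # score of the N(theta, theta) log-likelihood to zero (an assumed
    # reduction, not part of the quoted derivation).
    m2 = sum(v * v for v in y) / len(y)
    return (-1.0 + math.sqrt(1.0 + 4.0 * m2)) / 2.0

random.seed(0)
theta, n, reps = 1.0, 200, 2000
zs = []
for _ in range(reps):
    y = [random.gauss(theta, math.sqrt(theta)) for _ in range(n)]
    zs.append(math.sqrt(n) * (mle_theta(y) - theta))

mean_z = sum(zs) / reps
var_z = sum((z - mean_z) ** 2 for z in zs) / reps
print(var_z, 2 * theta**2 / (2 * theta + 1))  # empirical vs asymptotic variance
```

At $\theta=1$ the asymptotic variance is $2/3$, and the empirical variance of the standardized MLE should land close to it.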
Best Answer
$1.$ The likelihood $f(x_1,\dots,x_n;\theta)$ of the sample $X_1,\dots,X_n$ is equal to $$f_{X_1,\dots,X_n}(x_1,\dots,x_n;\theta)=f_{X_1}(x_1;\theta)\dots f_{X_n}(x_n;\theta)=\left(\frac{\theta}{2}\right)^{\sum_{i=1}^n|X_i|}(1-\theta)^{n-\sum_{i=1}^n|X_i|}$$ Take the $\ln$ to simplify $$\ln f(x_1,\dots,x_n;\theta)=\sum_{i=1}^n|X_i|\cdot\ln\left(\frac{\theta}{2}\right)+\left(n-\sum_{i=1}^n|X_i|\right)\cdot\ln (1-\theta)$$ Differentiate with respect to $\theta$ $$\frac{d}{d\theta}\ln f(x_1,\dots,x_n;\theta)=\sum_{i=1}^n|X_i|\cdot \frac{1}{\theta}+\left(n-\sum_{i=1}^n|X_i|\right)\cdot\frac{1}{\theta-1}$$ and set the derivative equal to $0$ to find the $\hat \theta$ that maximizes the likelihood \begin{align}\sum_{i=1}^n|X_i|\cdot \frac{1}{\hat θ}+\left(n-\sum_{i=1}^n|X_i|\right)\cdot\frac{1}{\hat\theta-1}&=0\iff\\(\hat\theta-1)\sum_{i=1}^n|X_i|+\left(n-\sum_{i=1}^n|X_i|\right)\cdot\hat\theta&=0\iff \\[0.2cm]\hat \theta&=\frac1n\sum_{i=1}^n|X_i|\end{align}
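The stationary point can be confirmed numerically: a sketch that evaluates the log-likelihood from step $1$ on a grid and checks that it peaks at the sample mean of the $|x_i|$ (the sample below is made up for illustration).

```python
import math

def loglik(theta, x):
    # Log-likelihood from step 1: s*ln(theta/2) + (n - s)*ln(1 - theta),
    # where s = sum of |x_i| and each |x_i| is 0 or 1.
    s = sum(abs(v) for v in x)
    return s * math.log(theta / 2) + (len(x) - s) * math.log(1 - theta)

x = [1, 0, -1, 0, 1, 1, 0, 0, -1, 0]           # hypothetical sample in {-1, 0, 1}
theta_hat = sum(abs(v) for v in x) / len(x)    # closed-form MLE = 0.5
grid_best = max((k / 1000 for k in range(1, 1000)), key=lambda t: loglik(t, x))
print(theta_hat, grid_best)  # -> 0.5 0.5
```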
$2.$ Now you can calculate the variance of $\hat\theta$: since $\hat\theta$ is a function of the independent random variables $X_i,\ i=1,\dots,n$, it is a random variable itself,
\begin{align}Var(\hat\theta)&=Var\left(\frac1n\sum_{i=1}^n|X_i|\right)\\[0.2cm]&=\frac1{n^2}\sum_{i=1}^nVar\left(|X_i|\right)=\frac1{n^2}\sum_{i=1}^n\left(E|X_i|^2-\left(E|X_i|\right)^2\right)\\[0.2cm]&=\frac{1}{n^2}\sum_{i=1}^n\left(\left(1^2\cdot\theta+0^2\cdot(1-\theta)\right)-\left(1\cdot\theta+0\cdot(1-\theta)\right)^2\right)\\[0.2cm]&=\frac{1}{n^2}\sum_{i=1}^n\left(\theta-\theta^2\right)=\frac{1}{n^2}n\theta(1-\theta)=\frac{1}{n}\theta(1-\theta)\end{align}
$3.$ Now, let $n\to+\infty$ to find the asymptotic variance of the MLE, i.e. the variance when you take a sample of size $n$ with $n\to+\infty$ (a huge sample) which is $$\lim_{n\to+\infty}Var(\hat\theta)=\lim_{n\to+\infty}\frac{1}{n}\theta(1-\theta)=0$$
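As a final check, the finite-sample variance $\frac1n\theta(1-\theta)$ can be verified by simulation. The sketch assumes the model behind the answer, $P(X_i=\pm1)=\theta/2$ and $P(X_i=0)=1-\theta$; the even split between $\pm1$ is an assumption that does not affect $|X_i|$.

```python
import random

random.seed(1)
theta, n, reps = 0.3, 100, 5000

def draw():
    # X in {-1, 0, 1} with P(|X| = 1) = theta, split evenly between +-1.
    u = random.random()
    if u < theta / 2:
        return -1
    if u < theta:
        return 1
    return 0

estimates = [sum(abs(draw()) for _ in range(n)) / n for _ in range(reps)]
m = sum(estimates) / reps
v = sum((e - m) ** 2 for e in estimates) / reps
print(v, theta * (1 - theta) / n)  # simulated variance vs theta*(1-theta)/n
```

With $\theta=0.3$ and $n=100$ the target variance is $0.0021$, and it shrinks to $0$ as $n\to+\infty$, matching the limit above.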