In general, note that maximum likelihood estimators are not necessarily unbiased.
I'm not familiar with Lebesgue integration, but hopefully non-measure-theoretic tools will suffice to show this.
First of all, observe that
$$\mathbb{E}[X_1]=\dfrac{2}{\theta^2}\int_{0}^{\theta}x^2\text{ d}x=\dfrac{2}{\theta^2}\cdot\dfrac{\theta^3}{3}=\dfrac{2\theta}{3}\text{.}$$
Thus, the estimator
$$\hat{\theta}=\dfrac{3}{2n}\sum_{i=1}^{n}X_i$$
is unbiased for $\theta$, since
$$\mathbb{E}[\hat{\theta}]=\dfrac{3}{2n}\sum_{i=1}^{n}\mathbb{E}[X_i]=\dfrac{3}{2n}\cdot \dfrac{2\theta}{3}\cdot n = \theta\text{.}$$
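As a quick sanity check (not part of the argument itself), a short Monte Carlo simulation can confirm the unbiasedness. The sampler below assumes the density $f(x) = 2x/\theta^2$ on $[0, \theta]$ implied by the expectation computed above, and draws from it via the inverse CDF $F(x) = x^2/\theta^2$, i.e. $X = \theta\sqrt{U}$:

```python
import random

def sample_x(theta, rng):
    # Inverse-CDF sampling for f(x) = 2x/theta^2 on [0, theta]:
    # F(x) = x^2/theta^2, so X = theta * sqrt(U) with U ~ Uniform(0, 1).
    return theta * rng.random() ** 0.5

def theta_hat(xs):
    # The proposed unbiased estimator: (3 / 2n) * sum of the X_i
    n = len(xs)
    return (3.0 / (2.0 * n)) * sum(xs)

rng = random.Random(0)
theta = 5.0
n = 50
reps = 20_000

estimates = [theta_hat([sample_x(theta, rng) for _ in range(n)])
             for _ in range(reps)]
mean_est = sum(estimates) / reps  # should be close to theta = 5
print(mean_est)
```

The average of the estimates over many replications should settle near the true $\theta$, consistent with $\mathbb{E}[\hat{\theta}] = \theta$.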
I don't follow your solution, so I'll work it out from scratch here.
Assume $\theta > 0$. Setting $y_i = |x_i|$ for $i = 1, \dots, n$, we have
$$\begin{align}
L(\theta)=\prod_{i=1}^{n}f_{X_i}(x_i)&=\prod_{i=1}^{n}\left(\dfrac{1}{2\theta}\right)\mathbb{I}_{[-\theta, \theta]}(x_i) \\
&=\left(\dfrac{1}{2\theta}\right)^n\prod_{i=1}^{n}\mathbb{I}_{[-\theta, \theta]}(x_i) \\
&= \left(\dfrac{1}{2\theta}\right)^n\prod_{i=1}^{n}\mathbb{I}_{[0, \theta]}(|x_i|) \\
&= \left(\dfrac{1}{2\theta}\right)^n\prod_{i=1}^{n}\mathbb{I}_{[0, \theta]}(y_i)\text{.}
\end{align}$$
Assume that $y_i \in [0, \theta]$ for all $i = 1, \dots, n$ (otherwise $L(\theta) = 0$ because $\mathbb{I}_{[0, \theta]}(y_j) = 0$ for at least one $j$, which obviously does not yield the maximum value of $L$). Then I claim the following:
Claim. $y_1, \dots, y_n \in [0, \theta]$ if and only if $\max_{1 \leq i \leq n}y_i = y_{(n)} \leq \theta$ and $\min_{1 \leq i \leq n}y_i = y_{(1)}\geq 0$.
I leave the proof up to you. From the claim above and observing that $y_{(1)} \leq y_{(n)}$, we have
$$L(\theta) = \left(\dfrac{1}{2\theta}\right)^n\prod_{i=1}^{n}\mathbb{I}_{[0, \theta]}(y_i) = \left(\dfrac{1}{2\theta}\right)^n\mathbb{I}_{[0, y_{(n)}]}(y_{(1)})\mathbb{I}_{[y_{(1)}, \theta]}(y_{(n)}) \text{.}$$
Viewing this as a function of $\theta > 0$, we see that $\left(\dfrac{1}{2\theta}\right)^n$ is decreasing with respect to $\theta$. Thus, $\theta$ needs to be as small as possible to maximize $L$. Furthermore, the product of indicators
$$\mathbb{I}_{[0, y_{(n)}]}(y_{(1)})\mathbb{I}_{[y_{(1)}, \theta]}(y_{(n)}) $$
will be non-zero if and only if $\theta \geq y_{(n)}$. Since $y_{(n)}$ is therefore the smallest admissible value of $\theta$, we have
$$\hat{\theta}_{\text{MLE}} = y_{(n)} = \max_{1 \leq i \leq n} y_i = \max_{1 \leq i \leq n }|x_i|\text{,}$$
as desired.
Best Answer
You are correct that $\mathbf E_\theta X_i = a_i\theta$. Thus $\mathbf E_\theta \bar X_n = \bar a_n \theta $ where $\bar a_n = \frac{1}{n}\sum_{i = 1}^n a_i$.
The method of moments now says to determine $\hat \theta $ such that $\bar X_n = \mathbf E_{\hat\theta} \bar X_n = \bar a_n \hat\theta$, thus $\hat\theta = \frac{\bar X_n}{\bar a_n}$.
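The source does not specify the distribution of the $X_i$, only the mean structure $\mathbf E_\theta X_i = a_i\theta$, so the simulation below assumes a hypothetical normal model $X_i \sim \mathcal N(a_i\theta,\, 0.1^2)$ and hypothetical constants $a_i$ purely for illustration. It checks that the method-of-moments estimator $\hat\theta = \bar X_n / \bar a_n$ (with $\bar a_n$ the average of the $a_i$) recovers $\theta$ on average:

```python
import random

rng = random.Random(2)
theta = 2.5
a = [0.5, 1.0, 1.5, 2.0]   # known constants a_i (hypothetical values)
n = len(a)
a_bar = sum(a) / n          # average of the a_i, matching the sample mean X_bar

reps = 20_000
estimates = []
for _ in range(reps):
    # Hypothetical model with E[X_i] = a_i * theta (normal noise is an assumption)
    xs = [rng.gauss(a_i * theta, 0.1) for a_i in a]
    x_bar = sum(xs) / n
    estimates.append(x_bar / a_bar)  # method-of-moments estimator

mean_est = sum(estimates) / reps  # should be close to theta = 2.5
```

Since $\mathbf E_\theta \bar X_n = \bar a_n \theta$ regardless of the noise distribution, the average of the estimates should concentrate near the true $\theta$.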