Exercise:
Calculate a Maximum Likelihood Estimator for the model $X_1,\dots, X_n \; \sim U(-\theta,\theta)$.
Solution:
The probability density function $f(x)$ for the given uniform model is:
$$f(x) = \begin{cases} \dfrac{1}{2\theta}, & -\theta \leq x \leq \theta \\ 0, & \text{elsewhere} \end{cases}$$
Thus, we can calculate the likelihood function as:
$$\begin{align} L(\theta) &= \bigg(\frac{1}{2\theta}\bigg)^n\prod_{i=1}^n\mathbb I_{[-\theta,\theta]}(x_i)= \bigg(\frac{1}{2\theta}\bigg)^n\prod_{i=1}^n \mathbb I_{[0,\theta]}(|x_i|) \\ &= \bigg(\frac{1}{2\theta}\bigg)^n\prod_{i=1}^n \mathbb I_{(-\infty,\theta]}(|x_i|)\prod_{i=1}^n \mathbb I_{[0, +\infty)}(|x_i|) \\ &= \boxed{\bigg(\frac{1}{2\theta}\bigg)^n\prod_{i=1}^n \mathbb I_{(-\infty,\theta]}(\max|x_i|)} \end{align}$$
Question: How does one derive the final, boxed expression from the one before it? I can't see why the two are equal.
Other than that: to maximize the likelihood one wants $\theta$ as small as possible, subject to the constraint $\max |x_i| \leq \theta$, which means that the MLE is $\hat{\theta} = \max |x_i|$.
Best Answer
I don't understand your solution, so I'm doing it myself here.
Assume $\theta > 0$. Setting $y_i = |x_i|$ for $i = 1, \dots, n$, we have
$$\begin{align} L(\theta)=\prod_{i=1}^{n}f_{X_i}(x_i)&=\prod_{i=1}^{n}\left(\dfrac{1}{2\theta}\right)\mathbb{I}_{[-\theta, \theta]}(x_i) \\ &=\left(\dfrac{1}{2\theta}\right)^n\prod_{i=1}^{n}\mathbb{I}_{[-\theta, \theta]}(x_i) \\ &= \left(\dfrac{1}{2\theta}\right)^n\prod_{i=1}^{n}\mathbb{I}_{[0, \theta]}(|x_i|) \\ &= \left(\dfrac{1}{2\theta}\right)^n\prod_{i=1}^{n}\mathbb{I}_{[0, \theta]}(y_i)\text{.} \end{align}$$ Assume that $y_i \in [0, \theta]$ for all $i = 1, \dots, n$ (otherwise $L(\theta) = 0$ because $\mathbb{I}_{[0, \theta]}(y_j) = 0$ for at least one $j$, which obviously does not yield the maximum value of $L$). Then I claim the following:
$$\prod_{i=1}^{n}\mathbb{I}_{[0, \theta]}(y_i) = \mathbb{I}_{[0, y_{(n)}]}(y_{(1)})\,\mathbb{I}_{[y_{(1)}, \theta]}(y_{(n)})\text{,}$$ where $y_{(1)} = \min_{1 \leq i \leq n} y_i$ and $y_{(n)} = \max_{1 \leq i \leq n} y_i$ are the smallest and largest order statistics of the $y_i$.
I leave the proof up to you. From the claim above and observing that $y_{(1)} \leq y_{(n)}$, we have $$L(\theta) = \left(\dfrac{1}{2\theta}\right)^n\prod_{i=1}^{n}\mathbb{I}_{[0, \theta]}(y_i) = \left(\dfrac{1}{2\theta}\right)^n\mathbb{I}_{[0, y_{(n)}]}(y_{(1)})\mathbb{I}_{[y_{(1)}, \theta]}(y_{(n)}) \text{.}$$ Viewing this as a function of $\theta > 0$, we see that $\left(\dfrac{1}{2\theta}\right)^n$ is decreasing in $\theta$. Thus, $\theta$ needs to be as small as possible to maximize $L$. Furthermore, the product of indicators $$\mathbb{I}_{[0, y_{(n)}]}(y_{(1)})\mathbb{I}_{[y_{(1)}, \theta]}(y_{(n)}) $$ is non-zero if and only if $\theta \geq y_{(n)}$. Since $y_{(n)}$ is the smallest admissible value of $\theta$, we have $$\hat{\theta}_{\text{MLE}} = y_{(n)} = \max_{1 \leq i \leq n} y_i = \max_{1 \leq i \leq n }|x_i|\text{,}$$ as desired.
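As a quick numerical sanity check of this result (a sketch of my own, not part of the answer above; the sample size, seed, and grid bounds are arbitrary choices), one can simulate a sample, evaluate $L(\theta)$ on a fine grid, and confirm the maximizer sits at $\max |x_i|$:

```python
import numpy as np

# Hypothetical sample: n = 50 draws from U(-theta_true, theta_true).
rng = np.random.default_rng(0)
theta_true = 2.0
x = rng.uniform(-theta_true, theta_true, size=50)
y_max = np.abs(x).max()  # candidate MLE: max |x_i|

def likelihood(theta, x):
    """L(theta) = (1/(2*theta))^n if every |x_i| <= theta, else 0."""
    if theta > 0 and np.all(np.abs(x) <= theta):
        return (1.0 / (2.0 * theta)) ** len(x)
    return 0.0

# Evaluate L on a fine grid of candidate theta values around max |x_i|.
grid = np.linspace(0.5 * y_max, 2.0 * y_max, 10001)
theta_hat = grid[np.argmax([likelihood(t, x) for t in grid])]

print(theta_hat, y_max)
```

The grid maximizer lands (up to grid resolution) on $\max |x_i|$: the likelihood is identically zero for $\theta < \max|x_i|$ and strictly decreasing for $\theta \geq \max|x_i|$.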