Solved – When using the likelihood function, where does the indicator function come from

estimators, faq, likelihood, maximum-likelihood, sufficient-statistics

For finding sufficient statistics and MLEs, certain distributions, such as the uniform, require an indicator function. Where does it come from, in simple language? And for MLEs, if you use the indicator function, do you still need to take the log and derivative of the likelihood function to find the MLE?

Best Answer

Most families of distributions $f_\theta$ have a fixed support, $$\text{supp}(f_\theta)=\{x\in\mathcal{X};\ f_\theta(x)>0\}$$ like the Normal or Binomial distributions, but some have a parameter-dependent support, like the uniforms $\text{U}(0,\theta)$ or $\text{U}(-\theta,\theta)$. For such families, it is important to keep the support constraint in the likelihood, because it imposes hard constraints on $\theta$, which typically allows $\theta$ to be estimated more sharply. If one considers a sample $(x_1,\ldots,x_n)$ from the $\text{U}(-\theta,\theta)$ distribution, the likelihood is $$\prod_{i=1}^n \frac{1}{2\theta}\times\mathbb{I}_{[-\theta,\theta]}(x_i) =\frac{1}{(2\theta)^n}\times\prod_{i=1}^n \mathbb{I}_{-\theta\le x_i\le\theta}=\frac{1}{(2\theta)^n}\times\mathbb{I}_{\theta\ge\max_i |x_i|}$$ The indicator function at the end, $$\mathbb{I}_{\theta\ge\max_i |x_i|},$$ is essential: it restricts the set of possible $\theta$'s to $[\max_i |x_i|,\infty)$ rather than $(0,\infty)$. It thus determines the effective parameter set given the sample.
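As a quick numerical illustration (a minimal Python sketch, with an arbitrary simulated sample and a hypothetical `likelihood` helper), the likelihood vanishes for any $\theta$ below $\max_i |x_i|$ and is positive above it:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0
x = rng.uniform(-theta_true, theta_true, size=20)  # sample from U(-theta, theta)

def likelihood(theta, x):
    """Likelihood of U(-theta, theta): zero unless theta >= max|x_i|."""
    if theta < np.max(np.abs(x)):       # indicator I(theta >= max|x_i|) fails
        return 0.0
    return (2.0 * theta) ** (-len(x))   # (1 / (2 theta))^n on the admissible set

m = np.max(np.abs(x))                   # lower endpoint of the admissible thetas
print(likelihood(m - 0.01, x))          # 0.0: the support constraint is violated
print(likelihood(m, x))                 # positive: constraint satisfied
```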

Once the modified parameter set is determined, the likelihood function is defined on that set, where the indicator is identically equal to one, so differentiation applies only to the remaining factor. For instance, in the uniform example, $$\frac{\partial}{\partial\theta}\frac{1}{(2\theta)^n}=-\frac{n}{2^n\theta^{n+1}},$$ which is always negative, meaning the likelihood is strictly decreasing in $\theta$: the optimal value of $\theta$ is the smallest admissible one, i.e., the smallest value in the modified parameter set, $\hat\theta=\max_i |x_i|$. In particular, setting the log-likelihood derivative to zero yields no solution here; the MLE comes from the boundary imposed by the indicator, not from a first-order condition.
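To see this monotonicity numerically (again a sketch, with an arbitrary simulated sample), evaluating $(2\theta)^{-n}$ on a grid of admissible $\theta$'s confirms that the maximum sits at the left endpoint $\max_i |x_i|$:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2.0, 2.0, size=20)
n = len(x)
theta_hat = np.max(np.abs(x))           # candidate MLE from the support constraint

# Evaluate the likelihood over admissible thetas: it should decrease
# monotonically, so the maximum is attained at the left endpoint.
grid = np.linspace(theta_hat, theta_hat + 3.0, 1000)
lik = (2.0 * grid) ** (-n)

print(grid[np.argmax(lik)], theta_hat)  # both equal max|x_i|
assert np.all(np.diff(lik) < 0)         # likelihood strictly decreasing in theta
```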