[Math] Use maximum likelihood to estimate the parameter $\theta$ in the uniform pdf $f_Y(y;\theta) = \frac{1}{\theta}$, $0 \leq y \leq \theta$

parameter estimation, probability, statistics

(a)
Based on the random sample $Y_1 = 6.3$ , $Y_2 = 1.8$, $Y_3 = 14.2$, and $Y_4 = 7.6$, use the method of maximum likelihood to estimate the parameter $\theta$ in the uniform pdf

$f_Y(y;\theta) = \frac{1}{\theta}$, $0 \leq y \leq \theta$

(b) Suppose the random sample in part (a) represents the two-parameter uniform pdf

$f_Y(y;\theta_1, \theta_2) = \frac{1}{\theta_2 - \theta_1}$, $\theta_1 \leq y \leq \theta_2$

Find the maximum likelihood estimates for $\theta_1$ and $\theta_2$.

My attempt

(a) The likelihood function is given by
$L(\theta)= \prod_{i = 1}^{n}\frac{1}{\theta} = \theta^{-n}$

Take the natural log of both sides, so $\ln L(\theta) = -n \ln(\theta)$.
Take the derivative and set it equal to zero.
Thus, $\frac{d}{d\theta}\ln L(\theta) = -\frac{n}{\theta}$.
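
A quick symbolic check of this derivative (just a sketch, using sympy; not part of the textbook solution):

```python
import sympy as sp

# Symbolic check: d/d(theta) of ln L(theta) = -n ln(theta)
theta, n = sp.symbols('theta n', positive=True)
log_L = -n * sp.log(theta)
print(sp.diff(log_L, theta))  # prints -n/theta
```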

If we try to solve for the $\theta$ that makes this zero, it won't work, since $-\frac{n}{\theta}$ is never zero. So we have to make the value of $\theta$ as small as possible. So choose the value $\theta_e = y_{\min} = 1.8$.

(b) Since the range depends on $\theta_1$ and $\theta_2$, setting the derivative equal to zero won't work. So the maximum likelihood estimate is $\theta_{1e} = 14.2$ and $\theta_{2e} = 1.8$.

Can anyone please verify this? If it does not work, any feedback or hints would help.
Thank you in advance.

Best Answer

Finding MLEs for density functions whose domain depends on the parameter is usually not straightforward, in the sense that differentiating will not help.

Denote by $\textbf{1}_{[a,b]}(x)$ the indicator function meaning that $$\textbf{1}_{[a,b]}(x)= \begin{cases} 1 \mbox{ if } x\in [a,b] \\ 0 \mbox{ if not.}\end{cases}$$

The density function of a uniform random variable $U(0,\theta)$ is given by $$f_{\theta}(x) = \frac{1}{\theta} \textbf{1}_{[0,\theta]}(x).$$

Given a sample of $n$ random observations $Y_1,\dots, Y_n$ (it is convenient to write observations with lowercase letters and random variables with capital letters, so when I write capital $Y$'s it means that these are computations before we collect data), the likelihood function is given by: $$L(\theta|Y_1,\dots, Y_n)= \prod_{i=1}^n f_{\theta}(Y_i) = \frac{1}{\theta^n}\prod_{i=1}^n \textbf{1}_{[0,\theta]}(Y_i).$$

Now look at the product of indicator functions. We should try to write it as a function of $\theta$ instead of as a function of all the $Y_i$'s, in order to know exactly what the domain of $L$ is. The product is nonzero only if $0 \leq Y_i \leq \theta$ for all $i=1,\dots,n$; in other words, only if $\max_{i=1,\dots,n} Y_i \leq \theta$. So the likelihood is positive only for $\theta \geq \max_{i=1,\dots,n} Y_i$. That is, $$L(\theta|Y_1,\dots, Y_n)= \frac{1}{\theta^n} \textbf{1}_{[\max_{i=1,\dots,n} Y_i,\, \infty)}(\theta).$$
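
To make the shape of this likelihood concrete, here is a minimal Python sketch that evaluates $L(\theta)$ at a few trial values, using the sample from part (a) (the helper name `likelihood` is mine, just for illustration):

```python
# Sketch: L(theta) = theta^(-n) * 1[theta >= max(y_i)] for the sample in (a).
y = [6.3, 1.8, 14.2, 7.6]
n = len(y)

def likelihood(theta):
    # The product of indicators is 1 only when every y_i lies in [0, theta],
    # i.e. when theta >= max(y); otherwise the likelihood is 0.
    return theta ** (-n) if theta >= max(y) else 0.0

for theta in [10.0, 14.2, 20.0, 30.0]:
    print(f"L({theta}) = {likelihood(theta):.3e}")
# L(10.0) is 0 (theta below max(y)); from max(y) onward, L strictly decreases.
```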

Now, the function $1/\theta^n$ is strictly decreasing on $(0,\infty)$, so on the domain $[\max_{i} Y_i, \infty)$ its maximum is attained at the left endpoint. In conclusion, the MLE of $\theta$ is the maximum value of the sample of $Y_i$'s, which is quite intuitive: $$\hat{\theta}_{MLE} = \max_{i=1,\dots,n} Y_i.$$
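
Applied to the sample from part (a), this is a one-line computation (a sketch, assuming the data from the question):

```python
# The MLE for part (a) is simply the sample maximum.
y = [6.3, 1.8, 14.2, 7.6]
theta_hat = max(y)
print(theta_hat)  # 14.2
```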

Now try to reproduce the same computations and ideas when the uniform distribution depends on two parameters: $f_{\theta_1, \theta_2}(y) = \frac{1}{\theta_2 - \theta_1}$ for $\theta_1 \leq y \leq \theta_2$. Intuition says that if you have a sample $Y_1,\dots, Y_n$, then the MLEs of $\theta_1$ and $\theta_2$ should be: $$ \hat{\theta}_1 = \min_{i=1,\dots,n} Y_i \mbox{ and } \hat{\theta}_2 = \max_{i=1,\dots,n} Y_i.$$

Observe that the estimators are expressed with capital letters, so they are random: each time you collect a new sample you get different estimates. Hence, when you collect a sample of observations $y_1,\dots, y_n$ (lowercase letters), the estimates are: $$ \hat{\theta}_1 = \min_{i=1,\dots,n} y_i \mbox{ and } \hat{\theta}_2 = \max_{i=1,\dots,n} y_i.$$ (Just substitute the values you got in the exercise.)
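
For part (b) with the observed sample, the plug-in computation is (again, just a sketch):

```python
# Sketch for part (b): plug the observed sample into the two estimators.
y = [6.3, 1.8, 14.2, 7.6]
theta1_hat, theta2_hat = min(y), max(y)
print(theta1_hat, theta2_hat)  # 1.8 14.2
```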

I hope this helped! ;)