Likelihood ratio hypothesis test for the exponential distribution.

hypothesis testing, log likelihood, maximum likelihood, supremum-and-infimum

I am trying to understand the following logic, which I found on the internet, for a hypothesis test on the exponential distribution using the likelihood ratio test.

We want to test the following hypothesis using a likelihood ratio test.

Test $H_{0}: \theta = \theta_{0}$ against $H_{1}: \theta > \theta_{0}$

The null and alternative parameter spaces are thus defined as follows:

$\mathbf{\Theta_{0}} = \{\theta_{0}\}\,, \quad \mathbf{\Theta_{1}} = [\theta_{0}, \infty).$

The likelihood function is:

$L(\theta, x) = \prod_{i = 1}^{n} f(x_{i} ; \theta) = \theta^{n}e^{-\theta \sum{x_i}}$

The numerator of the likelihood ratio is:

$L(\theta_{0} ; x) = \theta_{0}^{n}e^{-n\theta_{0}\bar{x}}$

We need to find the supremum as $\theta$ ranges over the interval $[\theta_{0}, \infty)$. The log-likelihood is:

$l(\theta, x) = n\log(\theta) - n\theta\bar{x}$

So that:

$\frac{\partial l(\theta, x)}{\partial \theta} = \frac{n}{\theta} - n\bar{x}$

Which is zero only when $\theta = \frac{1}{\bar{x}}$. Since $L(\theta; x)$ is an increasing function for $\theta < \frac{1}{\bar{x}}$ and decreasing for $\theta > \frac{1}{\bar{x}}$,

we can say that the supremum of the set $\{L(\theta; x): \theta \in \Theta\}$ is the following:

$\sup\{L(\theta; x): \theta \in \Theta\} = \begin{cases}
\bar{x}^{-n}e^{-n}, & \text{if } 1/\bar{x} \ge \theta_{0},\\
\theta_{0}^{n}e^{-n\theta_{0}\bar{x}}, & \text{if } 1/\bar{x} < \theta_{0}.
\end{cases}$
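As a sanity check on the piecewise supremum above, here is a short Python sketch (the helper names `sup_likelihood` and `lrt_statistic` are my own, not from the original post) that evaluates both branches and confirms the resulting likelihood ratio never exceeds 1, and equals 1 exactly when $1/\bar{x} < \theta_0$:

```python
import numpy as np

def sup_likelihood(x, theta0):
    """Supremum of L(theta; x) over theta in [theta0, inf): use the
    unconstrained maximizer 1/xbar when it lies in the interval,
    otherwise the boundary point theta0."""
    n, xbar = len(x), np.mean(x)
    if 1.0 / xbar >= theta0:
        return xbar ** (-n) * np.exp(-n)             # interior maximum at 1/xbar
    return theta0 ** n * np.exp(-n * theta0 * xbar)  # boundary value at theta0

def lrt_statistic(x, theta0):
    """lambda(x) = L(theta0; x) / sup_{theta >= theta0} L(theta; x)."""
    n, xbar = len(x), np.mean(x)
    numerator = theta0 ** n * np.exp(-n * theta0 * xbar)
    return numerator / sup_likelihood(x, theta0)

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=20)   # simulated data, true theta = 1

# lambda is at most 1 by construction ...
assert 0 < lrt_statistic(x, theta0=0.5) <= 1
# ... and equals 1 when 1/xbar < theta0 (numerator and supremum coincide).
assert lrt_statistic(x, theta0=5.0) == 1.0
```

The `if` in `sup_likelihood` is exactly where the two cases of the displayed supremum come from.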

Why are there different values for the supremum depending on whether $\frac{1}{\bar{x}}$ is greater than or equal to $\theta_{0}$, or less than $\theta_{0}$? I think this relates to the previously mentioned statement that $L(\theta; x)$ is increasing for $\theta < \frac{1}{\bar{x}}$ and decreasing for $\theta > \frac{1}{\bar{x}}$, but why does that fact yield two different suprema depending on the value of $1/\bar{x}$?

Thanks in advance.

Best Answer

The density family has a monotone likelihood ratio. In fact,

$$\frac{L(\theta_0|\mathbf{x})}{L(\theta_1|\mathbf{x})}=\left(\frac{\theta_0}{\theta_1} \right)^n \cdot \exp\left\{ (\theta_1-\theta_0)\Sigma_i X_i \right\}$$

is evidently a monotone increasing function of $T = \sum_i X_i$.

Thus, applying the following theorem,

[The original answer showed the theorem as an image here: the Karlin-Rubin theorem, which gives a uniformly most powerful test for a one-sided hypothesis when the family has a monotone likelihood ratio in a statistic $T$.]

you can easily solve the problem using a chi-square distribution: under $H_{0}$, the statistic $2\theta_{0}\sum_i X_i$ follows a $\chi^{2}_{2n}$ distribution.
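To make that last step concrete, here is a hedged Python sketch (the function name `exp_rate_test` is my own, not from the answer). It relies on the standard fact that under $H_0$ the quantity $2\theta_0\sum_i X_i \sim \chi^2_{2n}$; since $H_1: \theta > \theta_0$ corresponds to smaller observations, we reject for small values of the statistic:

```python
import numpy as np
from scipy import stats

def exp_rate_test(x, theta0, alpha=0.05):
    """One-sided test of H0: theta = theta0 vs H1: theta > theta0 for
    exponential data. Under H0, 2*theta0*sum(X_i) ~ chi-square with 2n
    degrees of freedom; larger theta means smaller observations, so the
    evidence for H1 is a small value of the statistic."""
    n = len(x)
    t = 2.0 * theta0 * np.sum(x)
    p_value = stats.chi2.cdf(t, df=2 * n)   # lower-tail probability
    return p_value, p_value < alpha

rng = np.random.default_rng(2)
x = rng.exponential(scale=0.2, size=40)   # simulated data, true theta = 5
p, reject = exp_rate_test(x, theta0=1.0)
assert reject  # true rate is well above theta0, so we expect rejection
```

The lower-tail rejection region here matches the direction of the monotone likelihood ratio derived above.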
