Hypothesis Testing: show the LRT is a Chi-Square test

hypothesis-testing, maximum-likelihood, probability-distributions, statistical-inference, statistics

Let $(X_1,…,X_n)$ be a random sample with PDF $f(x;\theta) = \frac{x}{\theta}\exp(-x^2/(2\theta))$, $x > 0$, $\theta > 0$.

I want to show that the likelihood ratio test of $H_0 : \theta \le \theta_0$ against $H_1 : \theta > \theta_0$, where $\theta_0>0$ is given, is a chi-square test.

This gives the likelihood function $\displaystyle L(\theta) = \frac{\prod x_i}{\theta^n}\exp\left(-\sum x_i^2/(2\theta)\right)$

I am going to set $t = \prod X_i$ and $s = \sum X_i^2$, so we get $\displaystyle L(\theta) = \frac{t}{\theta^n}\exp(-s/(2\theta))$. The maximum $\max_{\theta > 0} L(\theta)$ occurs at $\theta = \frac{s}{2n}$.
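Spelling out that maximization (a step the question compresses): the log-likelihood and its stationary point are

$$\log L(\theta) = \log t - n\log\theta - \frac{s}{2\theta}, \qquad \frac{d}{d\theta}\log L(\theta) = -\frac{n}{\theta} + \frac{s}{2\theta^2} = 0 \iff \theta = \frac{s}{2n},$$

and the second derivative $\frac{n}{\theta^2} - \frac{s}{\theta^3}$ evaluated at $\theta = \frac{s}{2n}$ equals $-\frac{n}{\theta^2} < 0$, so this critical point is indeed the maximum.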

And $\max_{0 \le \theta \le \theta_0} L(\theta) = \begin{cases}
L(\frac{s}{2n})&\text{if }\theta_0 \ge \frac{s}{2n}\\
L(\theta_0)&\text{else}
\end{cases}$

Now we have

$$
\Lambda_{H_0} = \frac{\max_{0 \le \theta \le \theta_0} L(\theta)}{\max_{0 \le \theta } L(\theta)} = \begin{cases} 1 &\text{if } \theta_0 \ge \frac{s}{2n}\\ \bigg (\frac{s}{2n\theta_0}\bigg)^n\exp(n - s/(2\theta_0))&\text{else}
\end{cases}
$$
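As a numerical sanity check on both maximizations (my own addition, not part of the question; the sample values, seed, and use of `scipy.optimize.minimize_scalar` are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n, theta_true, theta0 = 8, 2.0, 1.0
# NumPy's rayleigh(scale=sigma) has density (x/sigma^2)exp(-x^2/(2 sigma^2)),
# so scale = sqrt(theta) matches the density in the question
x = rng.rayleigh(scale=np.sqrt(theta_true), size=n)
s = (x**2).sum()

def negloglik(theta):
    # -log L(theta) up to the additive constant -log(prod x_i)
    return n * np.log(theta) + s / (2 * theta)

# restricted maximum over (0, theta0] and (effectively) unrestricted maximum
r = minimize_scalar(negloglik, bounds=(1e-9, theta0), method="bounded")
u = minimize_scalar(negloglik, bounds=(1e-9, 100 * theta0), method="bounded")
lam_numeric = np.exp(u.fun - r.fun)

# closed form from above
theta_hat = s / (2 * n)
if theta0 >= theta_hat:
    lam_formula = 1.0
else:
    lam_formula = (s / (2 * n * theta0)) ** n * np.exp(n - s / (2 * theta0))

print(lam_numeric, lam_formula)  # the two values should agree closely
```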

Hopefully I have calculated both of those correctly. Now is where I run into my issue: I don't quite see how this is a chi-square test.

Best Answer

The given density is a Rayleigh. If a sufficient statistic exists, the test should be based on it.

It is very easy to verify, via the factorization theorem, that a sufficient statistic is $T=\sum_{i} X_i^2$.

Now let's derive the density of $Y=X^2$

Via the fundamental transformation theorem (change of variables) you find

$$f_Y(y)=\frac{\sqrt{y}}{\theta}e^{-\frac{y}{2\theta}}\cdot\frac{1}{2\sqrt{y}}=\frac{1}{2\theta}e^{-\frac{y}{2\theta}},$$

so $Y\sim \text{Exp}\left(\frac{1}{2\theta}\right)=\text{Gamma}\left(1;\frac{1}{2\theta}\right)$ (rate parameterization).

Now

$$\sum_i X_i^2 \sim Gamma (n;\frac{1}{2\theta})$$

And concluding...

$$\frac{1}{\theta}\sum_i X_i^2\sim \chi_{(2n)}^2$$
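A quick Monte Carlo check of this claim (my own addition; recall a $\chi^2_{2n}$ variable has mean $2n$ and variance $4n$):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000

# NumPy's rayleigh(scale=sigma) has density (x/sigma^2)exp(-x^2/(2 sigma^2)),
# so scale = sqrt(theta) matches the density f(x; theta) above
x = rng.rayleigh(scale=np.sqrt(theta), size=(reps, n))
stat = (x**2).sum(axis=1) / theta  # claimed to be chi-square with 2n df

print(stat.mean(), stat.var())  # expect approximately 2n = 10 and 4n = 20
```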

To find the critical region, first take any pair $\theta_0 < \theta_1$ and observe that

$$\frac{L(\theta_0|\mathbf{x})}{ L(\theta_1|\mathbf{x}) }\propto e^{(\frac{1}{2\theta_1}-\frac{1}{2\theta_0 })\sum_iX_i^2}$$

It is evident that the LR is a decreasing function of $T=\sum_iX_i^2$; that is, the family has a monotone likelihood ratio in $T$.

Now you can apply Theorem 9.6 from Mood, Graybill & Boes and define the critical region

$$C=\{\mathbf{x}:\sum_iX_i^2>k\}$$

obtaining a size-$\alpha$ UMP test for $\mathcal{H}_0:\theta \leq \theta_0$ against $\mathcal{H}_1:\theta > \theta_0$, using the chi-square distribution as shown above.
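For concreteness (my addition, with hypothetical values of $\theta_0$, $n$, $\alpha$): since $T/\theta_0 \sim \chi^2_{2n}$ under $\theta = \theta_0$, the size-$\alpha$ cutoff is $k = \theta_0\,\chi^2_{2n,\,1-\alpha}$, which can be computed e.g. with `scipy`:

```python
from scipy.stats import chi2

def critical_value(theta0: float, n: int, alpha: float = 0.05) -> float:
    """k with P_{theta0}(sum_i X_i^2 > k) = alpha, using T/theta0 ~ chi2(2n)."""
    return theta0 * chi2.ppf(1 - alpha, df=2 * n)

# e.g. theta0 = 1, n = 10, alpha = 0.05: the 95th percentile of chi2(20)
print(critical_value(1.0, 10))  # ≈ 31.41
```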
