F-test on two population variances where the F ratio is larger than 1


I am learning F-test for testing if two population variances are the same. I have two samples with sizes $n_1$ and $n_2$.

The null hypothesis is $H_0: \sigma_1^2 = \sigma_2^2$.

I am doing a two-tailed test, so the alternative is $H_1: \sigma_1^2 \neq \sigma_2^2$.

So the F ratio is $F=\frac{s_1^2}{s_2^2}$. Various notes say that we have to put the larger sample variance in the numerator, so that $s_1^2 > s_2^2$. But then it is clear that $F$ is always larger than $1$.
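The reordering convention described above can be sketched as follows. This is a minimal illustration with made-up data, not a full test; `statistics.variance` uses the $n-1$ divisor, i.e. the sample variance:

```python
import statistics

# two hypothetical samples (made-up data for illustration only)
sample1 = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4]
sample2 = [6.0, 7.2, 5.8, 8.1, 6.9, 7.5]

# statistics.variance uses the n-1 divisor (the sample variance)
s1_sq = statistics.variance(sample1)
s2_sq = statistics.variance(sample2)

# put the larger sample variance in the numerator, so F >= 1;
# the degrees of freedom are reordered in the same way
if s1_sq >= s2_sq:
    F, d1, d2 = s1_sq / s2_sq, len(sample1) - 1, len(sample2) - 1
else:
    F, d1, d2 = s2_sq / s1_sq, len(sample2) - 1, len(sample1) - 1

print(F, d1, d2)  # F is at least 1 by construction
```

With this convention the observed statistic is always compared against the upper tail of the $F_{d_1,d_2}$ distribution.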

Why does $F$ have an F-distribution with $n_1-1$ and $n_2-1$ degrees of freedom? The F-distribution takes values less than $1$, but with this convention $F$ can never be less than $1$.

Best Answer

The convention of reordering so that the ratio is at least $1$ comes from printed statistical tables, where making the ratio greater than $1$ saves half the space. Something similar happened with tables of the standard normal distribution, where $\Phi(x)$ was only given for positive $x$.

If $F(x,d_1,d_2)$ is the cumulative distribution function of an $F$ distribution with $d_1$ and $d_2$ degrees of freedom, then $$F\left(\frac1x,d_2,d_1\right)=1- F(x,d_1,d_2)$$ and, for example, in R `pf(11,8,3)` gives 0.963 while `pf(1/11,3,8)` gives 0.037. One value implies the other, so you do not need both in printed tables.
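The identity can also be checked without R. A minimal Python sketch (standard library only; the Simpson-rule integration is my own approximation of the CDF, not a library call) integrates the F density from $0$ to $x$:

```python
import math

def f_pdf(x, d1, d2):
    # density of the F distribution with d1, d2 degrees of freedom
    if x <= 0:
        return 0.0
    log_b = math.lgamma(d1 / 2) + math.lgamma(d2 / 2) - math.lgamma((d1 + d2) / 2)
    log_num = 0.5 * (d1 * math.log(d1 * x) + d2 * math.log(d2)
                     - (d1 + d2) * math.log(d1 * x + d2))
    return math.exp(log_num - log_b) / x

def f_cdf(x, d1, d2, n=100_000):
    # Simpson's rule approximation of the CDF: integrate the density on [0, x]
    h = x / n
    s = f_pdf(0.0, d1, d2) + f_pdf(x, d1, d2)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f_pdf(i * h, d1, d2)
    return s * h / 3

# the two tail probabilities from the answer sum to 1
print(f_cdf(11, 8, 3), f_cdf(1 / 11, 3, 8))
```

The two printed values match the `pf(11,8,3)` and `pf(1/11,3,8)` outputs quoted above and sum to $1$, illustrating the identity.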

The reason that $d_1=n_1-1$ and $d_2=n_2-1$ is the same as the reason for the reduced degrees of freedom in the Student $t$-test. In a hand-waving sense, you used the sample means in place of the population means when calculating the sample variances; they are not the same, so each of those calculations used up one degree of freedom.
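A quick simulation can make the degrees-of-freedom claim concrete. This sketch assumes normal populations with equal variance; the sample sizes $n_1=4$, $n_2=9$ are arbitrary choices. The mean of an $F_{d_1,d_2}$ distribution is $d_2/(d_2-2)$ (for $d_2>2$), and the simulated ratio of sample variances reproduces it with $d_1=n_1-1$, $d_2=n_2-1$:

```python
import random
import statistics

random.seed(1)
n1, n2 = 4, 9            # arbitrary sample sizes, so d1 = 3 and d2 = 8
d2 = n2 - 1
ratios = []
for _ in range(20_000):
    # draw both samples from the same normal population (H0 is true)
    a = [random.gauss(0.0, 1.0) for _ in range(n1)]
    b = [random.gauss(0.0, 1.0) for _ in range(n2)]
    ratios.append(statistics.variance(a) / statistics.variance(b))

mean_F = statistics.fmean(ratios)
# theoretical mean of an F(d1, d2) distribution is d2 / (d2 - 2)
print(mean_F, d2 / (d2 - 2))
```

Using $n_2$ instead of $n_2-1$ for $d_2$ would predict a noticeably different mean, which is one informal way to see that a degree of freedom really is lost.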
