Solved – Significance test in linear regression with only one predictor

hypothesis-testing, regression

As far as I know, I have two options for tests in linear regression: the $F$-test for the model (whether it explains more variance than the error variance) and the $t$-test (whether the slope differs from zero). With more than one predictor, I can see why both tests exist. But in my case, I have only one predictor. It seems to me that the $F$-test and the $t$-test do the same thing here, because a model with zero slope is exactly the null model the $F$-test compares against.

Where is the flaw in my logic?

Best Answer

No flaw. If your model is $y=\alpha + \beta x + \varepsilon$, the t-statistic for the slope is distributed as $t_{n-2}$, where $n$ is the number of observations and $2$ is the number of parameters (intercept and slope), while the F-statistic is distributed as $F_{1,n-2}$.

In general, if $X\sim t_{n}$, then $X^2\sim F_{1,n}$. This is why the F-statistic equals the squared t-statistic (the one for the slope) and their p-values are equal. In this sense the t-test and the F-test "do the same".
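You can verify this numerically. The following is a minimal sketch (assuming simulated data with an arbitrary seed and sample size) that fits a simple regression by least squares, computes both statistics by hand, and checks that $F = t^2$ and the p-values coincide:

```python
import numpy as np
from scipy import stats

# Simulate data from y = alpha + beta*x + noise (values are arbitrary)
rng = np.random.default_rng(0)
n = 30
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(size=n)

# Least-squares fit of y = alpha + beta*x
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
rss = resid @ resid            # residual sum of squares
sigma2 = rss / (n - 2)         # error variance estimate, df = n - 2

# t-statistic for the slope and its two-sided p-value
se_slope = np.sqrt(sigma2 / np.sum((x - x.mean()) ** 2))
t_stat = beta_hat[1] / se_slope
p_t = 2 * stats.t.sf(abs(t_stat), df=n - 2)

# F-statistic for the overall model (1 numerator df, n - 2 denominator df)
tss = np.sum((y - y.mean()) ** 2)
f_stat = (tss - rss) / sigma2
p_f = stats.f.sf(f_stat, 1, n - 2)

print(np.isclose(t_stat ** 2, f_stat))  # True: F equals the squared t
print(np.isclose(p_t, p_f))             # True: identical p-values
```

With a single predictor the two tests are algebraically identical, which is exactly the relationship $X\sim t_{n} \Rightarrow X^2\sim F_{1,n}$ stated above.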