Linear Constraints in Regression Model (Self Study)

hypothesis-testing · regression · statistical-inference

I ran an F-test on a very simple linear regression model to test whether two coefficients have the same effect on the dependent variable $y$:

I will call the model below the Original Regression:

$y_t=\hat\beta_1+\hat\beta_2x_2+\hat\beta_3x_3$

and this is the gretl output of my regression:

[image: gretl output of the Original Regression]

This is the main idea of what I'm testing:

$H_0: \beta_2=\beta_3$

$H_1: \beta_2\neq\beta_3$

My professor told me that to run this type of test I have to rearrange my model into an

Equivalent Regression:

$y_t=\hat\beta_1+(\hat\beta_2-\hat\beta_3)x_2+\hat\beta_3(x_2+x_3)$

so now I can express the equivalent hypothesis system:

$H_0: \beta_2-\beta_3=0$

$H_1: \beta_2-\beta_3\neq0$
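As a sanity check, the Original and Equivalent Regressions fit identically, and their coefficients map into each other exactly as in the rearrangement above. A minimal sketch with made-up data (the question's actual dataset is not shown):

```python
import numpy as np

# Hypothetical data; the question's actual dataset is not shown.
rng = np.random.default_rng(0)
x2 = rng.normal(size=5)
x3 = rng.normal(size=5)
y = 1.0 + 2.0 * x2 + 2.0 * x3 + rng.normal(scale=0.5, size=5)

# Original Regression: y on [1, x2, x3]
X_orig = np.column_stack([np.ones(5), x2, x3])
b_orig, *_ = np.linalg.lstsq(X_orig, y, rcond=None)

# Equivalent Regression: y on [1, x2, x2 + x3]
X_equiv = np.column_stack([np.ones(5), x2, x2 + x3])
b_equiv, *_ = np.linalg.lstsq(X_equiv, y, rcond=None)

# The reparameterisation maps (b1, b2, b3) -> (b1, b2 - b3, b3),
# so the coefficient on x2 in the Equivalent Regression is b2 - b3.
print(np.allclose(b_equiv[1], b_orig[1] - b_orig[2]))  # True
print(np.allclose(b_equiv[2], b_orig[2]))              # True
```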

the F-statistic:

$F=\frac{[ER]RSS-[OR]RSS}{m}\cdot\frac{T-k}{[OR]RSS}$

$m$ is the number of restrictions: in this case $m=1$
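The formula is straightforward to code as a helper; a minimal sketch, using the RSS values that appear later in the question:

```python
def f_stat(er_rss, or_rss, m, T, k):
    """F = ((ER_RSS - OR_RSS) / m) * ((T - k) / OR_RSS)."""
    return (er_rss - or_rss) / m * (T - k) / or_rss

# With the unrestricted RSS = 6.25, T = 5, k = 3 and m = 1 from the question:
print(round(f_stat(29.043, 6.25, 1, 5, 3), 4))  # 7.2938
```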

Now I have the gretl output for my test:

[image: gretl output of the restricted regression and F-test]

From this point on I tried to reproduce gretl's test result by hand, but I can't get the same number.

Here is my attempt. The only thing I need to run the test is $[ER]RSS$.

The second gretl output gives me the S.E. of the regression, so $S=3.11144$.

Since I know that the estimated variance of the regression is $S^2=\frac{[ER]RSS}{T-k}$, with

$T-k=5-3=2$

I solved and obtained

$[ER]RSS=19.36211$

so the F-statistic:

$F=\frac{19.36211-6.25}{1}\cdot\frac{5-3}{6.25}=4.1959$

This is different from the gretl output of $F=7.29383$!

gretl seems to use a variant $S^2=\frac{RSS}{T-k+1}$: solving with this you obtain $[ER]RSS=29.043$, and computing the F-statistic again with that value reproduces gretl's result, $F=7.29383$.
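The $T-k+1$ is no mystery: it equals $T-(k-m)$, the residual degrees of freedom of the restricted model, which estimates one coefficient fewer than the original. A quick numeric check of both conversions:

```python
S = 3.11144          # S.E. of regression reported by gretl for the restricted model
T, k, m = 5, 3, 1
or_rss = 6.25        # RSS of the unrestricted (Original) regression

# Using the unrestricted df (T - k = 2) gives the 19.362 above...
wrong_rss = S**2 * (T - k)

# ...but the restricted model estimates only k - m = 2 coefficients, so df = T - 2 = 3
er_rss = S**2 * (T - (k - m))   # ~ 29.043

F = (er_rss - or_rss) / m * (T - k) / or_rss   # ~ 7.2938, matching gretl
```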

My questions are:

(1) Where am I doing the test wrong?

(2) How do I get by hand the same coefficients as in the gretl output? I mean $\hat\beta_2=1.59549=\hat\beta_3$?

Thank You

Best Answer

I hope that I understand your questions correctly.

  1. The most frequently used estimator of the variance is the unbiased estimator, i.e., $$ S^2_{\epsilon} = \frac{\sum_{i=1}^T(y_i - \hat{y}_i)^2}{T-k}, $$ where $k$ is the number of estimated coefficients including the intercept. I.e., in a model like $y = \alpha_0 + \alpha_1 x_1 + \alpha_2 x_2$, $k=3$.

  2. If your hypothesis is $H_0: \beta_2 = \beta_3$, then you can re-express the original model $y = \beta_1 + \beta_2 x_1 + \beta_3 x_2$ as $$ y = \beta_1 + \beta_2 x_1 + \beta_2 x_2 = \beta_1 +\beta_2(x_1 + x_2)=\beta_1+\beta_2{x^*}, $$ i.e., you estimate $\beta_2$ as the coefficient of $x^*$. You can use the simple OLS estimator $$ \hat{\beta}_2=\frac{\sum_{i=1}^T(y_i - \bar{y})(x^*_i - \bar{x}^*)}{\sum_{i=1}^T(x^*_i - \bar{x}^*)^2}, $$ which in the restricted model is the common coefficient of both $x_1$ and $x_2$.
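Putting both points together, here is a minimal end-to-end sketch with made-up data (the question's dataset is not shown): fit the unrestricted and restricted models, recover $\hat\beta_2$ from the simple-OLS formula above, and form the F-statistic.

```python
import numpy as np

# Hypothetical data with T = 5 observations; not the question's dataset.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0])
y  = np.array([3.1, 4.9, 9.2, 10.8, 15.3])
T, k, m = len(y), 3, 1

def ols(X, y):
    """Return (coefficients, RSS) from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, resid @ resid

# Unrestricted model: y = b1 + b2*x1 + b3*x2
_, or_rss = ols(np.column_stack([np.ones(T), x1, x2]), y)

# Restricted model (beta2 = beta3): regress y on x* = x1 + x2
xs = x1 + x2
beta_r, er_rss = ols(np.column_stack([np.ones(T), xs]), y)

# The restricted slope equals the simple-OLS formula from point 2
beta2_hat = ((y - y.mean()) @ (xs - xs.mean())) / ((xs - xs.mean()) @ (xs - xs.mean()))
print(np.isclose(beta2_hat, beta_r[1]))  # True

F = (er_rss - or_rss) / m * (T - k) / or_rss
```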
