R Regression – How to Test Linear Restriction

Tags: econometrics, r, regression

I would like to test a linear restriction in R. Instead of the usual $\beta_i=0$, I want to test if $\beta_k=0.5$ and $\beta_j=-0.5$.

Is there a way to do this using the lm command, just by writing a different formula for the model that lm fits?

I was thinking of writing $y-0.5x_k+0.5x_j=\sum_{i\neq j,k} x_i\beta_i+(\beta_k-0.5)x_k+(\beta_j+0.5)x_j$. However, I'm not sure how to code this reparametrization…

P.S.: Would writing $y - I(-0.5x_k + 0.5x_j) \sim \sum x_i$ as the formula passed to lm work as two t-tests?

Any help would be appreciated.

Best Answer

I think your current parameterization is sufficient to test either coefficient restriction (via the t-test for each coefficient), but not both restrictions at the same time. To do that, one can conduct an F-test between the unrestricted and the restricted model (since the restricted model is nested within the unrestricted one).

Example below in R using the car package.

library(car)  # provides linearHypothesis()
set.seed(10)
x1 <- rnorm(100)
x2 <- rnorm(100)
x3 <- rnorm(100)
y  <- 5*x1 + 0.5*x2 - 0.5*x3 + rnorm(100)
m1 <- lm(y ~ x1 + x2 + x3)
linearHypothesis(m1, c("x2 = 0.5", "x3 = -0.5"), test = "F")

This prints a table with the RSS and residual degrees of freedom for each model, which are the ingredients needed to calculate the F-statistic.

Linear hypothesis test

Hypothesis:
x2 = 0.5
x3 = - 0.5

Model 1: restricted model
Model 2: y ~ x1 + x2 + x3

  Res.Df    RSS Df Sum of Sq      F Pr(>F)
1     98 107.21                           
2     96 107.10  2   0.10752 0.0482  0.953
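For intuition, the printed F statistic can be reproduced by hand from that table: the numerator is the RSS difference divided by the number of restrictions, and the denominator is the unrestricted RSS divided by its residual degrees of freedom. A quick sketch plugging in the values above:

```r
# F = ((RSS_restricted - RSS_unrestricted) / q) / (RSS_unrestricted / df_unrestricted)
ss_diff <- 0.10752   # "Sum of Sq" column: RSS_restricted - RSS_unrestricted
rss_u   <- 107.10    # unrestricted RSS
q       <- 2         # number of restrictions tested
df_u    <- 96        # residual df of the unrestricted model
f_stat  <- (ss_diff / q) / (rss_u / df_u)           # ~0.048, matching the table
p_val   <- pf(f_stat, q, df_u, lower.tail = FALSE)  # ~0.95, matching Pr(>F)
```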

You answered your own question in terms of testing the individual coefficients: you can transform the variable on the left-hand side. To replicate the linearHypothesis F-test, I trick R here by constructing the transformed variable:

#equivalent F test - can trick R by making y a new variable
y <- y - 0.5*x2 + 0.5*x3
m2 <- lm(y~x1)
anova(m2,m1)

This reproduces the earlier table. You can use the same trick to make the default summary t-tests in the regression output test against the hypothesized values:

#test for either coefficient restriction (with updated y)
m3 <- lm(y~x1+x2+x3)
summary(m3)
#you can see m3 is equivalent to m1 - just changes the location of the test
#anova(m1,m3)

This produces the following coefficient estimates:

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.208282   0.107351   1.940   0.0553 .  
x1           5.045149   0.112973  44.658   <2e-16 ***
x2           0.033042   0.110693   0.299   0.7660    
x3          -0.004875   0.110122  -0.044   0.9648   

To test against the original values directly, user2864849 answers that question: you just subtract the null value from the point estimate and use the same standard error. Using the coefficients from m1, you can see below that t_est reproduces the t-statistic for x2 in model 3.

#to see how it is just the original t-test with the location changed for x2
pt_est <- summary(m1)$coefficients[3,1] #point estimate x2
se_est <- summary(m1)$coefficients[3,2] #standard error x2
t_est <- (pt_est - 0.5)/se_est          #t-stat for null of 0.5
t_est
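To turn t_est into a p-value, compare it to the t distribution with the model's residual degrees of freedom. A self-contained sketch (regenerating the same data as above with the same seed):

```r
# re-create the example data and the unrestricted model from above
set.seed(10)
x1 <- rnorm(100); x2 <- rnorm(100); x3 <- rnorm(100)
y  <- 5*x1 + 0.5*x2 - 0.5*x3 + rnorm(100)
m1 <- lm(y ~ x1 + x2 + x3)

# t statistic for H0: beta_x2 = 0.5, and its two-sided p-value
est   <- coef(summary(m1))["x2", "Estimate"]
se    <- coef(summary(m1))["x2", "Std. Error"]
t_est <- (est - 0.5) / se                          # ~0.299, as in model 3's output
p_val <- 2 * pt(-abs(t_est), df = m1$df.residual)  # df.residual is 96 here
```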