I think your current parameterization is sufficient to test either coefficient restriction on its own (via the t-test for each coefficient), but not both restrictions at the same time. To do that, you can conduct an F-test comparing the unrestricted and the restricted model (since the models are nested).
Example below in R using the car package.
library(car)
set.seed(10)
x1 <- rnorm(100)
x2 <- rnorm(100)
x3 <- rnorm(100)
y <- 5*x1 + 0.5*x2 - 0.5*x3 + rnorm(100)
m1 <- lm(y~x1+x2+x3)
linearHypothesis(m1,c("x2 = 0.5","x3 = -0.5"),test="F")
This spits out a table with the RSS for each model and the degrees of freedom needed to calculate the F-statistic.
Linear hypothesis test
Hypothesis:
x2 = 0.5
x3 = - 0.5
Model 1: restricted model
Model 2: y ~ x1 + x2 + x3
  Res.Df    RSS Df Sum of Sq      F Pr(>F)
1     98 107.21
2     96 107.10  2   0.10752 0.0482  0.953
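If you want to see how that F-statistic is assembled from the RSS and degrees-of-freedom columns, here is a sketch in base R (no car needed), repeating the same simulation. The restricted model subtracts the hypothesized contributions of x2 and x3 from y:

```r
# Assemble the F-statistic by hand from the two models' RSS and df
set.seed(10)
x1 <- rnorm(100); x2 <- rnorm(100); x3 <- rnorm(100)
y <- 5*x1 + 0.5*x2 - 0.5*x3 + rnorm(100)

m1  <- lm(y ~ x1 + x2 + x3)              # unrestricted model
m_r <- lm(I(y - 0.5*x2 + 0.5*x3) ~ x1)   # restricted: x2, x3 coefficients fixed

rss_u <- deviance(m1);  df_u <- df.residual(m1)   # 96
rss_r <- deviance(m_r); df_r <- df.residual(m_r)  # 98

q <- df_r - df_u                                  # number of restrictions (2)
F_stat <- ((rss_r - rss_u) / q) / (rss_u / df_u)
p_val  <- pf(F_stat, q, df_u, lower.tail = FALSE)
F_stat; p_val   # ~0.048 and ~0.95, matching the table above
```

The numerator is the per-restriction increase in RSS from imposing the null; the denominator is the unrestricted error variance estimate.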
You answered your own question in terms of testing the individual coefficients: you can transform the variable on the left-hand side. To replicate the linearHypothesis function, I trick R here by making the transformed variable:
#equivalent F test - can trick R by making y a new variable
y <- y - 0.5*x2 + 0.5*x3
m2 <- lm(y~x1)
anova(m2,m1)
This reproduces the earlier table. You can use the same trick to get the default t-tests in the regression summary output to test against those null values:
#test for either coefficient restriction (with updated y)
m3 <- lm(y~x1+x2+x3)
summary(m3)
#you can see m3 is equivalent to m1 - just changes the location of the test
#anova(m1,m3)
This produces the following coefficient estimates:
Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.208282   0.107351   1.940   0.0553 .  
x1           5.045149   0.112973  44.658   <2e-16 ***
x2           0.033042   0.110693   0.299   0.7660    
x3          -0.004875   0.110122  -0.044   0.9648    
To test with the original values, user2864849's answer to that question shows the way: you just subtract the null value from the point estimate and use the same standard error. So, using the coefficients from m1, you can see below that t_est reproduces the t-statistic for x2 in model 3.
#to see how it is just the original t-test with the location changed for x2
pt_est <- summary(m1)$coefficients[3,1] #point estimate x2
se_est <- summary(m1)$coefficients[3,2] #standard error x2
t_est <- (pt_est - 0.5)/se_est #t-stat for null of 0.5
t_est
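To confirm that equivalence numerically, here is a self-contained check (repeating the simulation above) that the shifted t-statistic matches the x2 t-value from the transformed-y regression, plus the two-sided p-value:

```r
# Check: shifted t-stat from m1 equals the x2 t-test on the transformed y
set.seed(10)
x1 <- rnorm(100); x2 <- rnorm(100); x3 <- rnorm(100)
y <- 5*x1 + 0.5*x2 - 0.5*x3 + rnorm(100)

m1 <- lm(y ~ x1 + x2 + x3)
ct <- coef(summary(m1))
t_est <- (ct["x2", "Estimate"] - 0.5) / ct["x2", "Std. Error"]

# same regressors, null values subtracted out of y
m3 <- lm(I(y - 0.5*x2 + 0.5*x3) ~ x1 + x2 + x3)
t_m3 <- coef(summary(m3))["x2", "t value"]

all.equal(t_est, t_m3)                                    # TRUE
2 * pt(abs(t_est), df.residual(m1), lower.tail = FALSE)   # two-sided p-value
```

The equality is exact because subtracting 0.5*x2 - 0.5*x3 from y shifts the x2 and x3 coefficients by those constants while leaving the residuals, and hence the standard errors, unchanged.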
There is a simple procedure (for your case (2)) illustrated at https://stackoverflow.com/a/24964685, but there the x-axis shows the product $X_i X_j$, not $X_i$. The underlying logic, I assume, is that if you are concerned about the linearity of the relationship captured by a term $\beta_k \cdot (\text{something})$, then you construct the partial residual with that "something": the regressor $k$, which in (2) is the regressor formed by the product $X_i X_j$, etc. I guess this could also be used for case (1) (interestingly, plotting $e + \hat{b}_2 x_2 + \hat{b}_{22} x_2^2$ against $x_2$ is already discussed in R. Dennis Cook's paper "Exploring partial residual plots"; see section 3, p. 354).
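As a rough illustration of that case-(1) plot, here is a sketch on simulated data (variable names are purely illustrative) that builds the partial residual for a quadratic term by hand:

```r
# Sketch of the case-(1) partial residual: e + b2*x2 + b22*x2^2 vs x2
# (simulated data; names illustrative)
set.seed(1)
n  <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- x1 + x2 - 0.5*x2^2 + rnorm(n)

fit  <- lm(y ~ x1 + x2 + I(x2^2))
b    <- coef(fit)
pres <- residuals(fit) + b["x2"]*x2 + b["I(x2^2)"]*x2^2

plot(x2, pres, xlab = "x2", ylab = "partial residual")
curve(b["x2"]*x + b["I(x2^2)"]*x^2, add = TRUE)  # fitted quadratic
```

If the quadratic specification is adequate, the points should scatter evenly around the fitted curve.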
I think a more general way to deal with interactions and partial residual plots can be found in John Fox and Sanford Weisberg's Visualizing Lack of Fit in Complex Regression Models: Adding Partial Residuals to Effect Displays (see pp. 19 ff.). These ideas are implemented in the effects CRAN package by John Fox and collaborators. In a question I asked on Stack Overflow about one of its functions, I provide code that shows how to compute and plot those partial residuals for your cases (2) and (3).
Your case (3) is handled in the effects package by evaluating that sum as you indicate (each case at its observed values of $X_i$ and $D_i$) and then plotting the partial residual vs. $X_i$, conditioning on $D$ (i.e., with different panels for different values of $D$).
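That conditioning can also be done by hand; here is a sketch on simulated data (the effects package automates and polishes this, so the code below is only to show the logic):

```r
# Case (3) by hand: partial residual vs x, conditioning on dummy d
set.seed(2)
n <- 200
x <- rnorm(n)
d <- factor(sample(0:1, n, replace = TRUE))
y <- 1 + x + ifelse(d == "1", 2, 0)*x + rnorm(n)

fit <- lm(y ~ x * d)
b   <- coef(fit)
# contribution of x for each case, at its observed value of d
contrib <- b["x"]*x + ifelse(d == "1", b["x:d1"], 0)*x
pres    <- residuals(fit) + contrib

op <- par(mfrow = c(1, 2))                 # one panel per level of d
for (lev in levels(d)) {
  i <- d == lev
  plot(x[i], pres[i], main = paste("d =", lev),
       xlab = "x", ylab = "partial residual")
  abline(lm(pres[i] ~ x[i]))
}
par(op)
```

The within-panel slopes recover the group-specific effects of x (here roughly 1 and 3).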
With interactions between continuous predictors (case (2)), one must slice one of the predictors, and that can introduce some bias (see slides 21 and 34 of Fox and Weisberg).
Best Answer
There is a function in R, corvif(), which can be found in the AED package. For examples and references see Zuur et al. 2009, Mixed Effects Models and Extensions in Ecology with R, pp. 386-387. The code for the package is available from the book website http://www.highstat.com/book2.htm.
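If you can't obtain the AED code, the vif() function in the car package computes standard variance inflation factors, which (as I understand it) is essentially what corvif() reports. A sketch on simulated data with deliberate collinearity:

```r
# Standard VIFs via car::vif (simulated data for illustration)
library(car)
set.seed(10)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.3)   # deliberately collinear with x1
x3 <- rnorm(100)
y  <- x1 + x2 + x3 + rnorm(100)

vif(lm(y ~ x1 + x2 + x3))  # x1 and x2 show inflated values, x3 near 1
```

Each VIF is 1/(1 - R²) from regressing that predictor on the others, so values well above 1 flag collinearity.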