Regression Analysis – Comparing Dependent Regression Coefficients with Different Dependent Variables

regression

I am looking to compare regression coefficients between two regression models. Each model has the same four independent variables: two predictors of interest (we'll call them A and B) and two control variables (C and D). The only difference between the two models is that they have different dependent variables: the first model is predicting DV1, while the second model is predicting DV2. All observations are from the same sample, so the regression coefficients are dependent.

I believe that both A and B will more strongly predict DV1 than DV2. In other words, the regression coefficient for A predicting DV1 (controlling for B, C, and D) should be larger in magnitude than the regression coefficient for A predicting DV2 (controlling for B, C, and D). Similarly, the regression coefficient for B predicting DV1 (controlling for A, C, and D) should be larger in magnitude than the regression coefficient for B predicting DV2 (controlling for A, C, and D).

Essentially, I want to test the difference between two dependent regression coefficients from two models that share all of the same IVs, but have different DVs. Is there a formal significance test I can use?

Best Answer

The tool you want is called seemingly unrelated regression (SUR). SUR is a way of estimating more than one regression equation on the same data at the same time. Obviously, one thing you can do is just run the two regressions separately. What would be wrong with that? Let's write your model as: \begin{align} DV_{1i} &= \beta_1 + \beta_2 A_i + \beta_3 B_i + \beta_4 C_i + \beta_5 D_i + \epsilon_i \\~\\ DV_{2i} &=\alpha_1 +\alpha_2 A_i +\alpha_3 B_i +\alpha_4 C_i +\alpha_5 D_i + \delta_i \\ \end{align}

It sounds like you are interested in testing hypotheses like $H_0:\beta_2=\alpha_2$. A typical way to test a hypothesis like this is to check whether the t-statistic is greater than 2 in absolute value (roughly the 5% critical value): \begin{align} t\text{-stat} &= \frac{\hat{\beta}_2-\hat{\alpha}_2}{\sqrt{V(\hat{\beta}_2-\hat{\alpha}_2)}}\\ \strut\\ V(\hat{\beta}_2-\hat{\alpha}_2) &= V(\hat{\beta}_2)+V(\hat{\alpha}_2)-2\,Cov(\hat{\beta}_2,\hat{\alpha}_2) \end{align}
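To make the arithmetic concrete, here is the calculation with entirely made-up numbers (the estimates, standard errors, and covariance below are hypothetical, not from any real model):

```python
import math

# Hypothetical values for illustration only:
beta2_hat, alpha2_hat = 0.62, 0.35    # estimated coefficients on A in the two equations
se_beta2, se_alpha2 = 0.10, 0.11      # their standard errors (sqrt of the variances)
cov_beta2_alpha2 = 0.004              # cross-equation covariance, which SUR provides

# V(beta2_hat - alpha2_hat) = V(beta2_hat) + V(alpha2_hat) - 2*Cov(beta2_hat, alpha2_hat)
var_diff = se_beta2**2 + se_alpha2**2 - 2 * cov_beta2_alpha2
t_stat = (beta2_hat - alpha2_hat) / math.sqrt(var_diff)
print(round(t_stat, 2))  # prints 2.27
```

Note that ignoring a positive covariance would inflate the denominator and make the test overly conservative.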

When you run the two models separately, you can read off estimates of $\sqrt{V(\hat{\beta}_2)}$ and $\sqrt{V(\hat{\alpha}_2)}$ from the regression output---the standard errors of the coefficient estimates. But what do you do to get the covariance? Sometimes it may be reasonable to assume that this covariance is zero, but not often. SUR will calculate this covariance for you, making the calculation of the t-statistic possible.
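To see where that covariance comes from, here is a minimal numpy sketch on simulated data (all names and numbers are made up). It relies on the standard result that when both equations share exactly the same regressors, the SUR point estimates coincide with equation-by-equation OLS, and the cross-equation covariance of the coefficient estimates is the residual cross-covariance $\sigma_{12}$ times $(X'X)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated design: intercept plus predictors A, B and controls C, D
X = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])

# Correlated errors across equations -> dependent coefficient estimates
errs = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)
dv1 = X @ np.array([1.0, 0.8, 0.5, 0.2, 0.1]) + errs[:, 0]
dv2 = X @ np.array([1.0, 0.3, 0.2, 0.2, 0.1]) + errs[:, 1]

# With identical regressors, SUR estimates equal OLS equation by equation
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ dv1   # DV1 equation
alpha = XtX_inv @ X.T @ dv2  # DV2 equation

# Residual (co)variances give the pieces of V(beta_2_hat - alpha_2_hat):
# Cov(beta_hat, alpha_hat) = sigma_12 * (X'X)^{-1}
r1 = dv1 - X @ beta
r2 = dv2 - X @ alpha
k = X.shape[1]
s11 = r1 @ r1 / (n - k)
s22 = r2 @ r2 / (n - k)
s12 = r1 @ r2 / (n - k)

# t-statistic for H0: beta_2 = alpha_2 (the coefficient on A, column index 1)
j = 1
var_diff = (s11 + s22 - 2 * s12) * XtX_inv[j, j]
t_stat = (beta[j] - alpha[j]) / np.sqrt(var_diff)
print(t_stat)
```

Because the simulated true coefficients on A differ (0.8 vs. 0.3) and the errors are positively correlated, the t-statistic here comes out large; in practice you would let an SUR routine produce the full cross-equation covariance matrix rather than assembling it by hand.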

In R, I think you want systemfit, and in Stata you definitely want sureg.