**Question**: How to construct standard errors and test statistics for a ratio of regression coefficients?

**Background:** I am running the same regression on two related outcomes

$$

Y_1 = \alpha_1 + \theta_1 X_1 + \beta 'X +\epsilon_1

$$

$$

Y_2 = \alpha_2 + \theta_2 X_1 + \beta 'X +\epsilon_2

$$ (where $\epsilon_1, \epsilon_2$ are independent).

I am interested in the ratio of the coefficients from these two regressions. In particular, I want to test the following

$$

H_0: \frac{\theta_1}{\theta_2}=1, H_A: \frac{\theta_1}{\theta_2}\neq1

$$

and also the one-sided version of this test, $H_A: \frac{\theta_1}{\theta_2}>1$.

## Best Answer

Assuming (as you do in a comment) that the $\theta_i$ have the same sign and $\theta_2$ is nonzero, the null hypothesis is algebraically equivalent to

$$H_0: \theta_1 = \theta_2$$

while the two alternative hypotheses are equivalent to

$$H_A:\theta_1\ne\theta_2\quad\text{and}\quad H_A^\prime:\theta_1\lt\theta_2\lt 0\text{ or } 0\lt \theta_2\lt\theta_1.$$
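To spell out the one-sided equivalence: multiplying $\theta_1/\theta_2 \gt 1$ through by $\theta_2$ preserves the inequality when $\theta_2 \gt 0$ and reverses it when $\theta_2 \lt 0,$ giving

$$\frac{\theta_1}{\theta_2}\gt 1 \iff \begin{cases} \theta_1 \gt \theta_2, & \theta_2 \gt 0,\\ \theta_1 \lt \theta_2, & \theta_2 \lt 0,\end{cases}$$

which is exactly $H_A^\prime$ once the same-sign assumption is imposed.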

Since all the $\epsilon_1$ are independent of the $\epsilon_2,$ all the $Y_1$ responses are independent of the $Y_2$ responses and so (presuming the estimates are separately computed, one set for the $Y_1$ data and another set for the $Y_2$ data) the parameter estimates $\hat\theta_i$ are independent.

How you proceed depends on circumstances. To sketch the general approach, let's suppose you would use a $t$ test or $Z$ test in either regression alone. This means the combined information of assumptions and data is strong enough to suggest the sampling distributions of the $\hat\theta_i$ are approximately Normal with estimated variances $\sigma_i^2,$ respectively. Consequently, the statistic

$$\hat\theta = \hat\theta_1 - \hat\theta_2$$

is approximately Normal with sampling variance

$$\operatorname{Var}(\hat\theta) = \operatorname{Var}(\hat\theta_1) + \operatorname{Var}(\hat\theta_2) = \sigma_1^2 + \sigma_2^2.$$

You would therefore refer the test statistic

$$Z = \frac{\hat\theta_1 - \hat\theta_2}{\sqrt{\sigma_1^2 + \sigma_2^2}}$$

to the Standard Normal distribution, whose distribution function we denote $\Phi.$ The critical region for testing $H_0$ against $H_A$ with confidence $1-\alpha$ therefore is

$$\mathcal{C}(\alpha) = (-\infty, \Phi^{-1}(\alpha/2)]\ \cup\ [\Phi^{-1}(1-\alpha/2),\infty).$$
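As a sketch under these assumptions, the test needs nothing beyond the two coefficient estimates and their standard errors from the separate fits; the numbers below are hypothetical, for illustration only:

```python
import math

def phi(z):
    """Standard Normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_test(theta1_hat, se1, theta2_hat, se2):
    """Z test of H0: theta1 = theta2 from two independently fitted
    regressions; returns the statistic and both p-values."""
    z = (theta1_hat - theta2_hat) / math.sqrt(se1**2 + se2**2)
    p_two_sided = 2.0 * (1.0 - phi(abs(z)))
    p_one_sided = 1.0 - phi(z)  # for H_A': theta1 > theta2 > 0
    return z, p_two_sided, p_one_sided

# Hypothetical estimates and standard errors
z, p2, p1 = z_test(1.8, 0.30, 1.2, 0.25)
```

Rejecting when $Z$ falls in $\mathcal{C}(\alpha)$ is the same as rejecting when `p_two_sided` is below $\alpha.$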

## Comments

Unless you're pretty sure what the common sign of the $\theta_i$ is, $\mathcal{C}(\alpha)$ would be the critical region for $H_A^\prime,$ too.

When both the coefficient estimates $\hat\theta_i$ indicate they are significantly different from zero with the same sign, for testing $H_A^\prime$ you could instead replace $\mathcal C(\alpha)$ by the one-sided region

$$\mathcal C^\prime(\alpha) = (-\infty, \Phi^{-1}(\alpha)]$$

(for a common negative estimate) or its negative (for a common positive estimate), thereby achieving greater power. Although the confidence of this test would be slightly less than $1-\alpha,$ the difference shouldn't matter. A quick simulation study adapted to data like yours would remove any doubts.

When one (or both) of the datasets is "small" (around $20$ observations or fewer), consider a Welch $t$-test instead of a $Z$ test.
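The suggested simulation study can be sketched as follows, checking that the two-sided test rejects a true $H_0$ at roughly the nominal rate; the data-generating values (sample size, slope, noise level) are placeholders you would replace with quantities resembling your data:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_slope(x, y):
    """OLS slope and its standard error for y = a + b*x."""
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)           # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)      # coefficient covariance
    return beta[1], np.sqrt(cov[1, 1])

# Simulate under H0: theta1 = theta2 (placeholder values)
alpha, n_sims, n, theta = 0.05, 2000, 100, 1.0
crit = 1.959963984540054                   # Phi^{-1}(1 - alpha/2)
rejections = 0
for _ in range(n_sims):
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    y1 = 0.5 + theta * x1 + rng.normal(size=n)
    y2 = 0.5 + theta * x2 + rng.normal(size=n)
    t1, se1 = fit_slope(x1, y1)
    t2, se2 = fit_slope(x2, y2)
    z = (t1 - t2) / np.hypot(se1, se2)
    rejections += abs(z) > crit
rate = rejections / n_sims
print(rate)  # close to alpha if the test is well calibrated
```

Replacing the Normal critical value `crit` with a $t$ quantile on Welch's degrees of freedom gives the small-sample variant mentioned above.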