As ShannonC pointed out, why not run the regression $y_3 = y_1/y_2 \sim x_1, x_2, \dots$ directly?
However, if that is not possible (e.g. you don't have the original data), you can use a Taylor expansion to understand how $y_3$ is influenced by $x_1, x_2, \dots$
Let's restate the two fitted regressions:
$$y_{k,i} = b_{k,0} + \sum_j b_{k,j} x_{j,i}$$
for $k = 1, 2$.
Let's write the first-order Taylor expansion of $y_3 = f(y_1, y_2)$ around $(b_{1,0}, b_{2,0})$:
$$y_3 = f(y_1, y_2) \approx f(b_{1,0}, b_{2,0}) + \frac{\partial f}{\partial y_1}(b_{1,0}, b_{2,0})\,(y_1 - b_{1,0}) +
\frac{\partial f}{\partial y_2}(b_{1,0}, b_{2,0})\,(y_2 - b_{2,0})
$$
Now everything is expressed in terms of the $b_{k,j}$ coefficients. In your particular case $f(y_1, y_2) = y_1/y_2$, so at the expansion point $\partial f/\partial y_1 = 1/b_{2,0}$ and $\partial f/\partial y_2 = -b_{1,0}/b_{2,0}^2$, and $y_k - b_{k,0} = \sum_j b_{k,j} x_{j,i}$. Therefore:
$$ y_{3,i} \approx \frac{b_{1,0}}{b_{2,0}} + \frac{1}{b_{2,0}} \Big(\sum_j b_{1,j} x_{j,i}\Big) -
\frac{b_{1,0}}{b_{2,0}^2} \Big(\sum_j b_{2,j} x_{j,i}\Big) \\
= \frac{b_{1,0}}{b_{2,0}} + \sum_j \Big[\frac{b_{1,j}}{b_{2,0}} -
\frac{b_{1,0}\, b_{2,j}}{b_{2,0}^2}\Big] x_{j,i}
$$
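As a minimal sketch of the formula above: given the fitted coefficients of the two regressions (the numbers here are hypothetical, and the function name `ratio_coeffs` is mine), the combined first-order coefficients for $y_3$ can be computed directly.

```python
def ratio_coeffs(b1, b2):
    """First-order (Taylor/delta-method) coefficients for y3 = y1/y2.

    b1, b2: coefficient lists with the intercept b_{k,0} at index 0,
            followed by the slopes b_{k,1}, b_{k,2}, ...
    Returns [c0, c1, c2, ...] so that y3 ~= c0 + sum_j c_j * x_j.
    """
    if b2[0] == 0:
        raise ValueError("b_{2,0} = 0: the expansion point is undefined")
    # c0 = b_{1,0} / b_{2,0}
    c0 = b1[0] / b2[0]
    # c_j = b_{1,j}/b_{2,0} - b_{1,0} * b_{2,j} / b_{2,0}^2
    slopes = [b1[j] / b2[0] - b1[0] * b2[j] / b2[0] ** 2
              for j in range(1, len(b1))]
    return [c0] + slopes

# Hypothetical fitted coefficients: y1 = 2 + 0.5*x1 - 0.3*x2, y2 = 4 + 0.1*x1 + 0.2*x2
print(ratio_coeffs([2.0, 0.5, -0.3], [4.0, 0.1, 0.2]))
```

The explicit check on `b2[0]` corresponds to note 3 below: the expansion breaks down when the intercept of the denominator regression is zero.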
I hope this makes sense and answers your question.
A few notes:
1) You can use any other type of regression, not only linear, and any other function of the $y$ variables.
2) If the first-order approximation doesn't work, use more derivatives (higher-order terms).
3) You must be careful when $b_{2,0} = 0$, since it appears in the denominators.
Best Answer
I understand that you want to test the hypotheses $$H_0: A=B$$ $$H_1: A\not=B$$.
So we are performing inference on the quantity $A-B$ and evaluating whether it is sufficiently far from 0 relative to its standard error. Ordinarily, in the single-parameter case, we just compare, e.g., $A$ to its standard error. But we have two parameters here, so the test statistic is $$\frac{A-B}{\sqrt{\text{var}(A)+\text{var}(B)-2\,\text{cov}(A,B)}},$$ where $\text{cov}(A,B)$ is the covariance of $A$ and $B$. Now we compare the test statistic to a standard normal distribution.
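A minimal sketch of this test, assuming you already have the two estimates with their variances and covariance (the numbers below are made up for illustration); the two-sided p-value is computed from the standard normal CDF via `math.erf`:

```python
import math

def wald_z(A, B, var_A, var_B, cov_AB):
    """z statistic for H0: A = B, accounting for the covariance of the estimates."""
    se = math.sqrt(var_A + var_B - 2 * cov_AB)
    return (A - B) / se

def two_sided_p(z):
    """Two-sided p-value against a standard normal distribution."""
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical estimates: A = 1.2, B = 0.8, var(A) = 0.04, var(B) = 0.05, cov(A,B) = 0.01
z = wald_z(1.2, 0.8, 0.04, 0.05, 0.01)
print(z, two_sided_p(z))
```

The variances and covariance come from the estimated covariance matrix of the fitted coefficients; omitting the covariance term would be wrong whenever $A$ and $B$ are estimated from the same model.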
Your last comment:
confuses me, because it suggests that your null hypothesis is not that $H_0:A=B$, but instead that $H_0:A=-B$, or $H_0:A=1/B$. While not typical, this certainly might make sense for your particular research topic.