The short answer is "yes you can" - but when the two models use different sets of regression variables, you should compare the Maximum Likelihood Estimates (MLEs) from the "big model" that includes all covariates from either model, fitted to both outcomes.
This is a "quasi-formal" way to get probability theory to answer your question.
In the example, $Y_{1}$ and $Y_{2}$ are the same type of variables (fractions/percentages) so they are comparable. I will assume that you fit the same model to both. So we have two models:
$$M_{1}:Y_{1i}\sim \text{Bin}(n_{1i},p_{1i})$$
$$\log\left(\frac{p_{1i}}{1-p_{1i}}\right)=\alpha_{1}+\beta_{1}X_{i}$$
$$M_{2}:Y_{2i}\sim \text{Bin}(n_{2i},p_{2i})$$
$$\log\left(\frac{p_{2i}}{1-p_{2i}}\right)=\alpha_{2}+\beta_{2}X_{i}$$
So you have the hypothesis you want to assess:
$$H_{0}:\beta_{1}>\beta_{2}$$
And you have some data $\{Y_{1i},Y_{2i},X_{i}\}_{i=1}^{n}$, and some prior information (such as the use of logistic model). So you calculate the probability:
$$P=Pr(H_0|\{Y_{1i},Y_{2i},X_{i}\}_{i=1}^{n},I)$$
Now $H_0$ doesn't depend on the actual value of any of the regression parameters, so they must be removed by marginalisation:
$$P=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} Pr(H_0,\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}|\{Y_{1i},Y_{2i},X_{i}\}_{i=1}^{n},I) d\alpha_{1}d\alpha_{2}d\beta_{1}d\beta_{2}$$
The hypothesis simply restricts the range of integration, so we have:
$$P=\int_{-\infty}^{\infty} \int_{\beta_{2}}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} Pr(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}|\{Y_{1i},Y_{2i},X_{i}\}_{i=1}^{n},I) d\alpha_{1}d\alpha_{2}d\beta_{1}d\beta_{2}$$
Because the probability is conditional on the data, it will factor into the two separate posteriors for each model:
$$Pr(\alpha_{1},\beta_{1}|\{Y_{1i},X_{i},Y_{2i}\}_{i=1}^{n},I)Pr(\alpha_{2},\beta_{2}|\{Y_{2i},X_{i},Y_{1i}\}_{i=1}^{n},I)$$
Now, because there are no direct links between $Y_{1i}$ and $\alpha_{2},\beta_{2}$ (only indirect links through $X_{i}$, which is known), $Y_{1i}$ drops out of the conditioning in the second posterior. The same holds for $Y_{2i}$ in the first posterior.
From standard logistic regression theory, and assuming uniform prior probabilities, the posterior for the parameters is approximately bivariate normal, with mean equal to the MLEs and variance equal to the inverse of the observed information matrix, denoted by $V_{1}$ and $V_{2}$. These depend only on the MLEs, not on the parameters, so you have straightforward normal integrals with known variance matrices. Each $\alpha_{j}$ marginalises out with no contribution (as would any other "common variable"), and we are left with the usual result (I can post the details of the derivation if you want, but it's pretty "standard" stuff):
$$P=\Phi\left(\frac{\hat{\beta}_{1,MLE}-\hat{\beta}_{2,MLE}}{\sqrt{V_{1:\beta,\beta}+V_{2:\beta,\beta}}}\right)$$
where $\Phi(\cdot)$ is the standard normal CDF. This is the usual comparison-of-normal-means test. But note that this approach requires the same set of regression variables in each model. In the multivariate case with many predictors, if the two models use different regression variables, the integrals effectively reduce to the same test, but with the MLEs of the two betas taken from the "big model" that includes all covariates from both models.
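As a numerical sketch of the final formula: plugging MLEs and their estimated variances into $\Phi(\cdot)$ gives $Pr(\beta_1>\beta_2)$ directly. All values below are made up for illustration, not taken from real output.

```python
from scipy.stats import norm

# Hypothetical MLEs and estimated variances for the two slope
# coefficients (illustrative values only, not from real data).
beta1_hat, beta2_hat = 0.80, 0.55   # MLEs of beta_1 and beta_2
v1, v2 = 0.04, 0.05                 # squared standard errors of each slope

# Approximate posterior probability of H0: beta_1 > beta_2,
# using the normal-posterior approximation described above.
p = norm.cdf((beta1_hat - beta2_hat) / (v1 + v2) ** 0.5)
print(round(p, 3))
```

With these made-up numbers the probability is around 0.8, i.e. moderate but not overwhelming evidence for $\beta_1>\beta_2$.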
No, you cannot state that an independent variable has twice as large an impact on one DV (dependent variable) as another DV merely by comparing coefficients in the models. Why? Because your dependent variables are not measuring comparable quantities in all four cases above.
Let's take a different example to highlight the strangeness: in one model, rainfall predicts annual crop yield in tonnes of grain/acre/year (coef = 0.5) and in a separate model, it also predicts population density in people/acre (coef = 20). Does this mean that rainfall has a stronger influence on population density than on crop yield? Well, suppose you instead measured annual crop yield in kilograms of grain/acre/year, your rainfall coefficient would be 500 (0.5 * 1000, because 1 tonne = 1000 kg). This change in unit would reverse the hierarchy and your conclusions, which obviously does not make sense. So the basic problem is that annual crop yield and population density are not in comparable units.
This could be addressed by standardizing the dependent variables, in which case the coefficient interpretation would be that a unit change in rainfall leads to a change of $x$ standard deviations in either crop yield or population density. A larger coefficient in one model can then be interpreted as evidence for rainfall having a stronger effect on one quantity, given the variation in your data.
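A quick simulation makes the unit-dependence concrete (all variable names and numbers here are invented for illustration): rescaling the DV rescales the raw slope by the same factor, while the slope on the standardized DV is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
rain = rng.normal(100.0, 15.0, size=200)             # rainfall (arbitrary units)
yield_t = 0.5 * rain + rng.normal(0.0, 5.0, 200)     # crop yield in tonnes
yield_kg = 1000.0 * yield_t                          # same yield in kilograms

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

b_t = slope(rain, yield_t)     # roughly 0.5
b_kg = slope(rain, yield_kg)   # roughly 500: same effect, different units

# Standardizing the DV removes the unit dependence entirely.
z = lambda y: (y - y.mean()) / y.std()
bz_t, bz_kg = slope(rain, z(yield_t)), slope(rain, z(yield_kg))
# bz_t and bz_kg are identical, whatever units the yield was recorded in.
```

The raw slopes differ by exactly the unit-conversion factor, so any "which effect is bigger" ranking based on them is an artifact of the units; the standardized slopes coincide.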
Now, you do actually have two DVs that are in comparable units: % poor people by poverty lines A and B. So in principle, you can make the comparison you've asked about for these two cases (but not the others). But you should probably be careful when interpreting this, since both measure precisely the same quantity with different cutoffs. Differences in the effect of your independent variable are telling you something about the cutoff, which you could perhaps have anticipated before fitting your model.
The tool you want is called seemingly unrelated regression (SUR). SUR is a way of estimating more than one regression equation on the same data at the same time. Obviously, one thing you can do is just run the two regressions separately. What would be wrong with that? Let's write your model as: \begin{align} DV_{1i} &= \beta_1 + \beta_2 A_i + \beta_3 B_i + \beta_4 C_i + \beta_5 D_i + \epsilon_i \\~\\ DV_{2i} &=\alpha_1 +\alpha_2 A_i +\alpha_3 B_i +\alpha_4 C_i +\alpha_5 D_i + \delta_i \\ \end{align}
It sounds like you are interested in testing hypotheses like $H_0:\beta_2=\alpha_2$. A typical way to test a hypothesis like this is to check whether a t-statistic is greater than 2 in absolute value: \begin{align} t\text{-stat} &= \frac{\hat{\beta}_2-\hat{\alpha}_2}{\sqrt{V(\hat{\beta}_2-\hat{\alpha}_2)}}\\ \strut\\ V(\hat{\beta}_2-\hat{\alpha}_2) &= V(\hat{\beta}_2)+V(\hat{\alpha}_2)-2Cov(\hat{\beta}_2,\hat{\alpha}_2) \end{align}
When you run the two models separately, you can read off estimates of $\sqrt{V(\hat{\beta}_2)}$ and $\sqrt{V(\hat{\alpha}_2)}$ from the regression output: they are the standard errors of the coefficient estimates. But what do you do to get the covariance? Sometimes it may be reasonable to assume that this covariance is zero, but not often. SUR will calculate this covariance for you, making the calculation of the t-statistic possible.
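A well-known special case makes the mechanics visible: when both equations contain exactly the same regressors, the SUR point estimates coincide with equation-by-equation OLS, and the cross-equation covariance of the coefficient estimates is $\hat{\sigma}_{12}(X'X)^{-1}$, where $\hat{\sigma}_{12}$ is the covariance between the two equations' residuals. A sketch in Python (simulated data with made-up coefficient values, not a substitute for a full SUR routine):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
A, B = rng.normal(size=(2, n))
X = np.column_stack([np.ones(n), A, B])          # same regressors in both equations

# Errors correlated across the two equations (this is what SUR exploits).
e = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], n)
y1 = X @ np.array([1.0, 0.5, -0.2]) + e[:, 0]    # DV1: true coef on A is 0.5
y2 = X @ np.array([0.3, 0.5, 0.1]) + e[:, 1]     # DV2: true coef on A is also 0.5

# Equation-by-equation OLS (equals SUR here, since regressors are identical).
XtX_inv = np.linalg.inv(X.T @ X)
b1 = XtX_inv @ X.T @ y1
b2 = XtX_inv @ X.T @ y2
r1, r2 = y1 - X @ b1, y2 - X @ b2

# Residual (co)variances across equations.
k = X.shape[1]
s11 = r1 @ r1 / (n - k)
s22 = r2 @ r2 / (n - k)
s12 = r1 @ r2 / (n - k)

# t-statistic for H0: coefficient on A is equal in the two equations,
# including the cross-equation covariance term s12 * (X'X)^{-1}.
j = 1  # position of A's coefficient
var_diff = (s11 + s22 - 2.0 * s12) * XtX_inv[j, j]
t = (b1[j] - b2[j]) / np.sqrt(var_diff)
```

Since the two true coefficients on A are equal here, the t-statistic should be small. With different regressors per equation, SUR's GLS step changes the point estimates themselves, which is why in practice you would reach for a dedicated routine rather than hand-rolling this.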
In R, I think you want systemfit, and in Stata you definitely want sureg.