The ideal gas law, $PV=nRT$, suggests a proportional model; make sure your temperatures are in absolute units. Asking for a proportional result implies a proportional error model. Consider, for example, $Y=a D^b S^c$. Taking logarithms of the $Y$, $D$, and $S$ values gives $\ln (Y)=\ln (a)+b \ln (D)+c \ln (S)$, which can be fit with multiple linear regression in the form $Y_l=a_l+b D_l+c S_l$, where the $l$ subscripts mean "logarithm of." This may work better than the linear model you are using, and the errors are then of the relative (proportional) type.
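As a minimal sketch of that log-log fit in R (the data frame dat and the columns Y, D, and S are hypothetical placeholders for your own data, which must be strictly positive):

# hypothetical data frame 'dat' with positive columns Y, D, S
fit_log <- lm(log(Y) ~ log(D) + log(S), data = dat)
summary(fit_log)

# back-transform the intercept to recover a; b and c are the slopes on log(D) and log(S)
a_hat <- exp(coef(fit_log)[1])
b_hat <- coef(fit_log)[2]
c_hat <- coef(fit_log)[3]

The coefficients then describe multiplicative (relative-error) effects rather than additive ones.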
To decide what type of model to use, try one and check whether the residuals are homoscedastic. If they are not, the model is misspecified (biased); try something else, such as modelling the logarithms as above, taking reciprocals of one or more of the x or y variables, square roots, squares, exponentials, and so forth, until the residuals are homoscedastic. If no model yields homoscedastic residuals, then use multiple linear Theil regression, with censoring if needed.
How normally the data are distributed on the y-axis is not critical, but outliers can and often do distort the regression parameter estimates markedly. If homoscedasticity cannot be achieved, then ordinary least squares should not be used and some other type of regression should be performed, e.g. weighted regression, Theil regression, least squares in x, Deming regression, and so forth. Also, the errors should not be serially correlated.
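A quick way to run those residual checks in R is sketched below (the model formula and data are hypothetical; the Breusch-Pagan and Durbin-Watson tests come from the lmtest package):

fit <- lm(y ~ x1 + x2, data = dat)   # hypothetical model and data
plot(fitted(fit), resid(fit))        # look for a fan or funnel shape (heteroscedasticity)
abline(h = 0, lty = 2)

library(lmtest)
bptest(fit)                          # Breusch-Pagan test for heteroscedasticity
dwtest(fit)                          # Durbin-Watson test for serially correlated errors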
The meaning of the output, $z = (a_{1} - b_{1}) / \sqrt{SE_{a_{1}}^2 + SE_{b_{1}}^2}$, may or may not be relevant. This assumes that the total variance is the sum of two independent variances. To put this another way, independence is orthogonality (perpendicularity) on an $x,y$ plot; the total variability (variance) then follows the Pythagorean theorem, $H=\sqrt{A^2+O^2}$, which may or may not be the case for your data. If it is, then the $z$-statistic is a relative distance: a difference of means (a distance) divided by the Pythagorean, a.k.a. vector, addition of the standard errors (SEs), which are standard deviations (SDs) divided by $\sqrt{N}$ and are themselves distances. Dividing one distance by the other normalizes them, i.e., the difference in means divided by the total (standard) error, which is then in a form to which one can apply the standard normal distribution $N(0,1)$ to find a probability.
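As a sketch of that calculation in R (fit1 and fit2 are hypothetical, independently fitted models whose first slope coefficients are being compared):

s1 <- summary(fit1)$coefficients          # matrix with Estimate and Std. Error columns
s2 <- summary(fit2)$coefficients
a1 <- s1[2, "Estimate"];  se_a1 <- s1[2, "Std. Error"]   # row 2 = first slope
b1 <- s2[2, "Estimate"];  se_b1 <- s2[2, "Std. Error"]

z <- (a1 - b1) / sqrt(se_a1^2 + se_b1^2)  # assumes the two estimates are independent
p <- 2 * pnorm(-abs(z))                   # two-sided tail probability from N(0,1)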
Now, what happens if the measures are not independent, and how can one test for that? You may remember from geometry that triangles that are not right-angled add their sides by the law of cosines, $C^2=A^2+B^2-2 A B \cos (\theta )$, with $\theta =\angle(A,B)$. That is, when the angle between the axes is something other than 90 degrees, that angle must be included in the calculation of total distance. Recall that correlation is standardized covariance. For the total distance $\sigma _T$ and correlation $\rho_{A,B}$ this becomes $\sigma _T^2=\sigma _A^2+\sigma _B^2-2 \sigma _A \sigma _B \rho_{A,B}$. In other words, if your measures are correlated (e.g., pairwise), they are not independent.
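If the estimates are correlated, the same sketch can be adjusted with the law-of-cosines term (reusing a1, b1, se_a1, se_b1 from the sketch above; rho is an assumed value standing in for an estimated correlation between the two estimates):

rho <- 0.4                                                  # assumed correlation, for illustration
se_total <- sqrt(se_a1^2 + se_b1^2 - 2 * rho * se_a1 * se_b1)
z_corr <- (a1 - b1) / se_total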
As per Bill Huber's comments and answer elsewhere, the trick is to remove the influence of the independent variables on each other when producing each sequential regression. In other words, instead of:
lm(lm(x ~ y1)$residuals ~ y2)
We want:
lm(lm(x ~ y1)$residuals ~ lm(y2 ~ y1)$residuals)
In this case, we do get back to the multiple regression. Moreover, we can show the coefficients are the same:
> round(coef(lm(lm(it30 ~ itpc1)$residuals ~ lm(itpc2 ~ itpc1)$residuals)), 5)
(Intercept) lm(itpc2 ~ itpc1)$residuals
0.00000 -0.21846
> round(coef(lm(lm(it30 ~ itpc2)$residuals ~ lm(itpc1 ~ itpc2)$residuals)), 5)
(Intercept) lm(itpc1 ~ itpc2)$residuals
0.00000 0.29197
> round(coef(lm(it30 ~ itpc1 + itpc2)), 5)
(Intercept) itpc1 itpc2
0.01186 0.29197 -0.21846
Interestingly, and as expected, if the independent variables are orthogonal, as in principal component (PCA) regression, then we do not need to remove the influence of the regressors on each other. In this case it is true that:
lm(lm(x ~ y1)$residuals ~ y2)$residuals
is perfectly correlated with:
lm(x ~ y1 + y2)$residuals
as can be verified directly (a sketch of the check follows below). This is because regressing one orthogonal principal component on another gives a zero-slope line, so the residuals of that auxiliary regression are just the component itself (shifted vertically to mean 0), and the sequential and joint fits coincide.
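A minimal sketch of that check, using the principal component scores of a hypothetical predictor matrix as the orthogonal regressors:

set.seed(1)
X  <- matrix(rnorm(200), ncol = 2)                 # hypothetical raw predictors (100 x 2)
pc <- prcomp(X)$x                                  # orthogonal principal component scores
y1 <- pc[, 1]; y2 <- pc[, 2]
x  <- 0.3 * y1 - 0.2 * y2 + rnorm(100)             # hypothetical response

r_seq   <- lm(lm(x ~ y1)$residuals ~ y2)$residuals # residuals from the sequential fit
r_joint <- lm(x ~ y1 + y2)$residuals               # residuals from the joint multiple regression
cor(r_seq, r_joint)                                # essentially 1 when y1 and y2 are orthogonal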
Best Answer
It depends on what the ultimate goal of your regression is. If the goal is prediction, the expected test error at a new point decomposes as
$$E[(y_0 - \hat{y}_0)^2] = Var(\hat{y}_0) + [Bias(\hat{y}_0)]^2 + Var(\epsilon)$$
When there are two strongly correlated predictors in the data set, the matrix $X^T X$ has a determinant close to zero, whereas the covariance matrix of the estimated coefficients, under homoskedasticity, is $\hat{\sigma}^2 (X^T X)^{-1}$. So, as the determinant gets close to zero, the variances become uncomfortably large and the expected test error grows as well, which is not good.
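A small sketch of that effect (sample size, seed, and correlation levels are arbitrary choices): as the correlation between two predictors increases, the determinant of $X^T X$ shrinks relative to the uncorrelated case and the coefficient standard errors grow.

set.seed(42)
n <- 200
for (r in c(0, 0.9, 0.99)) {
  x1 <- rnorm(n)
  x2 <- r * x1 + sqrt(1 - r^2) * rnorm(n)          # predictor correlated with x1 at roughly r
  y  <- 1 + 2 * x1 + 7 * x2 + rnorm(n)
  X  <- cbind(1, x1, x2)
  se1 <- summary(lm(y ~ x1 + x2))$coefficients["x1", "Std. Error"]
  cat("r =", r, " det(X'X) =", round(det(crossprod(X))), " SE(beta1) =", round(se1, 3), "\n")
}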
Concerning your three questions:
Let's take a simple example in which I simulated two independent random variables $X_1$ and $X_2$, and my response variable is $Y$ such that the true relationship is:
$$Y = 1 + 2 X_1 + 7 X_2 + \epsilon$$
Processing in the "residual regression" way (a sketch of the code is given below):
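The original code is not reproduced here; a minimal sketch of the simulation and the residual-regression steps might look like this (the distributions, sample size, and seed are assumptions, so the exact estimates will differ from the numbers quoted below):

set.seed(123)
n  <- 100
X1 <- rnorm(n)
X2 <- rnorm(n)                      # generated independently of X1
Y  <- 1 + 2 * X1 + 7 * X2 + rnorm(n)

fit1 <- lm(Y ~ X1)                  # step 1: regress Y on X1 alone
fit2 <- lm(fit1$residuals ~ X2)     # step 2: regress those residuals on X2
coef(fit1)                          # first-stage intercept and X1 coefficient
coef(fit2)                          # second-stage X2 coefficient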
Results:
Leaving aside the intercept, the estimated coefficient associated with the variable $X_1$ is 1.3, whereas it should have been 2. This coefficient stays unchanged when you then fit the residuals of this first model on the variable $X_2$. Why is the intercept so large, and why is $\beta_1$ biased? It is compensating for the absence of $X_2$ in the model. And if we plot $Y$ against $X_1$, the trend in $X_1$ cannot easily be seen because of the large influence of $X_2$.
And if we fit $Y$ on $X_1$ and $X_2$ together, all the estimated coefficients take the values we expect, i.e. $\beta_0 = 1$, $\beta_1 = 2$, $\beta_2 = 7$.
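Continuing the sketch above, the joint fit is simply:

coef(lm(Y ~ X1 + X2))               # intercept and slopes should come out near 1, 2, and 7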