The adjustments are only to the standard errors of the regression coefficients, not to the point estimates of the coefficients themselves, so you can gather the requested statistics from the traditional OLS output in SPSS. The Hayes and Cai (2007) paper elaborates on this as well.
As a side note, perhaps it is a difference between fields, but I almost always see these types of standard errors referred to by the names of their originators (Huber, White, and Eicker). There are other types of "robust" estimators and standard errors, though (e.g., those estimated by the jackknife or the bootstrap). These other estimators sometimes yield different point estimates for the coefficients as well as different standard errors (though not always).
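A short R sketch makes the first point concrete (it assumes the `sandwich` and `lmtest` packages are installed; the data are simulated for illustration): the heteroscedasticity-consistent point estimates match ordinary OLS exactly, and only the standard errors change.

```r
# Sketch: Huber-White (sandwich) standard errors leave the OLS point
# estimates untouched; only the standard errors are adjusted.
library(sandwich)  # vcovHC() for heteroscedasticity-consistent covariance
library(lmtest)    # coeftest() for coefficient tables with a custom vcov

set.seed(1)
x <- runif(100)
y <- 1 + 2 * x + rnorm(100, sd = 0.5 + x)  # deliberately heteroscedastic errors

fit <- lm(y ~ x)

ols_table    <- coeftest(fit)                                    # classical SEs
robust_table <- coeftest(fit, vcov = vcovHC(fit, type = "HC3"))  # robust SEs

# Point estimates are identical; only the "Std. Error" column differs.
all.equal(ols_table[, "Estimate"], robust_table[, "Estimate"])
```

The same logic is why the SPSS OLS output already contains the coefficient estimates you need: only the uncertainty column is recomputed.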
The simplest way to solve your immediate problem, given that most of your data fit simple linear regression well except for one depth, is to separate the issue of the model itself from that of displaying the model results. For the one depth that required a transformation of variables, back-transform the regression fit to the original scale before plotting. For that depth you will have a curve rather than the straight lines that characterize the other depths, but you should still have a useful x-intercept, and the slope of the curve near that intercept will be a starting point for comparing slopes among depths.
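A sketch in R of the back-transformation idea, assuming (hypothetically) that the troublesome depth needed a log transformation of the outcome; the data here are simulated stand-ins for yours:

```r
# Sketch: fit on the transformed scale, then back-transform the fitted
# values to the original scale for plotting alongside the other depths.
set.seed(2)
x <- seq(1, 10, length.out = 50)
y <- exp(0.3 * x - 1) * exp(rnorm(50, sd = 0.1))  # multiplicative errors

fit <- lm(log(y) ~ x)  # the model itself lives on the log scale

# Predict on the log scale, back-transform for the original-scale display:
grid <- data.frame(x = seq(1, 10, length.out = 200))
grid$y_hat <- exp(predict(fit, newdata = grid))

plot(x, y)
lines(grid$x, grid$y_hat)  # a curve, not a straight line, on this scale
```

The fitted object still reports slopes on the transformed scale; the back-transformation is only for the display.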
You should, however, consider why this particular depth seems to have such different properties from the others. Is it at an extreme of the depth range, perhaps beyond some type of boundary (with respect to temperature, mixing, etc.) relative to the other depths? Or might the measurements at that depth have had some systematic errors, in which case you shouldn't be using them at all? Such scientific and technical issues are much more important than the details of the statistical approach.
For the broader issues raised in your question, the assumptions underlying linear models are discussed extensively on this site, for example here. Linearity of outcome with respect to the predictor variables is important, but other assumptions like normal distributions of errors mainly affect the ability to interpret p-values. If there is linearity with respect to predictor variables, the regression will still give a useful estimate of the underlying relation. Generalized linear models provide a way to deal with errors that are a function of the predicted value, as you seem to have for that one troubling depth.
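As an illustration of that last point, here is a sketch of a generalized linear model in R for errors whose spread grows with the predicted value, using a Gamma family with a log link (the simulated data and the choice of family are assumptions for illustration, not a recommendation for your specific data):

```r
# Sketch: a GLM for errors whose spread grows with the mean
# (Gamma family: variance proportional to the square of the mean).
set.seed(3)
x  <- runif(80, 1, 5)
mu <- exp(0.2 + 0.5 * x)                 # true mean on the original scale
y  <- rgamma(80, shape = 10, rate = 10 / mu)  # mean mu, sd proportional to mu

fit <- glm(y ~ x, family = Gamma(link = "log"))
summary(fit)  # coefficients are on the log scale; exp() them to interpret
```

With a log link the fitted curve is positive everywhere, which is often sensible for concentrations and rates.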
Note that your experimental design, if it is an observational study based on concentrations of chemicals measured at different depths, already violates one of the assumptions of standard linear regression, as there presumably are errors in the values of the predictor variables. What you really have in that case is an errors-in-variables model. In practice that distinction is often overlooked, but your regression models (like those of most scientists engaged in observational rather than controlled studies) already violate strict linear regression assumptions.
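A quick simulation (hypothetical data, not your measurements) shows the usual consequence of ignoring errors in a predictor, namely attenuation of the estimated slope toward zero:

```r
# Sketch: measurement error in a predictor attenuates the OLS slope
# toward zero (the errors-in-variables problem).
set.seed(4)
x_true <- rnorm(1000)
y      <- 2 * x_true + rnorm(1000, sd = 0.5)  # true slope is 2
x_obs  <- x_true + rnorm(1000, sd = 1)        # predictor observed with error

coef(lm(y ~ x_true))["x_true"]  # close to the true slope of 2
coef(lm(y ~ x_obs))["x_obs"]    # attenuated: roughly 2 * 1 / (1 + 1) = 1
```

The attenuation factor is var(x_true) / (var(x_true) + var(measurement error)), so the more measurement noise, the flatter the estimated slope.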
Finally, although I appreciate that you have already done much data analysis, consider whether you really should use concentration ratios as predictor variables. Ratios are notoriously troublesome, particularly when a denominator can be close to 0. Almost anything that can be accomplished with ratios as predictors can be done with log transformations of the numerator and denominator variables. As I understand your situation, you have a single outcome variable (the rate of production of some chemical) and multiple measured concentrations of other chemicals, and you have examined various ratios of those other chemicals as predictors of the outcome. If you instead formed a combined regression model that used the log concentrations of all the other chemicals as predictors, you might end up with a more useful model: one that can reveal unexpected interactions among the chemicals and that can still be interpreted in terms of ratios if you wish.
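The ratio interpretation works because a ratio model is a special case of the log-concentration model. A sketch with hypothetical simulated concentrations `A` and `B` (stand-ins for your actual variables):

```r
# Sketch: log concentrations as predictors instead of their ratio.
# If the fitted coefficients for log(A) and log(B) come out near +b and -b,
# the model is effectively using log(A/B) as a predictor; the log-concentration
# model lets the data decide instead of imposing that constraint.
set.seed(5)
A    <- rlnorm(60)
B    <- rlnorm(60)
rate <- exp(0.4 * log(A) - 0.4 * log(B) + rnorm(60, sd = 0.2))

fit_logs  <- lm(log(rate) ~ log(A) + log(B))  # coefficients free to differ
fit_ratio <- lm(log(rate) ~ log(A / B))       # forces equal-and-opposite coefficients

coef(fit_logs)  # expect roughly +0.4 for log(A) and -0.4 for log(B)
```

The ratio model is nested inside the log-concentration model, so you can also test formally whether the ratio constraint is supported by the data.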
Best Answer
The robustbase package has an anova.lmrob function for performing a robust analysis of deviance for two competing, nested linear regression models m1 and m2 fitted by lmrob. For example, m1 might include only an intercept, while m2 includes the intercept plus all the predictors you are interested in, say X1 and X2.
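A minimal R sketch of this comparison (it assumes the robustbase package is installed; the data frame and variable names are hypothetical, simulated here so the code is self-contained):

```r
# Sketch: robust analysis of deviance for nested models fitted by lmrob.
library(robustbase)

set.seed(6)
dat <- data.frame(X1 = rnorm(50), X2 = rnorm(50))
dat$y <- 1 + dat$X1 + rnorm(50)  # hypothetical data for illustration

m1 <- lmrob(y ~ 1, data = dat)        # intercept-only model
m2 <- lmrob(y ~ X1 + X2, data = dat)  # intercept plus the predictors

# The larger model must be listed first:
anova(m2, m1, test = "Deviance")
```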
In the anova command, the models must be listed from largest (m2) to smallest (m1) for computational reasons.
The anova output will report a p-value for the reduction in robust deviance achieved by adding the predictors to the intercept-only model. If this reduction is significant, then including the predictors is warranted.