If you set up the data in one long column, with A/B as a new indicator column, you can then fit your regression as a GLM with a continuous Time variable and a nominal Experiment variable (A, B). The ANOVA output will give you the significance of the differences between the parameters: "Intercept" is the common intercept, the "Experiment" factor reflects the difference between the intercepts (actually the overall means) of the experiments, the "Time" factor is the common slope, and the Time × Experiment interaction is the difference between the experiments with respect to the slope.
I have to admit I cheat (?) and run the models separately first to get the two sets of parameters and their errors, and then run the combined model to obtain the differences between the treatments (in your case, A and B)...
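The two-step approach above can be sketched in Python with statsmodels. The data, column names, and coefficients here are hypothetical, just to show the pattern: fit each experiment separately, then fit the combined model where the interaction term gives the slope difference directly.

```python
# Sketch of separate fits followed by a combined model with an
# interaction term; all data and names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
time = np.tile(np.arange(10.0), 2)
experiment = np.repeat(["A", "B"], 10)
# Simulated data: B has a different intercept (2.0 vs 1.0) and
# slope (0.8 vs 0.5) from A
y = np.where(experiment == "A", 1.0 + 0.5 * time, 2.0 + 0.8 * time)
y = y + rng.normal(scale=0.2, size=20)
df = pd.DataFrame({"y": y, "time": time, "experiment": experiment})

# Step 1: fit each experiment separately to inspect its parameters
fit_a = smf.ols("y ~ time", data=df[df.experiment == "A"]).fit()
fit_b = smf.ols("y ~ time", data=df[df.experiment == "B"]).fit()

# Step 2: combined model; "experiment" shifts the intercept and the
# time:experiment interaction tests the difference in slopes
combined = smf.ols("y ~ time * experiment", data=df).fit()
print(combined.summary())
```

Note that with the full interaction the combined fit reproduces the separate fits exactly, so the interaction coefficient equals the difference between the two separately estimated slopes; the combined model simply adds the standard error and p-value for that difference.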
From the ideal gas law, $PV=nRT$, a proportional model is suggested. Make sure your units are in absolute temperature. Asking for a proportional result would also imply a proportional error model. Consider, perhaps, $Y=a D^b S^c$; for multiple linear regression one can then use $\ln (Y)=\ln (a)+b \ln (D)+c \ln (S)$ by taking the logarithms of the Y, D, and S values, so that this looks like $Y_l=a_l+b D_l+c S_l$, where the $l$ subscripts mean "logarithm of." This may work better than the linear model you are using, and the errors are then of the relative (proportional) type.
To decide what type of model to use, try one and check whether the residuals are homoscedastic. If they are not, you have a biased model; try something else, such as modeling the logarithms (as above), taking one or more reciprocals of the x or y data, square roots, squares, exponentials, and so forth, until the residuals are homoscedastic. If no model yields homoscedastic residuals, use multiple linear Theil regression, with censoring if needed.
Normality of the data on the y-axis is not required, but outliers can, and often do, distort the regression parameters markedly. If homoscedasticity cannot be achieved, ordinary least squares should not be used and some other type of regression needs to be performed, e.g., weighted regression, Theil regression, least squares in x, Deming regression, and so forth. Also, the errors should not be serially correlated.
The meaning of the output: $z = (a_{1} - b_{1}) / \sqrt{SE_{a_{1}}^2 + SE_{b_{1}}^2}$ may or may not be relevant. This assumes that the total variance is the sum of two independent variances. To put this another way, independence is orthogonality (perpendicularity) on an $x,y$ plot. That is, the total variability (variance) then follows the Pythagorean theorem, $H=+\sqrt{A^2+O^2}$, which may or may not be the case for your data. If it is, then the $z$-statistic is a relative distance: a difference of means (a distance) divided by the Pythagorean, a.k.a. vector, addition of the standard errors (SEs), which are standard deviations (SDs) divided by $\sqrt{N}$ and are themselves distances. Dividing one distance by the other normalizes them, i.e., the difference in means divided by the total (standard) error, which is then in a form where one can apply $N(0,1)$ to find a probability.
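In code, the $z$-statistic above is a one-liner once you have the two slopes and their standard errors from separate fits; the numbers below are hypothetical placeholders.

```python
# z-test for the difference of two independently estimated slopes;
# the slopes and SEs are hypothetical values from two separate fits.
import math
from scipy.stats import norm

a1, se_a1 = 0.52, 0.03   # slope and SE from fit A (hypothetical)
b1, se_b1 = 0.61, 0.04   # slope and SE from fit B (hypothetical)

# Variances add under independence (the Pythagorean step above)
z = (a1 - b1) / math.sqrt(se_a1**2 + se_b1**2)
p = 2 * norm.sf(abs(z))  # two-sided p-value from N(0, 1)
print(z, p)
```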
Now, what happens if the measures are not independent, and how can one test for it? You may remember from geometry that triangles that are not right-angled add their sides as $C^2=A^2+B^2-2 A B \cos (\theta )$, where $\theta =\angle(A,B)$; if not, refresh your memory here. That is, when there is something other than a 90-degree angle between the axes, we have to include that angle in the calculation of total distance. Recall first what correlation is: standardized covariance. For total distance $\sigma _T$ and correlation $\rho_{A,B}$ this becomes $\sigma _T^2=\sigma _A^2+\sigma _B^2-2 \sigma _A \sigma _B \rho_{A,B}$. In other words, if your estimates are correlated (e.g., pairwise), they are not independent.
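The law-of-cosines form above can be checked numerically: for a difference $A-B$, the variance is $\mathrm{Var}(A)+\mathrm{Var}(B)-2\,\mathrm{Cov}(A,B)$, which matches the formula with $\rho = \mathrm{Cov}(A,B)/(\sigma_A \sigma_B)$. The correlation value below is an arbitrary illustration.

```python
# Numeric check: Var(A - B) = Var(A) + Var(B) - 2*sd_A*sd_B*rho
import numpy as np

rng = np.random.default_rng(3)
# Draw correlated pairs (A, B) with rho = 0.6 (illustrative)
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
ab = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
A, B = ab[:, 0], ab[:, 1]

empirical = np.var(A - B)
rho = np.corrcoef(A, B)[0, 1]
formula = np.var(A) + np.var(B) - 2 * np.std(A) * np.std(B) * rho
print(empirical, formula)  # both near 1 + 1 - 2*0.6 = 0.8
```

With positive correlation the total variance of the difference shrinks below the independent (Pythagorean) value of 2, which is exactly why ignoring correlation makes the $z$-statistic from the previous formula too small in magnitude or too large, depending on the sign of $\rho$.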
Best Answer
To compare models you need replicates at each level of the predictor. This allows you to partition SS(residual) into SS(lack of fit) plus SS(pure error). Ideally SS(LOF) → 0; you can test this using an F-test (https://en.wikipedia.org/wiki/Lack-of-fit_sum_of_squares). If your models all have two parameters, then the one with the lowest F will be the best choice. It may be that the theory you rest a model on is desirable, or perhaps you are just interested in minimizing SS(residual). The important thing is that you have replicates; without them you cannot partition SS(residual).
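The partition described above can be sketched directly: with replicates at each x level, the pure-error sum of squares is the scatter of replicates around their level means, and the lack-of-fit sum of squares is what remains of the residual. The data, replicate counts, and coefficients are hypothetical.

```python
# Lack-of-fit F-test for a straight-line fit with replicates
# (simulated, hypothetical data)
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
levels = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x = np.repeat(levels, 4)               # 4 replicates per level
y = 2.0 + 1.5 * x + rng.normal(scale=0.3, size=x.size)

# Fit a straight line and get SS(residual)
slope, intercept = np.polyfit(x, y, 1)
fitted = intercept + slope * x
ss_resid = np.sum((y - fitted) ** 2)

# SS(pure error): replicate scatter around each level mean
ss_pe = sum(np.sum((y[x == lv] - y[x == lv].mean()) ** 2)
            for lv in levels)
ss_lof = ss_resid - ss_pe              # SS(lack of fit)

n, m, p = x.size, levels.size, 2       # points, levels, parameters
F = (ss_lof / (m - p)) / (ss_pe / (n - m))
p_value = stats.f.sf(F, m - p, n - m)
print(F, p_value)
```

Since the simulated data really is linear, the lack-of-fit F should usually be unremarkable here; a significant F would say the straight line misses systematic structure that the replicate scatter cannot explain.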