One problem is that you've written
$$Y=α+β⋅X$$
That is a simple deterministic (i.e. non-random) model. In that case you could back-transform the coefficients to the original scale, since it's just a matter of simple algebra. But in ordinary regression you only have $E(Y|X)=α+β⋅X$; you've left the error term out of your model. If the transformation from $Y$ back to $Y_{orig}$ is non-linear, you may have a problem, since $E\big(f(X)\big)≠f\big(E(X)\big)$ in general. I think that may account for the discrepancy you're seeing.
Edit: Note that if the transformation is linear, you can back transform to get estimates of the coefficients on the original scale, since expectation is linear.
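A quick numerical sketch (with arbitrary made-up numbers) illustrates both points: expectation does not commute with a non-linear map like squaring, but it does commute with a linear one.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=1_000_000)  # X ~ N(2, 3^2), arbitrary choice

# Non-linear transformation: squaring (as when back-transforming a sqrt model).
print(np.mean(x) ** 2)   # f(E(X)) = 2^2 = 4, approximately
print(np.mean(x ** 2))   # E(f(X)) = 2^2 + 3^2 = 13, approximately -- not 4!

# Linear transformation: expectation commutes, so back-transforming is safe.
print(np.mean(3 * x + 1))       # ≈ 3*E(X) + 1 = 7
print(3 * np.mean(x) + 1)       # same value
```

The gap between $4$ and $13$ is exactly the variance of $X$, which foreshadows the $\sigma^2$ term that appears in the squared-back-transform calculation below.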
Your model and its estimates posit that
$$\sqrt{Y} = 2.1014 D - 3.0147 + \varepsilon$$
where $D$ is Dose.Back (or its logarithm) and $\varepsilon$ is a random variable of zero expectation whose standard deviation is approximately $16.28.$ Squaring both sides gives
$$Y = (2.1014 D - 3.0147 + \varepsilon)^2.$$
Adding $0.01$ to $D$ yields the value
$$(2.1014 (D + 0.01) - 3.0147 + \varepsilon')^2.$$
The difference is
$$2(2.1014 D - 3.0147 + \varepsilon)(\varepsilon' - \varepsilon + (0.01)(2.1014)) + (\varepsilon' - \varepsilon + (0.01)(2.1014))^2.$$
This expression, as well as its expectation, is complicated. Let us therefore focus on the simpler question of how the expectation of $Y$ varies with $D$. Note that
$$\eqalign{
\mathbb{E}(Y) &= \mathbb{E}\left[\left(2.1014 D - 3.0147 + \varepsilon\right)^2\right] \\
&= (2.1014D - 3.0147)^2 + 2(2.1014D - 3.0147) \mathbb{E}(\varepsilon) + \mathbb{E}(\varepsilon^2) \\
&=(2.1014D - 3.0147)^2 + 0 + (16.28)^2.
}$$
(This result is of considerable interest in its own right because it reveals the role played by the mean squared error in interpreting the relationship between $D$ and $Y$.)
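As a sanity check, a small simulation reproduces this formula (the dose $D = 10$ is an arbitrary illustrative value, not from the original data):

```python
import numpy as np

# Verify E(Y) = (2.1014 D - 3.0147)^2 + sigma^2 by simulation.
# D = 10 is an arbitrary illustrative dose.
rng = np.random.default_rng(1)
D, sigma = 10.0, 16.28
eps = rng.normal(0.0, sigma, 2_000_000)

y = (2.1014 * D - 3.0147 + eps) ** 2          # simulated Y = (fit + error)^2
formula = (2.1014 * D - 3.0147) ** 2 + sigma ** 2

print(y.mean())   # ≈ 589, matching the formula below
print(formula)    # ≈ 589
```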
When $0.01$ is added to $D$ the value of $\mathbb{E}(Y)$ increases by
$$2(2.1014)(2.1014D - 3.0147)(0.01) + (2.1014)^2(0.01)^2.$$
The last term, $(2.1014)^2(0.01)^2 \approx 0.0004,$ is so small compared to the squared errors (whose typical size is $16.28^2 \approx 265$) that we may neglect it. In this case, to a good approximation, this fitted model associates an (additive) increase in $D$ of $0.01$ with an increase in $Y$ of
$$2(2.1014)(2.1014D - 3.0147)(0.01) = 0.0883176\, D - 0.1267.$$
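To see how good this linearization is, compare the exact change in the squared mean response with the approximation at an illustrative dose ($D = 10$, an arbitrary choice):

```python
# Exact vs. linearized change in (2.1014 D - 3.0147)^2 when 0.01 is added to D.
# D = 10 is an arbitrary illustrative value.
def m(d):
    return 2.1014 * d - 3.0147

D = 10.0
exact = m(D + 0.01) ** 2 - m(D) ** 2
approx = 2 * 2.1014 * m(D) * 0.01   # the formula 2(2.1014)(2.1014 D - 3.0147)(0.01)

# The two differ only by the neglected term (2.1014)^2 (0.01)^2 ≈ 0.0004.
print(exact, approx)
```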
When $D$ is the natural logarithm of some quantity $d$, a 1% multiplicative increase in $d$ causes a value of approximately $0.01$ to be added to $D$, because
$$\log(1.01 d) = \log(1.01) + \log(d) = \left(0.01 - (0.01)^2/2 + \cdots\right) + D \approx 0.01 + D.$$
If you used a logarithm to another base $b$, entailing $D = \log_b(d) = \log(d)/\log(b),$ then a 1% multiplicative increase in $d$ causes a value of approximately $(0.01)/\log(b)$ to be added to $D$, so everywhere "$0.01$" occurs in the preceding formulas you must use $(0.01/\log(b))$ instead.
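A two-line check of the base dependence (the quantity $d = 50$ is arbitrary; the increments do not depend on $d$ at all):

```python
import math

# A 1% multiplicative increase in d adds log(1.01) ≈ 0.00995 to D = log(d),
# but only log(1.01)/log(b) to D = log_b(d). Here d = 50 is an arbitrary value.
d = 50.0
print(math.log(1.01 * d) - math.log(d))       # ≈ 0.00995, natural log
print(math.log10(1.01 * d) - math.log10(d))   # ≈ 0.00995 / log(10) ≈ 0.00432
```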
Best Answer
You don't just exponentiate the parameter when you back-transform (even when it's positive).
You still have a decreasing relationship after exponentiation.
e.g. if you fit a model like $\log(Y) = \beta_0+\beta_1x+\epsilon$ then you will have a fitted relationship on the log scale like $\widehat{\log(Y)} = \widehat{\beta_0}+\widehat{\beta_1}x$. Then
\begin{eqnarray} e^{\widehat{\log(Y)}} &=& e^{\widehat{\beta_0}+\widehat{\beta_1}x}\\ &=& e^{\widehat{\beta_0}}\,e^{\widehat{\beta_1}x}\\ &=& B_0\,e^{\widehat{\beta_1}x} \end{eqnarray}
What does this relationship look like when $\widehat{\beta_1}$ is negative? It's a decreasing function of $x$.
Be warned, however: if you just exponentiate a least squares fit obtained on the log scale, the back-transformed fit no longer estimates the conditional mean of $Y$. If the errors are normal on the log scale, it estimates the conditional median instead, and it systematically underestimates the mean (by the factor $e^{\sigma^2/2}$).
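A small simulation (all parameters made up for illustration) shows the effect: the exponentiated least-squares fit tracks the conditional median of $Y$, while the true conditional mean exceeds it by the lognormal factor $e^{\sigma^2/2}$.

```python
import numpy as np

# Simulate log(Y) = beta0 + beta1*x + eps with made-up parameters, fit by
# least squares on the log scale, then exponentiate the fitted line.
rng = np.random.default_rng(2)
n = 200_000
beta0, beta1, sigma = 1.0, -0.5, 0.6   # illustrative values (assumptions)
x = rng.uniform(0.0, 2.0, n)
log_y = beta0 + beta1 * x + rng.normal(0.0, sigma, n)

# OLS fit on the log scale via the normal equations.
X = np.column_stack([np.ones(n), x])
b0_hat, b1_hat = np.linalg.lstsq(X, log_y, rcond=None)[0]

fit_back = np.exp(b0_hat + b1_hat * 1.0)              # back-transformed fit at x = 1
median_y = np.exp(beta0 + beta1 * 1.0)                # true conditional median of Y
mean_y = np.exp(beta0 + beta1 * 1.0 + sigma**2 / 2)   # true conditional mean of Y

# fit_back lands near the median; the mean is noticeably larger.
print(fit_back, median_y, mean_y)
```

Note also that since $\widehat{\beta_1} < 0$ here, the back-transformed curve $e^{\widehat{\beta_0}}e^{\widehat{\beta_1}x}$ is decreasing in $x$, as described above.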