Solved – Propagation of error in log ratios

error, error-propagation, standard-deviation, standard-error

I have experimental "before" and "after" measurements that I need to compare. The standard way in the field is to look at the log ratio, $y = \log_2(\frac{a}{b})$, so no change is $0$. I'm trying to find the correct way to calculate the uncertainty for this representation. It's safe here to assume covariance is zero.

$$ y = \log_2(\frac{a}{b}) = \log_2(a)-\log_2(b) $$

I've tried solving both representations. The first (quotient form), edited to correct mistakes:

$$ z = \frac{a}{b}, \space \space y = \log_2(z) $$
$$ \sigma_z = z\sqrt{\left(\frac{\sigma_a}{a}\right)^2 + \left(\frac{\sigma_b}{b}\right)^2} $$
$$ \sigma_y = \frac{\sigma_z}{z \ln(2)} $$
$$ \sigma_y = \frac{\sqrt{\left(\frac{\sigma_a}{a}\right)^2 + \left(\frac{\sigma_b}{b}\right)^2}}{\ln(2)} $$
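As a quick numerical sanity check of the quotient-route formula, it can be evaluated directly; the measurement values and uncertainties below are made up for illustration only:

```python
import math

# Illustrative values (assumptions, not from the question)
a, sigma_a = 8.0, 0.4   # "after" measurement and its standard deviation
b, sigma_b = 2.0, 0.1   # "before" measurement and its standard deviation

# sigma_y = sqrt((sigma_a/a)^2 + (sigma_b/b)^2) / ln(2)
rel_a = sigma_a / a
rel_b = sigma_b / b
sigma_y = math.sqrt(rel_a**2 + rel_b**2) / math.log(2)

y = math.log2(a / b)
print(f"y = {y:.4f}, sigma_y = {sigma_y:.4f}")
```

With these inputs the relative errors are both 5%, giving $\sigma_y \approx 0.102$ on a log-ratio of $y = 2$.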

The second (difference form), edited to correct mistakes:

$$ y_a = \log_2(a), \quad y_b = \log_2(b), \quad y = y_a - y_b $$
$$ \sigma_{y_a}^2 = \left( \frac{\sigma_a}{a \ln(2)} \right)^2 ,
\sigma_{y_b}^2 = \left( \frac{\sigma_b}{b \ln(2)} \right)^2 $$
$$ \sigma_y = \sqrt{\sigma_{y_a}^2 + \sigma_{y_b}^2} $$
$$ \sigma_y = \sqrt{\left( \frac{\sigma_a}{a \ln(2)} \right)^2 + \left( \frac{\sigma_b}{b \ln(2)} \right)^2} $$
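Evaluating both routes with the same (made-up) inputs confirms they yield identical $\sigma_y$, since the factor $1/\ln 2$ simply comes out of the square root:

```python
import math

a, sigma_a = 8.0, 0.4   # illustrative values, not from the question
b, sigma_b = 2.0, 0.1

ln2 = math.log(2)

# Route 1: propagate through z = a/b, then through log2(z)
sigma_route1 = math.sqrt((sigma_a / a)**2 + (sigma_b / b)**2) / ln2

# Route 2: propagate log2(a) and log2(b) separately, then through the difference
sigma_route2 = math.sqrt((sigma_a / (a * ln2))**2 + (sigma_b / (b * ln2))**2)

print(sigma_route1, sigma_route2)
assert math.isclose(sigma_route1, sigma_route2)
```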

(Edit: I originally made algebra errors and thought these two results were not equivalent; after the corrections above, they are.) What's the right way to calculate the error here?

Bonus: Both of these representations are also equivalent to the equations presented in this thesis by Binu V.S., which deals specifically with log2 ratios. He presents a version with covariance (worth the read if you need it), but omitting covariance:

$$ \sigma_y = \sqrt{ \frac{ \frac{\sigma_a^2}{b^2} + \frac{a^2 \sigma_b^2}{b^4} }{ \left( \frac{a}{b} \ln(2) \right)^2 } } $$
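A quick numeric check (again with made-up inputs) that the thesis formula reduces to the simplified form above: its numerator is just $\sigma_z^2$ for $z = a/b$, and its denominator is $(z \ln 2)^2$:

```python
import math

a, sigma_a = 8.0, 0.4   # illustrative values, not from the question
b, sigma_b = 2.0, 0.1

# Thesis version (covariance term omitted)
num = sigma_a**2 / b**2 + a**2 * sigma_b**2 / b**4
sigma_thesis = math.sqrt(num / ((a / b) * math.log(2))**2)

# Simplified form derived in the question
sigma_simple = math.sqrt((sigma_a / a)**2 + (sigma_b / b)**2) / math.log(2)

assert math.isclose(sigma_thesis, sigma_simple)
print(sigma_thesis, sigma_simple)
```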

Best Answer

$$y=\log_2 \frac a b=\log_2 a - \log_2 b$$

We know how to propagate errors in subtraction and addition: $$\delta y\approx\delta (\log_2 a)+\delta (\log_2 b)$$

All we need is to handle the logs: $$\delta (\log_2 a)=\frac{\delta a} {a\ln 2}$$

So you get $$\delta y\approx\frac{\delta a} {a\ln 2}+ \frac{\delta b} {b\ln 2}$$ In other words, the absolute error of $y$ is the sum of the relative errors of $a$ and $b$, up to the factor $1/\ln 2$ that comes from using base-2 logs.

Of course, you can go for the "exact" formula $$\delta (x+z) = \sqrt{(\delta x)^2+(\delta z)^2}$$ However, remember that it's "exact" only to an extent, assuming there are no correlations etc. Hence, I never use it.
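To see how much the worst-case linear sum overstates the uncertainty compared with the quadrature (independent-errors) formula, here is a quick comparison with made-up inputs:

```python
import math

a, da = 8.0, 0.4   # illustrative values, not from the question
b, db = 2.0, 0.1
ln2 = math.log(2)

linear = da / (a * ln2) + db / (b * ln2)        # worst-case linear bound
quad = math.hypot(da / (a * ln2), db / (b * ln2))  # quadrature (independent errors)

print(linear, quad)  # the linear bound is always >= the quadrature value
```

With equal 5% relative errors, the linear bound is larger by a factor of $\sqrt{2}$.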
