The proper way to combine relative error when multiplying/dividing

error-propagation, percentages

I need to multiply two independently-gathered single-variable data points with percent errors. Looking online, I have found two contradictory ways to combine the percent errors. One way is to simply sum them together; the other is to take the square root of the sum of the squares. At least in physics, I've always used the first way, but I have found several websites online supporting either position. Which one is the correct way? If they are both correct, what situations do I use each method in?

Site supporting the first method: [1]

Sites supporting the second method: [2], [3]

I noticed that on the second site, they said all of their formulas assume the two data points are independent. That makes me lean toward the second method being correct for my case, but I'm not totally sure.

Best Answer

Neither of these is correct, although they are both approximately correct if the errors are small. The exact answer is the following. If you're multiplying a quantity which you are uncertain about by a factor of $1 \pm p$ with a quantity which you are uncertain about by a factor of $1 \pm q$, then you are uncertain about their product by a factor of

$$(1 \pm p)(1 \pm q) = 1 \pm p \pm q + pq.$$

This is simple arithmetic. For example, if you have 10% uncertainty about the first quantity and 8% uncertainty about the second quantity, then you're uncertain about their product by a factor of

$$(1 \pm 0.1)(1 \pm 0.08) = 1 \pm 0.18 + 0.008.$$

Note that this is not symmetric about $1$; the upper bound is $1.1 \times 1.08 = 1.188$ and the lower bound is $0.9 \times 0.92 = 0.828$. You can say conservatively that the uncertainty in the product is $1 \pm 0.188$, or 18.8%, but this loses a little bit on the lower end.
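For concreteness, here is a short Python sketch (not from the original post) that computes these exact bounds for the 10% / 8% example and compares them with the naive sum of relative errors:

```python
# Exact worst-case bounds for the product of two uncertain quantities,
# using the 10% / 8% example above (illustrative sketch only).
p, q = 0.10, 0.08

upper = (1 + p) * (1 + q)   # 1.188 -> +18.8%
lower = (1 - p) * (1 - q)   # 0.828 -> -17.2%

print(f"upper factor: {upper:.3f}  (+{(upper - 1) * 100:.1f}%)")
print(f"lower factor: {lower:.3f}  ({(lower - 1) * 100:.1f}%)")
print(f"naive sum of relative errors: +/-{(p + q) * 100:.1f}%")
```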

If $p$ and $q$ are small, then $pq$ is very small and $\pm p \pm q + pq$ is approximately $\pm p \pm q$; this is where "add the relative errors" comes from. It's worth knowing, though, that this approximation breaks down if $p$ and $q$ are not small (or if you are multiplying many terms).
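To see the approximation degrade, here is a hypothetical sketch that multiplies $n$ factors, each with an assumed 5% error, and compares the exact worst-case relative error (upper side) with the linear "add the relative errors" approximation:

```python
# How the sum-of-relative-errors approximation degrades as more factors
# are multiplied (hypothetical example; 5% error per factor is assumed).
p = 0.05
for n in (2, 5, 10, 20):
    exact_upper = (1 + p) ** n - 1   # exact worst-case relative error, upper side
    linear = n * p                   # "add the relative errors" approximation
    print(f"n={n:2d}: exact +{exact_upper:.3f}, linear approx +{linear:.3f}")
```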


If you read the second link more carefully, it tells you to compute the square root of a sum of squares for the absolute error of a sum (at least I think that's what it's doing; it's not very clear, since the term "SE" is never defined). This isn't a method being suggested for the relative error of a product at all. The main reason you'd compute the error this way is if you have reason to believe that your errors are well modeled by independent normal / Gaussian distributions. This square-root-of-sum-of-squares behavior governs how independent Gaussians add: we have

$$N(0, \sigma_1) + N(0, \sigma_2) \sim N(0, \sqrt{\sigma_1^2 + \sigma_2^2}).$$

But this is a specific modeling assumption that may break down. The exact expression above gives worst-case bounds that are independent of the nature of the error, as long as you actually know a bound on it.
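As a quick sanity check of that quadrature rule (here the second argument of $N$ is a standard deviation), a Monte Carlo sketch with made-up standard deviations of 0.10 and 0.08:

```python
# Monte Carlo check that independent Gaussian errors add in quadrature:
# the standard deviations combine as sqrt(s1^2 + s2^2).
import numpy as np

rng = np.random.default_rng(0)
s1, s2 = 0.10, 0.08   # assumed standard deviations for illustration
n = 1_000_000

total = rng.normal(0, s1, n) + rng.normal(0, s2, n)
print(f"empirical std:    {total.std():.4f}")
print(f"sqrt(s1^2 + s2^2): {np.hypot(s1, s2):.4f}")
```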