As you probably know, whenever you make a measurement, there is a distribution of values you might get, due to noise, systematic errors, etc. Only in very contrived circumstances is this distribution symmetric about the mean, and only in even more restrictive cases is it Gaussian, but nonetheless we often make those assumptions for simplicity, waving our hands and chanting "Central Limit Theorem."
Sometimes, though, we want to convey a bit more about the shape of this distribution than just its mean and some proxy for its "width." The next-simplest thing to do is report an average and a confidence interval. Instead of $10\pm3$, which means "mean of $10$, and a $68\%$¹ chance the value is within $3$ of $10$," we might say $10^{+6}_{-2}$, which means "mean of $10$, a $16\%$ chance the value is below $8$, and a $16\%$ chance the value is above $16$." The percentiles are chosen so as to encompass the same area under the PDF of the distribution as in the symmetric case, but we allow for the distribution to be asymmetric. Whatever the distribution is (and we have to have some way of getting at it, either analytically or through enough repeated trials), we choose a lower percentile such that the probability of being below this is the same as the probability of being less than $1\sigma$ below the mean in a Gaussian distribution, and ditto for the upper value.
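As a sketch of how such an interval might be extracted from repeated trials (the lognormal sample here is a hypothetical stand-in for a skewed measurement distribution):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical skewed "measurement" distribution standing in for repeated trials.
samples = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

# Percentiles matching Gaussian 1-sigma coverage: 15.87% and 84.13%.
lo, hi = np.percentile(samples, [15.87, 84.13])
mean = samples.mean()

# Report as mean^{+upper}_{-lower}
print(f"{mean:.3f} +{hi - mean:.3f} / -{mean - lo:.3f}")
```

Because the distribution is skewed, the upper and lower widths come out unequal, which is exactly the information the asymmetric notation preserves.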
So to address the points explicitly:
- $10^{+6}_{-2}$ means you are as likely to be above $16$ as below $8$, despite the average being $10$.
- In simple cases you can calculate these by finding the appropriate percentiles from a known PDF/CDF. Probably the most illustrative case is a Poisson-distributed count with small mean, say $1.10$. In a unit of time, you expect $1.10$ events. The standard deviation is about $1.05$, but saying $1.10\pm1.05$ wrongly suggests that negative counts are not too unlikely, even though they are impossible. Instead, you might say you expect $1.10^{+3.01}_{-0.67}$ (treating the distribution as continuous when finding the percentiles). In more complicated cases, one often runs the data through the pipeline many times, varying the parameters slightly, to estimate how the final measurement is likely to vary as the unknown parameters vary over reasonable values.
- You can't really convert back to something symmetric without knowing the full underlying distribution. For some purposes it may be acceptable to keep the same width, so $+6, -2$ becomes $\pm4$.
- It's useful whenever there is an asymmetric underlying distribution of measurements, and further use of the reported values should take that into account. If your experiment crucially depends on the voltage not dropping below $120\ \mathrm{V}$, then a supply quoted as $130^{+40}_{-4}\ \mathrm{V}$ is acceptable, since $120\ \mathrm{V}$ is $2.5$ lower-side standard deviations below the mean, whereas if someone just told you they could supply $130\pm22\ \mathrm{V}$, you could not trust the source.
¹ This comes from $\int_{-\sigma}^\sigma (1/\sqrt{2\pi\sigma^2}) \mathrm{e}^{-x^2/(2\sigma^2)} \ \mathrm{d}x \approx 0.68$. If we had implicitly been discussing $2$-$\sigma$ uncertainties rather than $1$-$\sigma$, the limits of integration would have doubled and we would say $95\%$.
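The footnote's integral reduces to the error function, so the coverage fractions can be checked in one line per case:

```python
from math import erf, sqrt

# Fraction of a Gaussian within n standard deviations of the mean:
# (1/sqrt(2*pi*s^2)) * integral of exp(-x^2/(2 s^2)) from -n*s to +n*s = erf(n/sqrt(2))
for n in (1, 2, 3):
    print(f"{n}-sigma coverage: {erf(n / sqrt(2)):.4f}")
```

This prints approximately $0.6827$, $0.9545$, and $0.9973$, matching the quoted $68\%$ and $95\%$ figures.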
$n \lambda = 2 d \sin \theta \Rightarrow 0 = 2 \Delta d\,\sin \theta + 2 d \cos \theta \,\Delta \theta$ which is the same as your equation because you can substitute $n \lambda = 2 d \sin \theta$ into your equation to get mine.
Both routes produce $\Delta d = - d \cot \theta \,\Delta \theta$ where $\Delta \theta $ is in radians.
What are your values which produce such a large error?
Update as a result of a comment from the OP.
> Using $n=1$, $\lambda=63.095\,\mathrm{pm}$, $\theta = 7.27°$, the instrument used has $\Delta \theta = 0.09°$, then I get $d = 249.298\,\mathrm{pm}$ and $\Delta d = 175.878\,\mathrm{pm}$.
$\Delta \theta$ has to be in radians $(0.09^\circ \rightarrow 0.00157\,\mathrm{radian})$ and this results in $\Delta d = 3\,\mathrm{pm}$.
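As a quick numerical check of both calculations (values taken from the update above):

```python
from math import sin, tan, radians

n, lam = 1, 63.095         # wavelength in pm
theta = radians(7.27)      # Bragg angle
dtheta_deg = 0.09          # instrument resolution, in degrees

d = n * lam / (2 * sin(theta))                     # Bragg's law
dd_wrong = (d / tan(theta)) * dtheta_deg           # forgot to convert to radians
dd_right = (d / tan(theta)) * radians(dtheta_deg)  # radians, as required

print(f"d = {d:.3f} pm, wrong dd = {dd_wrong:.3f} pm, right dd = {dd_right:.2f} pm")
```

This reproduces $d \approx 249.298\,\mathrm{pm}$, the spurious $175.878\,\mathrm{pm}$, and the corrected $\Delta d \approx 3\,\mathrm{pm}$.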
> $\Delta\theta = |\sin(\theta-\Delta\theta)-\sin(\theta+\Delta\theta)|$
seems to be an estimate of the error in $\sin \theta$, i.e. $\Delta (\sin \theta)$, assuming that the angle can be measured to about half a degree.
That quantity is really $\Delta (\sin \theta)$, but it is being used as if it were $\Delta \theta$; this works because the angles are small, which is equivalent to setting $\cos \theta$ in your equation equal to one.
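A small numerical check of that small-angle claim (the $7.27°$ angle comes from the example above; the half-degree uncertainty is the assumption quoted earlier):

```python
from math import sin, cos, radians

theta = radians(7.27)    # angle from the example above
dtheta = radians(0.5)    # assumed half-degree reading uncertainty

exact = abs(sin(theta - dtheta) - sin(theta + dtheta))
identity = 2 * cos(theta) * sin(dtheta)   # exact trig identity for the difference
small_angle = 2 * dtheta                  # cos(theta) ~ 1 and sin(dtheta) ~ dtheta

print(exact, identity, small_angle)
```

For small $\theta$ and $\Delta\theta$ the three numbers agree to better than a percent, which is why replacing $\cos\theta$ by one is harmless here.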
Best Answer
Here are some thoughts about this question - but not a definitive answer: I don't believe a definitive answer exists.
In general, error propagation is founded on the assumption that the distribution of the errors is Gaussian, and that the error is small compared to the value of the quantity. In that case, a simple propagation of errors is possible.
For example, assuming that the error is small, you should be able to take the derivative of your function with respect to each of the variables - then you can use that to determine the error in the result.
For example, your case of
$$F =\sqrt{A^2+B^2}$$
Derivative with respect to A, B:
$$\frac{\partial F}{\partial A} = \frac{A}{\sqrt{A^2+B^2}}\\ \frac{\partial F}{\partial B} = \frac{B}{\sqrt{A^2+B^2}}$$
If the error in A is $\Delta A$, and the error in B is $\Delta B$, then the contributions add in quadrature (the square root of the sum of squares):
$$\Delta F = \frac{\sqrt{(A\,\Delta A)^2 + (B\,\Delta B)^2}}{\sqrt{A^2+B^2}}$$
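A sketch comparing the analytic propagation with a brute-force check (the central values and errors for $A$ and $B$ are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

A, dA = 3.0, 0.05   # hypothetical central values with small Gaussian errors
B, dB = 4.0, 0.04

# Analytic propagation in quadrature
F = np.hypot(A, B)
dF = np.sqrt((A * dA) ** 2 + (B * dB) ** 2) / F

# Brute-force check: sample A and B, push through the formula, take the spread
samples = np.hypot(rng.normal(A, dA, 1_000_000),
                   rng.normal(B, dB, 1_000_000))
print(f"analytic: {F:.3f} +/- {dF:.4f}   "
      f"Monte Carlo: {samples.mean():.3f} +/- {samples.std():.4f}")
```

When the errors are small and Gaussian, the sampled standard deviation lands right on top of the quadrature formula.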
However, the moment you state that your distribution is not symmetrical, the situation changes. If you have a sufficient number of variables with "small but non-Gaussian" error distributions, the central limit theorem tells us that the result will nonetheless be approximately Gaussian distributed: in that case you can compute the standard deviation of the (non-Gaussian) individual distributions, and use those as a surrogate in your error propagation calculation. But if you have a small number of variables (as in your example), and the distribution is not Gaussian, then there is no method I'm aware of to solve the question analytically. It can, however, be addressed with a simple Monte Carlo simulation.
In a Monte Carlo simulation, you sample the distributions of your input variables, and transform them according to the formula; you can then plot the resulting output distribution, and compute its shape etc.
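A minimal Monte Carlo sketch along those lines, assuming (purely for illustration) lognormal inputs to the $F=\sqrt{A^2+B^2}$ example:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 500_000

# Hypothetical asymmetric (lognormal) input distributions, for illustration only
A = rng.lognormal(mean=1.0, sigma=0.3, size=N)
B = rng.lognormal(mean=1.2, sigma=0.2, size=N)

# Transform the samples according to the formula
F = np.sqrt(A**2 + B**2)

# Summarize the (generally asymmetric) output with 1-sigma-equivalent percentiles
lo, hi = np.percentile(F, [15.87, 84.13])
mean = F.mean()
print(f"F = {mean:.2f} +{hi - mean:.2f} / -{mean - lo:.2f}")
```

The same sampled array can also be histogrammed to inspect the full shape of the output distribution, not just its percentiles.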
The upper and lower limits of the output can in principle be computed by setting the input variables to their extreme values (this is sometimes done for "worst case analysis"); but it is rare that that gives you any really useful insights, since error distributions most often describe something stochastic rather than deterministic (which means that an upper limit is almost never "hard"). And as I said - the moment you have more than a small number of variables with similar weights, the output distribution will start to look Gaussian.
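For contrast, the "worst case analysis" mentioned above might look like the following (the bounds are hypothetical):

```python
import itertools
import math

# Hypothetical inputs with hard upper/lower bounds
A, dA = 3.0, 0.5
B, dB = 4.0, 0.5

def f(a, b):
    return math.sqrt(a * a + b * b)

# Evaluate at every corner of the input box; since f is monotone in both
# variables (for positive values), the extremes occur at the corners.
corners = [f(A + sa * dA, B + sb * dB)
           for sa, sb in itertools.product((-1, 1), repeat=2)]
print(f"worst-case range: [{min(corners):.3f}, {max(corners):.3f}]")
```

Note the caveat above applies: these limits are only meaningful if the input bounds really are hard, which stochastic error bars almost never are.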