Are you able to take multiple measurements to be able to estimate the correlations/covariance involved?
It's not entirely clear how your channel counts enter the formula, but the "dirty solution" works every time:
- Estimate the covariance matrix OR make a lot of observations of correlated counts
- Based on the estimated covariance matrix, randomly generate a bunch of correlated sets of counts OR just pick the multiple observations you made
- Plug these counts into your formula for the dependent variable OR plug all the observations into the formula
- Study the distribution of the results (variance, histogram, etc)
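The steps above can be sketched in a few lines of NumPy. The means, the covariance matrix, and the formula $F=\sqrt{A^2+B^2}$ are assumed placeholders standing in for your actual counts and formula:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example: two correlated channel counts with means 100 and 150,
# and a covariance matrix estimated from repeated observations.
mean = np.array([100.0, 150.0])
cov = np.array([[100.0,  40.0],
                [ 40.0, 150.0]])

# Generate many correlated sets of counts (a Gaussian approximation to
# correlated Poisson counts; reasonable when the means are large).
samples = rng.multivariate_normal(mean, cov, size=100_000)
A, B = samples[:, 0], samples[:, 1]

# Plug each set of counts into the formula for the dependent variable.
F = np.sqrt(A**2 + B**2)

# Study the distribution of the results (variance, histogram, etc.).
print(f"mean of F = {F.mean():.2f}")
print(f"std  of F = {F.std():.2f}")
```

With real data you would replace the `multivariate_normal` draw by the observed sets of counts, and the rest is unchanged.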
The point is that if you don't require an analytic solution for the error of the dependent variable, you can always do it this way. And if you have a reliable covariance matrix and a large number of generated OR observed sets of counts, you also obtain the whole distribution of the results, not just the variance.
Here are some thoughts about this question - but not a definitive answer: I don't believe a definitive answer exists.
In general, error propagation is founded on the assumption that the distribution of the errors is Gaussian, and that the error is small compared to the value of the quantity. In that case, a simple propagation of errors is possible.
Assuming that the error is small, you can take the derivative of your function with respect to each of the variables, then use those derivatives to determine the error in the result.
For example, your case of
$$F =\sqrt{A^2+B^2}$$
The derivatives with respect to A and B are:
$$\frac{\partial F}{\partial A} = \frac{A}{\sqrt{A^2+B^2}}\\
\frac{\partial F}{\partial B} = \frac{B}{\sqrt{A^2+B^2}}$$
If the error in A is $\Delta A$, and the error in B is $\Delta B$, then (for independent errors) the total expected error is the sum of the squared contributions in quadrature:
$$\Delta F = \frac{\sqrt{(A\,\Delta A)^2 + (B\,\Delta B)^2}}{\sqrt{A^2+B^2}}$$
However, the moment you state that your distribution is not symmetrical, the situation changes. If you have a sufficient number of variables with "small but non-Gaussian" error distributions, the central limit theorem tells us that the result will nonetheless be Gaussian distributed: in that case you can compute the standard deviation of the (non-Gaussian) individual distributions, and use those as a surrogate in your error propagation calculation. But if you have a small number of variables (as in your example), AND the distribution is not Gaussian, then there is no method I'm aware of to solve the question analytically. It can, however, be addressed with a simple Monte Carlo simulation.
In a Monte Carlo simulation, you sample the distributions of your input variables, and transform them according to the formula; you can then plot the resulting output distribution, and compute its shape etc.
The upper and lower limits of the output can in principle be computed by setting the input variables to their extreme values (this is sometimes done for "worst case analysis"); but it is rare that that gives you any really useful insights, since error distributions most often describe something stochastic rather than deterministic (which means that an upper limit is almost never "hard"). And as I said - the moment you have more than a small number of variables with similar weights, the output distribution will start to look Gaussian.
Best Answer
The first is correct and the second is diabolically wrong.
People get the idea that Poisson errors mean you just take the square root of everything. No! The thing you take the square root of has to be an actual number of events. Any scaling factors are applied afterwards.
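A small simulation makes the point concrete. The mean count, the scaling factor `s`, and the variable names are hypothetical; the comparison is between taking the square root of the raw event count (correct) and of the already-scaled value (wrong):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed example: a detector records N raw events; the reported rate is
# the count scaled by s (e.g. counts per second, s = 1/exposure time).
true_mean = 400                  # expected raw event count
s = 0.1                          # hypothetical scaling factor

N = rng.poisson(true_mean, size=100_000)   # raw event counts
rate = s * N                               # scaled quantity

# Correct: error on the raw count is sqrt(N); scaling is applied afterwards.
err_correct = s * np.sqrt(true_mean)       # 0.1 * 20 = 2.0

# Wrong: taking the square root of the already-scaled value.
err_wrong = np.sqrt(s * true_mean)         # sqrt(40), about 6.3

print(f"empirical std of scaled rate: {rate.std():.2f}")
print(f"sqrt(raw count), then scale:  {err_correct:.2f}")
print(f"sqrt(scaled value):           {err_wrong:.2f}")
```

The empirical spread of the scaled rate matches `s * sqrt(N)`, not `sqrt(s * N)`: the square root must be taken of an actual number of events.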