When you're combining measurements with different uncertainties, taking the plain (unweighted) mean is not the right thing to do. (Well, it's good enough if the uncertainties are almost the same.)
The right thing to do is chi-squared analysis, which gives a higher weight to the more accurate measurements. Here's how you do it:
$$\chi^2 = \sum \frac{(\text{observed value} - \text{true value})^2}{\text{(uncertainty associated with that observation)}^2}$$
You numerically choose the "true value" that minimizes $\chi^2$. That's your best guess.
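As a sketch of that minimization (the measurement values here are made up for illustration), note that for this simple case the minimum also has a closed form, the inverse-variance weighted mean, which the numerical answer should reproduce:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical measurements and their uncertainties
y = np.array([5.0, 5.2, 4.9])
sigma = np.array([0.1, 0.3, 0.2])

def chi2(mu):
    """Chi-squared for a candidate 'true value' mu."""
    return np.sum((y - mu) ** 2 / sigma ** 2)

best = minimize_scalar(chi2).x

# Closed-form check: the minimizer is the inverse-variance weighted mean.
w = 1.0 / sigma ** 2
weighted_mean = np.sum(w * y) / np.sum(w)
```

The more accurate measurements (smaller $\sigma$) get proportionally larger weights $1/\sigma^2$, which is exactly the "higher weight to the more accurate measurements" behavior described above.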
Next, use the chi-squared distribution to calculate the p-value (assuming the best guess is right). (The number of degrees of freedom is one less than the number of observations, because you fitted one parameter, the "true value".) This will tell you whether your uncertainties were reasonable or whether you underestimated them. For example, if one measurement is $5.0 \pm 0.1$, and another measurement is $10.0 \pm 0.1$, then you probably underestimated your uncertainties.
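For the $5.0 \pm 0.1$ vs. $10.0 \pm 0.1$ example, a quick check (using scipy's survival function for the chi-squared distribution) shows just how implausible those uncertainties are:

```python
import numpy as np
from scipy.stats import chi2 as chi2_dist

y = np.array([5.0, 10.0])
sigma = np.array([0.1, 0.1])

# Best-fit "true value" (inverse-variance weighted mean) = 7.5 here
w = 1.0 / sigma ** 2
mu = np.sum(w * y) / np.sum(w)

chi2_min = np.sum((y - mu) ** 2 / sigma ** 2)  # 2 * (2.5 / 0.1)^2 = 1250
dof = len(y) - 1                               # one parameter was fitted
p = chi2_dist.sf(chi2_min, dof)                # vanishingly small p-value
```

A p-value this close to zero says the two measurements are wildly inconsistent given the stated uncertainties, which is the signal that they were underestimated.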
If you underestimated your uncertainties -- which is not unusual in practice -- then the right thing to do is to figure out where you went wrong in your uncertainty estimation, and correct the mistake. But there is a lazier alternative too, which is often good enough if the stakes are low: you can scale up all the uncertainties by the same factor until you get a reasonable $\chi^2$ p-value, say 0.5.
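The lazy rescaling has a simple closed form: multiplying every $\sigma$ by a factor $k$ divides $\chi^2$ by $k^2$, so you can solve for the $k$ that lands the scaled $\chi^2$ at the target p-value directly (again with made-up numbers):

```python
import numpy as np
from scipy.stats import chi2 as chi2_dist

y = np.array([5.0, 10.0])
sigma = np.array([0.1, 0.1])

w = 1.0 / sigma ** 2
mu = np.sum(w * y) / np.sum(w)
chi2_min = np.sum((y - mu) ** 2 / sigma ** 2)
dof = len(y) - 1

# Scaling every sigma by k divides chi^2 by k^2; choose k so the scaled
# chi^2 sits at the value whose upper-tail probability is 0.5.
target = chi2_dist.isf(0.5, dof)
k = np.sqrt(chi2_min / target)
sigma_scaled = k * sigma

# Sanity check: the rescaled fit now has p-value 0.5.
p = chi2_dist.sf(chi2_min / k ** 2, dof)
```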
OK, now you have plausible measurement uncertainties, either because your original estimates were reasonable from the beginning or because you scaled them up. Next, you try varying the "true value" until the p-value dips below, say, 5%. This procedure gives you lower-bound and upper-bound error bars on your final best-guess measurement.
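That scan might look like the following sketch (hypothetical data, and a simple grid scan rather than anything clever; assumes the uncertainties are already plausible):

```python
import numpy as np
from scipy.stats import chi2 as chi2_dist

y = np.array([5.0, 5.2, 4.9])
sigma = np.array([0.1, 0.3, 0.2])
dof = len(y) - 1

def pvalue(mu):
    """p-value of the data assuming the true value is mu."""
    return chi2_dist.sf(np.sum((y - mu) ** 2 / sigma ** 2), dof)

w = 1.0 / sigma ** 2
best = np.sum(w * y) / np.sum(w)

# Scan candidate true values; keep those not rejected at the 5% level.
grid = np.linspace(best - 1.0, best + 1.0, 20001)
accepted = grid[np.array([pvalue(mu) for mu in grid]) >= 0.05]
lower, upper = accepted[0], accepted[-1]
# Report: best, with error bars (best - lower) and (upper - best)
```

Note the error bars can come out asymmetric in general, though for this simple one-parameter case with Gaussian errors they are symmetric about the weighted mean.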
I haven't done this in many years, sorry for any mis-remembering. I think it's discussed in Bevington & Robinson.
When you report the uncertainty in your measurement you basically state "this measurement could have been obtained with the underlying values of X in this range".
That is not the same as saying "X can have any of these values". If you actually want to give a confidence interval, you could say something like "there is 95% confidence that X is in the range [0, y]". But in that case, especially with the numbers you give, you might have to deal with the asymmetry of the situation (the interval is no longer $\pm 1.96\sigma$).
I am not aware of a uniform convention for this case. When in doubt use words to clarify - compared to the effort of the measurement, writing a few words to communicate unambiguously is well worth it.
Best Answer
You evaluate the mean and express the uncertainty in the mean as the standard deviation of the mean.
Your estimate of the mean of a sample with $n$ values is $m = {1 \over n} \sum_{j = 1}^{n} y_{j}$ where $y_{j}$ is the $j^{th}$ value in the sample. The standard deviation of the mean for the sample is $S = \sqrt{s^2 \over n} = {s \over \sqrt{n}}$ where $s = \sqrt{{\sum_{j= 1}^{n} (y_{j} - m)^2} \over {n - 1} }$ is the standard deviation of the sample. You report $m \pm S$.
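In code, with a hypothetical set of repeated measurements, those three formulas are:

```python
import math

y = [5.1, 4.9, 5.0, 5.2, 4.8]  # hypothetical repeated measurements
n = len(y)

m = sum(y) / n                                          # sample mean
s = math.sqrt(sum((v - m) ** 2 for v in y) / (n - 1))   # sample std dev
S = s / math.sqrt(n)                                    # std dev of the mean
# Report m +/- S
```

Note the $n - 1$ (Bessel's correction) in the sample standard deviation, since the mean $m$ was itself estimated from the same data.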