Standard Error – Does Upper/Lower Standard Error Make Sense?: Detailed Examination

standard error

In the image below you can see a plot with error bars of different lengths on the upper and lower side of each point. The bars were generated by calculating the standard error once for all points above the mean value and once for those below it.

http://i.stack.imgur.com/cVlJg.png

As information on this "method" is very sparse, I ask myself whether it is correct to use these bars. Or did I just use the wrong term (lower/upper standard error)?
And finally, what is their meaning?

Best Answer

According to your updated question, @onestop's claim still holds: it's not OK to call them standard errors. Furthermore, the method seems strange and non-standard. What was actually done in your case is to split the sample into two (values above and below the mean) and calculate the standard error of each of THOSE subsamples, not of your real sample, and I therefore personally find it strange to assign the lengths of the error bars that way. Apparently the idea was taken from here. However, IMHO, the idea of dividing the sample and calculating an "upper and lower" standard deviation doesn't make much sense (or at least it bothers me).
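To make it concrete, here is a minimal reconstruction of the procedure described above, with hypothetical log-normal data standing in for the values behind one plotted point (the names `upper_half`/`lower_half` and the data are my assumptions, not the OP's actual code):

```python
import numpy as np

rng = np.random.default_rng(2)
sample = rng.lognormal(size=100)  # hypothetical skewed data for one plotted point

m = sample.mean()
upper_half = sample[sample > m]   # values above the mean
lower_half = sample[sample <= m]  # values at or below the mean

# The "upper/lower standard error" as described: the standard error of each
# half, treated as if it were its own sample.
se_upper = upper_half.std(ddof=1) / np.sqrt(upper_half.size)
se_lower = lower_half.std(ddof=1) / np.sqrt(lower_half.size)
```

Note that each "standard error" here describes the sampling variability of a truncated subsample's mean, which is not a quantity about the original sample's location estimate at all.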

In physics (my area and apparently yours), however, it has become somewhat standard to show 68% confidence intervals for the sample median or mean (depending on your choice of location statistic; let's call this statistic $\bar{X}$ for the moment) in the following way for non-symmetric distributions, apparently emulating what would be a central credible interval. With your data points, you calculate $\bar{X}$ and then report an upper error bar of length $L_u$, where $L_u$ is chosen to satisfy $P(\bar{X}<\mu<\bar{X}+L_u)= 0.34$, with $\mu$ the true (unknown) parameter. For your lower error bar of length $L_l$, you repeat the same procedure below the location statistic $\bar{X}$, i.e., you choose $L_l$ to satisfy $P(\bar{X}-L_l<\mu<\bar{X})= 0.34$. Of course, because the distribution of $\bar{X}$ is usually not known, this is typically done with non-parametric methods (such as the bootstrap or some variant of it).
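A bootstrap sketch of the procedure above, using the percentile approximation (the skewed sample data and the function name `asymmetric_errors` are my assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical skewed sample, standing in for the data behind one point.
data = rng.lognormal(mean=0.0, sigma=0.75, size=200)

def asymmetric_errors(sample, n_boot=10_000, stat=np.median, rng=rng):
    """Bootstrap the sampling distribution of `stat` and return
    (point_estimate, lower_bar, upper_bar), each bar covering about
    34% of the bootstrap mass (68% in total)."""
    boot = np.array([stat(rng.choice(sample, size=sample.size, replace=True))
                     for _ in range(n_boot)])
    x_hat = stat(sample)
    # Percentile approximation: P(x_hat - L_l < mu < x_hat) ~= 0.34
    lower = x_hat - np.quantile(boot, 0.16)
    # and P(x_hat < mu < x_hat + L_u) ~= 0.34
    upper = np.quantile(boot, 0.84) - x_hat
    return x_hat, lower, upper

x_hat, L_l, L_u = asymmetric_errors(data)
print(f"median = {x_hat:.3f}  (-{L_l:.3f} / +{L_u:.3f})")
```

For a right-skewed sample like this one, the two bars generally come out with different lengths, which is exactly the asymmetry such plots are meant to convey.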

As was also pointed out by @onestop, you can instead obtain Bayesian credible intervals, where you actually calculate the probability (density, in the continuous case) of your parameter given your data; let's call it $p(x|D)$. The length of the lower error bar is now calculated in a more "natural" way (at least for me), so as to satisfy $P(\hat{x}-L_l<x<\hat{x}|D)=0.34$, and the length of the upper error bar so as to satisfy $P(\hat{x}<x<\hat{x}+L_u|D)=0.34$, where $\hat{x}$ is your point estimate of the parameter (usually the median or even the mode).
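Given posterior samples (e.g. from MCMC), these bars are just posterior quantiles. A minimal sketch, assuming a skewed Gamma posterior as a stand-in for $p(x|D)$ and the posterior median as $\hat{x}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws for the parameter x given data D;
# a right-skewed Gamma stands in for p(x|D).
posterior = rng.gamma(shape=2.0, scale=1.5, size=50_000)

# Point estimate: the posterior median.
x_hat = np.median(posterior)

# With x_hat the median, P(q16 < x < x_hat | D) = 0.34 and
# P(x_hat < x < q84 | D) = 0.34 hold by construction.
L_l = x_hat - np.quantile(posterior, 0.16)
L_u = np.quantile(posterior, 0.84) - x_hat

print(f"x = {x_hat:.2f}  (-{L_l:.2f} / +{L_u:.2f})")
```

Because the Gamma posterior is right-skewed, $L_u$ comes out larger than $L_l$, reproducing the asymmetric bars from the question but with a clear probabilistic meaning.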

All of the above, of course, makes sense only if the distribution of your parameter is unimodal.