Error bars on bar graphs: Is reporting confidence intervals really better than reporting standard errors of the means?

barplot, confidence interval, data visualization, standard error

I have heard this advice repeatedly; however, recently, when I was looking at my own graph with CIs, I had a panic attack because the error bars overlapped, yet my analysis told me the difference between the means was significant. I later learned
here that I had made the incorrect assumption that the CIs couldn't overlap. I don't know how idiosyncratic my error was, or whether there's any reason to suspect that others who see my bar graph might draw the same erroneous conclusion. Do you recommend one over the other? Are CIs really less prone to misinterpretation than SEs?

Best Answer

I don't know whether standard errors or confidence intervals are more liable to misinterpretation, & I suspect there's not much in it. If pairwise differences in parameter estimates are of particular interest, you should report them together with their SEs/CIs, & thus forestall readers' drawing wrong conclusions from overlapping, or non-overlapping, SEs/CIs of individual parameter estimates.
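
As a concrete illustration of that last point, here is a minimal Python sketch with made-up summary statistics (the numbers are purely hypothetical, not from the question): the two individual 95% CIs overlap, yet the 95% CI for the difference excludes zero, because the SE of a difference is sqrt(SE_A^2 + SE_B^2) rather than SE_A + SE_B. Reporting the difference with its own SE/CI is what prevents the misreading.

    import math

    # Hypothetical summary statistics (made-up numbers):
    # two group means with equal standard errors.
    mean_a, se_a = 10.0, 0.5
    mean_b, se_b = 11.6, 0.5
    z = 1.96  # normal quantile for a 95% interval

    ci_a = (mean_a - z * se_a, mean_a + z * se_a)   # (9.02, 10.98)
    ci_b = (mean_b - z * se_b, mean_b + z * se_b)   # (10.62, 12.58) -- overlaps ci_a
    print("CI A:", ci_a)
    print("CI B:", ci_b)

    # The comparison of interest: the difference & *its* SE/CI.
    diff = mean_b - mean_a
    se_diff = math.sqrt(se_a**2 + se_b**2)          # ~0.71, not se_a + se_b = 1.0
    ci_diff = (diff - z * se_diff, diff + z * se_diff)
    print("difference:", diff, "95% CI:", ci_diff)  # (~0.21, ~2.99): excludes 0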

Reporting CIs is usually preferable for estimates whose sampling distribution is highly skewed: reporting SEs is rather an invitation to imagine a corresponding (symmetric) normal confidence distribution around the point estimate; & the intervals implied, as well as having incorrect coverage, will do a poor job of separating parameter values better supported by the observed data from those worse supported. (When the sampling distribution is not skewed, but otherwise not well approximated by the normal, e.g. a Student's t distribution with few degrees of freedom, incorrect coverage is usually the only concern.)
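
To make the skewness point concrete, here is a small simulation sketch (my own illustration, not part of the original answer): when estimating a normal variance from n = 10 observations, the sampling distribution of s^2 is a right-skewed scaled chi-squared, so the symmetric "estimate ± 1.96 × SE" interval a reader might imagine from a reported SE under-covers, while the CI built from the chi-squared distribution has close to nominal coverage.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, sigma2, reps = 10, 1.0, 20000
    z = 1.96

    covered_se = covered_chi2 = 0
    for _ in range(reps):
        x = rng.normal(scale=np.sqrt(sigma2), size=n)
        s2 = x.var(ddof=1)                 # sample variance; its sampling distribution is skewed

        # Symmetric interval a reader might imagine from a reported SE.
        se = s2 * np.sqrt(2 / (n - 1))
        covered_se += (s2 - z * se) <= sigma2 <= (s2 + z * se)

        # CI built from the actual chi-squared sampling distribution of (n-1)s^2 / sigma^2.
        lo = (n - 1) * s2 / stats.chi2.ppf(0.975, df=n - 1)
        hi = (n - 1) * s2 / stats.chi2.ppf(0.025, df=n - 1)
        covered_chi2 += lo <= sigma2 <= hi

    print("symmetric SE-based coverage:", covered_se / reps)    # well below 0.95 here
    print("chi-squared CI coverage:    ", covered_chi2 / reps)  # close to 0.95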
