I am learning about error bars so I can use them in my research. I am confused about the difference between the standard error and the confidence interval. Which one is better for showing statistical significance?
Solved – the difference between standard error and confidence interval in error bars
Tags: standard error, statistical significance
Best Answer
The standard error of an estimate is one standard deviation of the sampling distribution of the estimator of the parameter of interest. A confidence interval's endpoints are quantiles of that same sampling distribution, at least in the frequentist paradigm.
Consider this example in R:
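The original code block did not survive here, so the following is a minimal sketch of the kind of example the answer describes. The variable name `random_normal` comes from the side note below; the seed, sample size, and distribution parameters are assumptions, so the exact numbers will differ from the 10.7324 ± 0.7154 quoted in the text.

```r
# Hypothetical setup: 100 draws from a normal distribution
set.seed(1)
random_normal <- rnorm(100, mean = 10, sd = 7)

# Intercept-only regression: the estimated intercept is the sample mean
fit <- lm(random_normal ~ 1)

summary(fit)   # reports the estimate and its standard error
confint(fit)   # 95% confidence interval (the default level)
```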
Note that the CI (conventionally set to 95% here) is wider than the mean ± standard error range, 10.7324 ± 0.7154. But the two can coincide: if we set the confidence level to 68% instead of 95%, we get virtually the same interval as 10.7324 ± 0.7154:
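A self-contained sketch of that 68% interval, reusing the same assumed setup as above (names and parameters are illustrative, not from the original):

```r
# Same hypothetical data and intercept-only model as before
set.seed(1)
random_normal <- rnorm(100, mean = 10, sd = 7)
fit <- lm(random_normal ~ 1)

# A 68% CI is approximately estimate +/- one standard error,
# because the 0.84 quantile of the t (or normal) distribution is close to 1
confint(fit, level = 0.68)
```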
Side note: regressing with the syntax
random_normal ~ 1
is the same as estimating the mean; it is just a convenient way to obtain the standard error and CI in this particular case. Reporting both won't hurt, in my opinion, but CIs are, in general, more versatile.