Solved – the difference between standard error and confidence interval in error bars

standard error, statistical significance

I am learning about error bars so I can use them in my research. I am confused about the difference between the standard error and the confidence interval. Which one is better for showing statistical significance?

Best Answer

The standard error of an estimate is one standard deviation of the sampling distribution of the estimator of the parameter of interest. A confidence interval is built from quantiles of that sampling distribution, at least in the frequentist paradigm.

Consider this example in R:

# draw 10 observations from a Normal(mean = 10, sd = 2);
# no seed is set, so the exact numbers below vary from run to run
random_normal <- rnorm(n = 10,
                       mean = 10,
                       sd = 2)
# intercept-only regression: the intercept estimate is the sample mean
m <- lm(random_normal ~ 1)
summary(m)

Call:
lm(formula = random_normal ~ 1)

Residuals:
    Min      1Q  Median      3Q     Max 
-2.3769 -1.1994 -0.8571  0.4477  4.0356 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  10.7324     0.7154      15 1.13e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.262 on 9 degrees of freedom
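
For intuition, the Std. Error reported above is just the textbook standard error of the mean, sd / sqrt(n). A quick check (the exact number depends on the random draw, so treat it as illustrative):

# standard error of the mean computed by hand;
# matches the "Std. Error" column of summary(m)
sd(random_normal) / sqrt(length(random_normal))

For the draw shown above this reproduces the 0.7154, up to rounding.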


confint(m)

               2.5 %   97.5 %
(Intercept) 9.114169 12.35064
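
To see how the two are connected, the 95% CI can be rebuilt by hand from the estimate and its standard error using a t quantile with 9 degrees of freedom (a small sketch reusing the fitted model m from above):

est <- summary(m)$coefficients[1, "Estimate"]
se  <- summary(m)$coefficients[1, "Std. Error"]
# 95% CI = estimate +/- t-quantile * SE, with n - 1 = 9 degrees of freedom
est + c(-1, 1) * qt(0.975, df = 9) * se

This reproduces the 9.114169 and 12.35064 from confint(m).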

Please note that the CI (here conventionally set to 95%) is wider than estimate ± one standard error, 10.7324 ± 0.7154. But the two can nearly coincide: if we lower the confidence level to about 68% (67% in the call below) instead of 95%, we get virtually the same interval as 10.7324 ± 0.7154:

confint(m, level = 0.67)

              16.5 %   83.5 %
(Intercept) 9.995764 11.46905
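
The reason a ~68% interval is so close to ± 1 SE is that the corresponding t quantile is close to 1:

# half-width multiplier for a 67% CI with 9 degrees of freedom
qt(1 - (1 - 0.67) / 2, df = 9)

This is about 1.03, so the interval is roughly estimate ± 1 standard error.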

Side note: regressing with the formula random_normal ~ 1 (intercept only) is the same as estimating the mean. It is just a convenient way to quickly obtain the standard error and CI in this particular case.
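
To make that side note concrete, a one-sample t-test on the raw data gives the same estimate and the same 95% CI as the intercept-only model (a quick check, using the random_normal vector from above):

# equivalent to lm(random_normal ~ 1): same mean and same 95% CI
t.test(random_normal)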

In my opinion, reporting both won't hurt. But CIs are, in general, more versatile.
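
For the error bars themselves, here is a minimal base-R sketch (assuming the fitted model m from above) that draws the same point estimate once with ± 1 SE bars and once with 95% CI bars:

est <- coef(m)[1]
se  <- summary(m)$coefficients[1, "Std. Error"]
ci  <- confint(m)[1, ]

# one point plotted twice: left with +/- 1 SE bars, right with 95% CI bars
plot(c(1, 2), rep(est, 2), xlim = c(0.5, 2.5), ylim = c(8, 13),
     xaxt = "n", xlab = "", ylab = "Estimate", pch = 19)
axis(1, at = c(1, 2), labels = c("+/- 1 SE", "95% CI"))
arrows(1, est - se, 1, est + se, angle = 90, code = 3, length = 0.1)
arrows(2, ci[1], 2, ci[2], angle = 90, code = 3, length = 0.1)

The CI bars come out visibly longer, which is exactly the 95% vs. ± 1 SE difference discussed above.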