The mean of a dataset is often represented by the Greek letter $\mu$, and the standard deviation of a dataset is often represented by the Greek letter $\sigma$. But what about the standard error? I've seen authors use SE, se, $\sigma_\bar{x}$, and $s_\bar{x}$. The Wikipedia article on standard error uses both SE and $\sigma_\bar{x}$. Is there a standard or commonly used symbol to refer to the standard error of a set of measurements, like $\mu$ for mean and $\sigma$ for standard deviation?
Descriptive Statistics – Is There a Standard Symbol for Standard Error?
descriptive statistics, notation, standard error
Related Solutions
Note $Var(\hat{\beta}_0) = Var(\bar{y} - \hat{\beta}_1\bar{x}) = Var(\bar{y}) + \bar{x}^2Var(\hat{\beta}_1) - 2Cov(\bar{y},\hat{\beta}_1)$. Try to show that the covariance term is 0.
The fact that $Var(\hat{\mu}) = \dfrac{\sigma^2}{n}$ (although I'm not a fan of the notation used there) is what gives the $Var(\bar{y}) = \dfrac{\sigma^2}{n}$ term in the calculation.
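As a minimal sketch of why the covariance term vanishes (assuming the usual simple-regression setup with fixed $x_i$ and uncorrelated errors of constant variance $\sigma^2$): write the slope as a linear combination of the $y_i$,

$$\hat{\beta}_1 = \sum_i c_i y_i, \qquad c_i = \frac{x_i - \bar{x}}{\sum_j (x_j - \bar{x})^2},$$

so that

$$Cov(\bar{y}, \hat{\beta}_1) = Cov\Big(\tfrac{1}{n}\sum_i y_i,\ \sum_j c_j y_j\Big) = \frac{\sigma^2}{n}\sum_i c_i = 0,$$

since $\sum_i (x_i - \bar{x}) = 0$.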
The question "why is this term used, rather than this other term" is, like much terminology, a matter of historical happenstance. Sometimes the outcome is felicitous and sometimes less so. I think that with a little context it makes reasonable sense in this case.
Yule used it in 1897$^{[1]}$, in the context of a particular phrase that I think makes its intent pretty clear:
"We see that $\sigma_1\sqrt{1 - r^2}$ is the standard error made in estimating $x$"
[This is in turn quoted in the Oxford English Dictionary and is mentioned in the Standard Error entry (by John Aldrich) in $[2]$.]
[An excerpt from Yule (1897) showing the phrase in its surrounding context appeared here; NB the journal is long out of copyright.]
Yule later extended that use to estimating other quantities.
I think "the standard error-made-in-estimating" a quantity is clear enough, and once the origin is clear, the shorthand standard error isn't so obscure.
I'm not sure "uncertainty" would not be subject to similar issues (the technical meaning differing from the ordinary meaning); uncertainty might easily be interpreted as hesitation, for example. Whatever word we use we still have to make the actual technical meaning clear.
Of course, like the term or not, once people start to treat such a term as conventional, like the QWERTY keyboard, it's entrenched; you're pretty much stuck with it.
$[1]$ Yule, G.U. (1897), "On the Theory of Correlation," Journal of the Royal Statistical Society, 60, 812-854.
$[2]$ Miller, J., "Earliest Known Uses of Some of the Words of Mathematics," http://jeff560.tripod.com/s.html (alternate: https://mathshistory.st-andrews.ac.uk/Miller/mathword/s/).
Best Answer
A subscript on a symbol often indicates what the symbol refers to. For example, $\mu_X$ is often used to represent the population mean of the variable $X$, and it would be important to use it to distinguish it from $\mu_Y$, the population mean of the variable $Y$. Usually, a hat (e.g., $\hat \mu _X$) indicates that a quantity is an estimator of the parameter over which the hat is placed (i.e., $\hat \mu _X$ is an estimator of $\mu_X$). (In this case, it happens that the sample mean, $\bar X=n^{-1}\sum_i{X_i}$, is often used for $\hat \mu _X$, but other estimators are possible as well.) When only one variable is being discussed, or the parameter in general is being discussed, you can omit the subscript with the understanding that the symbol refers to what you intend it to.
The standard error is the standard deviation of the distribution of an estimator for a given population under specified sampling conditions. Because it's the standard deviation ($\sigma$) of an estimator (hat) of a parameter (e.g., $\theta$), it makes sense to use $\sigma _{\hat\theta}$. This is the standard notation that I have seen. When $\bar X$ is the chosen estimator, $\sigma_{\bar X}$ could also be used to be more specific. When talking about standard errors broadly, it makes sense to just use the words "standard error" or its common abbreviation, SE. When talking about the standard error of a specific estimator, it makes sense to use its symbol to reduce ambiguity.
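As a concrete illustration of the notation, here is a minimal sketch (Python, with made-up data) computing the usual estimate $\hat\sigma_{\bar X} = s/\sqrt{n}$ of the standard error of the sample mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample of n measurements of a variable X
x = rng.normal(loc=10.0, scale=2.0, size=50)
n = x.size

x_bar = x.mean()            # sample mean, an estimate of mu_X
s = x.std(ddof=1)           # sample standard deviation (unbiased-variance version)
se_hat = s / np.sqrt(n)     # estimate of sigma_{x_bar}, the standard error of the mean

print(f"x_bar = {x_bar:.3f}, s = {s:.3f}, SE(x_bar) = {se_hat:.3f}")
```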
Note that in data applications, we often deal with estimates from an estimator of the standard error, i.e., $\hat \sigma _{\hat \theta}$, which itself has a standard error because it is an estimator and its estimates vary from sample to sample. We might denote that standard error as $\sigma _{\hat \sigma _{\hat \theta}}$. This might be relevant if you are comparing multiple estimators of the standard error and you want the one that is the most precise, i.e., that itself has a low standard error. For example, the maximum likelihood, unbiased least squares, and HC0 sandwich standard errors are all estimators of the standard error of a regression slope, but the unbiased least squares estimator tends to have the lowest standard error (i.e., is the most precise estimator of the true standard error of the least-squares estimator of the regression slope).
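A small simulation can make this concrete. The sketch below (Python, using only numpy; the setup and parameter values are made up for illustration) repeatedly simulates a simple homoskedastic regression, computes three estimates of the slope's standard error in each sample (classical unbiased, maximum likelihood, and HC0 sandwich), and then reports how much each SE estimator varies across samples, i.e., a Monte Carlo estimate of $\sigma_{\hat \sigma _{\hat \theta}}$ for each:

```python
import numpy as np

rng = np.random.default_rng(1)

n, reps = 40, 2000
beta0, beta1, sigma = 1.0, 2.0, 1.0
x = rng.uniform(0.0, 10.0, size=n)   # fixed design, reused in every replication
x_bar = x.mean()
sxx = np.sum((x - x_bar) ** 2)

se_cls, se_ml, se_hc0 = [], [], []
for _ in range(reps):
    y = beta0 + beta1 * x + rng.normal(scale=sigma, size=n)   # homoskedastic errors
    b1 = np.sum((x - x_bar) * (y - y.mean())) / sxx           # OLS slope
    b0 = y.mean() - b1 * x_bar
    e = y - (b0 + b1 * x)                                     # residuals
    rss = np.sum(e ** 2)
    se_cls.append(np.sqrt(rss / (n - 2) / sxx))               # classical (unbiased-variance) SE
    se_ml.append(np.sqrt(rss / n / sxx))                      # maximum-likelihood SE
    se_hc0.append(np.sqrt(np.sum((x - x_bar) ** 2 * e ** 2)) / sxx)  # HC0 sandwich SE

true_se = sigma / np.sqrt(sxx)   # sigma_{beta1_hat} under this fixed design
print(f"true SE of slope: {true_se:.4f}")
for name, se in [("classical", se_cls), ("ML", se_ml), ("HC0", se_hc0)]:
    se = np.asarray(se)
    print(f"{name:9s} mean = {se.mean():.4f}  sd across samples = {se.std(ddof=1):.4f}")
```

The "sd across samples" column is each estimator's own estimated standard error, which is the quantity you would compare when deciding which SE estimator is the most precise under the sampling conditions you care about.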