Nothing requires the standard deviation to be smaller or larger than the mean. Given a set of data, you can keep the mean the same but change the standard deviation to an arbitrary degree by adding/subtracting positive numbers *appropriately*.

Take @whuber's example dataset from his comment to the question: {2, 2, 2, 202}. As he states, the mean is 52 and the (sample) standard deviation is 100.

Now perturb the data: add 20 to each of the first three elements and subtract 60 from the last, giving {22, 22, 22, 142}. The mean is still 52, but the standard deviation is now 60.
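A quick numerical check of both datasets (a minimal numpy sketch; the variable names are my own):

```python
import numpy as np

# @whuber's dataset and the perturbed version from above
original = np.array([2, 2, 2, 202])
perturbed = np.array([22, 22, 22, 142])  # +20, +20, +20, -60: the shifts sum to zero

# ddof=1 gives the sample standard deviation (n - 1 denominator)
print(original.mean(), original.std(ddof=1))    # 52.0 100.0
print(perturbed.mean(), perturbed.std(ddof=1))  # 52.0 60.0
```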

Think of the differences like any other statistic you are collecting. They are just values that you have recorded. You calculate their mean and standard deviation to understand how they are spread (for example, in relation to 0) in a unit-independent fashion.

The usefulness of the SD lies in its popularity: if you tell me your mean and SD, I understand the data better than if you tell me the results of a TOST (two one-sided tests) procedure that I would first have to look up.

Also, I'm not sure how the difference and its SD relate to a correlation coefficient (I assume you mean the correlation between the two variables for which you also calculate the pairwise differences). These are two very different things: you can have no correlation but a significant mean difference (MD), or vice versa, or both, or neither.
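To illustrate that these are independent properties, here is a small simulation (entirely made-up data, just to show that the two cases can occur separately):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Case 1: no correlation, but a clear mean difference
a = rng.normal(0.0, 1.0, n)
b = rng.normal(5.0, 1.0, n)              # independent of a, shifted by 5
print(np.corrcoef(a, b)[0, 1], (a - b).mean())   # ~0 correlation, MD ~ -5

# Case 2: strong correlation, but no mean difference
b2 = a + rng.normal(0.0, 0.1, n)         # b2 tracks a closely
print(np.corrcoef(a, b2)[0, 1], (a - b2).mean()) # ~1 correlation, MD ~ 0
```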

By the way, do you mean the standard deviation of the mean difference or the standard deviation of the difference?

**Update**

OK, so what is the difference between the SD of the difference and the SD of the mean difference?

The former tells you how the measurements are spread; it is an estimator of the SD of the differences in the population. That is, when you take a single measurement on A and on B, how much will the difference $A-B$ vary around its mean?

The latter tells you how well you were able to estimate the mean difference between the machines. This is why the "standard deviation of the mean" is often referred to as the "standard error of the mean". It depends on how many measurements you have performed: since you divide by $\sqrt{n}$, the more measurements you have, the smaller the SD of the mean difference will be.

The SD of the difference answers the question "How much does the discrepancy between A and B actually vary between measurements?"

The SD of the mean difference answers the question "How confident are you about the mean difference you have measured?" (Then again, I think confidence intervals would be more appropriate for that.)
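In code, the distinction is just a division by $\sqrt{n}$ (a sketch with simulated paired measurements; the machine model is invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Hypothetical paired measurements from machines A and B
a = rng.normal(10.0, 1.0, n)
b = a + rng.normal(0.5, 0.8, n)    # B reads ~0.5 higher, plus extra noise
d = a - b                          # paired differences

sd_diff = d.std(ddof=1)            # spread of a single difference
se_mean = sd_diff / np.sqrt(n)     # precision of the estimated mean difference
print(d.mean(), sd_diff, se_mean)  # roughly -0.5, 0.8, 0.11
```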

So, depending on the context of your work, the latter might be more relevant for the reader. "Oh," the reviewer thinks, "they found that the difference between A and B is x. Are they sure about that? What is the SD of the mean difference?"

There is also a second reason to include this value. If reporting a certain statistic is common in your field, it is dumb *not* to report it, because omitting it makes the reviewer wonder whether you are hiding something. But you are free to comment on how useful you think the value is.

## Best Answer

You have to assume that the mean and variance exist, but once you do, $\bar X$ is always an unbiased estimator of the mean, and $s^2$ (the sample variance with the $n-1$ denominator) is always an unbiased estimator of the variance. You don't need any parametric assumptions for those to hold: they are unbiased for normal, Poisson, binomial, exponential, ... distributions.

**Edit:** Note that, even though $s^2$ is an unbiased estimator of $\sigma^2$, $s$ is (amazingly) a biased estimator of $\sigma$: by Jensen's inequality, since the square root is strictly concave, $\mathbb E[s] < \sqrt{\mathbb E[s^2]} = \sigma$ whenever $s$ is not constant.
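A quick Monte Carlo check of both claims (a sketch using normal samples and a deliberately small $n$, where the bias of $s$ is easiest to see):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 2.0                        # true SD, so true variance is 4.0
reps, n = 100_000, 5               # many small samples make the bias visible

samples = rng.normal(0.0, sigma, size=(reps, n))
s2 = samples.var(axis=1, ddof=1)   # sample variance of each replicate
s = np.sqrt(s2)                    # sample SD of each replicate

print(s2.mean())  # ~4.0: E[s^2] = sigma^2, unbiased
print(s.mean())   # ~1.88 < 2.0: E[s] < sigma, biased downward
```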

**Edit 2:** To correct a comment from a few hours ago: an unbiased estimator $\hat\theta$ of a parameter $\theta$ has the property that $\mathbb E[\hat\theta]=\theta$; *bias* is a technical term in statistics that refers to $\mathbb E[\hat\theta-\theta] = \mathbb E[\hat\theta]-\theta$.

The Wikipedia article on bias includes some additional technical details.