Any set on which you can define a 'distance' function satisfying a few properties (distances are positive, symmetric, and obey the triangle inequality) is called a metric space. $\mathbb{R}^k$ is a metric space with the distance function typically defined as $d(\mathbf{x},\mathbf{y}) = |\mathbf{x}-\mathbf{y}|$, the norm of the difference (although we can use whatever distance function we want as long as it satisfies those three properties, more on that later).
The norm is defined to be $|\mathbf{x}| = \sqrt{\sum_{i=1}^n x_i^2}$. That might look strangely familiar. If you have some observed values $\mathbf{x}=(x_1,\ldots,x_n)$ and you find the distance between your observed values and their mean $\mu$ (treated as the constant vector $(\mu,\ldots,\mu)$), you get $d(\mathbf{x},\mu) = |\mathbf{x}-\mu| = \sqrt{\sum_{i=1}^n (x_i-\mu)^2}$, which is almost the standard deviation (it's missing a factor of $1/n$ or $1/(n-1)$ under the square root). However, we can easily redefine our distance function to be something like $d(\mathbf{x},\mathbf{y}) = \sqrt{1/n}\,|\mathbf{x}-\mathbf{y}|$, and it will still have the three properties required to make $\mathbb{R}^k$ a metric space.
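To see the connection concretely, here is a small sketch (not from the original answer; the sample values are an arbitrary illustration) showing that the rescaled distance to the mean equals the population standard deviation:

```python
import math

x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # arbitrary sample
n = len(x)
mu = sum(x) / n

# Euclidean distance between x and the constant vector (mu, ..., mu)
dist = math.sqrt(sum((xi - mu) ** 2 for xi in x))

# rescaled distance d(x, mu) = sqrt(1/n) * |x - mu|
scaled_dist = math.sqrt(1 / n) * dist

# population standard deviation (1/n in the denominator)
pop_sd = math.sqrt(sum((xi - mu) ** 2 for xi in x) / n)

print(scaled_dist, pop_sd)   # the two agree: both equal 2.0 (up to floating point)
```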
You might be more familiar with distances in a 2-dimensional space like $\mathbb{R}^2$. In this space we can use the same distance function as above, but since we have only 2 components instead of $k$, the formula simplifies to $d((x_1,y_1), (x_2,y_2)) = \sqrt{(x_1-x_2)^2+(y_1-y_2)^2}$.
The factor of $\sqrt{\frac{2}{\pi}}$ comes from assuming a normal distribution, for which the mean deviation is $\sigma\sqrt{\frac{2}{\pi}}$.
If that were a good value to use, it would mean that to compute the sd from the md in large samples you'd multiply by $\sqrt{\frac{\pi}{2}}$.
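As an illustration of that large-sample relationship (a sketch of mine, using an arbitrary simulated normal sample, not anything from the original answer), md times $\sqrt{\pi/2}$ lands very close to the sample sd:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=3.0, size=1_000_000)  # large normal sample

md = np.mean(np.abs(x - x.mean()))   # mean (absolute) deviation
sd = x.std(ddof=1)                   # sample standard deviation

print(md * np.sqrt(np.pi / 2))       # ~3.0, very close to sd
print(sd)                            # ~3.0
```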
If the data are not close to normal, using that scale factor may not yield a suitable estimate of sample standard deviation.
Considered as features of a sample, the two statistics respond differently to large and small deviations, so in some samples the ratio of mean deviation (md) to sd may be very close to 1, while in other samples it may be far from it. [I use md for mean deviation because MAD is often used to stand for median absolute deviation from the median.] Two examples:
i) consider a sample of 1000 0's and 1000 1's. md/sd $\approx$ 1
ii) consider a sample of one "0", one "1", and 998 "$\frac{1}{2}$"s. md/sd $\approx$ 0.0447
If you were in case (i) and multiplied md by $\sqrt{\frac{\pi}{2}}$ you'd get a number that was about 25% too big. If you were in case (ii) and multiplied md by $\sqrt{\frac{\pi}{2}}$ you'd get a number that was only about 5.6% as big as it should be.
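Both cases are easy to reproduce; here's a sketch of mine (not the answerer's) that computes md, sd, their ratio, and what multiplying md by $\sqrt{\pi/2}$ would give relative to sd:

```python
import numpy as np

def md_vs_sd(x):
    x = np.asarray(x, dtype=float)
    md = np.mean(np.abs(x - x.mean()))
    sd = x.std(ddof=1)
    return md, sd, md / sd, md * np.sqrt(np.pi / 2) / sd

# case (i): 1000 zeros and 1000 ones
print(md_vs_sd([0] * 1000 + [1] * 1000))
# -> md/sd ~ 1.0, and md*sqrt(pi/2) is about 1.25*sd (roughly 25% too big)

# case (ii): one 0, one 1, and 998 values of 1/2
print(md_vs_sd([0, 1] + [0.5] * 998))
# -> md/sd ~ 0.0447, and md*sqrt(pi/2) is only about 5.6% of sd
```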
Mean deviation won't exceed the standard deviation, but in some cases it can be quite a lot smaller. In particular, if the tails are heavier than normal, md/sd may be a good deal smaller than in the normal case.
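As one illustration (mine, not the answerer's): for a Laplace (double-exponential) distribution, md/sd is exactly $1/\sqrt{2} \approx 0.707$ rather than the normal value $\sqrt{2/\pi} \approx 0.798$, and a quick simulation agrees:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.laplace(loc=0.0, scale=1.0, size=1_000_000)  # heavier tails than normal

md = np.mean(np.abs(x - x.mean()))
sd = x.std(ddof=1)

print(md / sd)                 # ~0.707, i.e. 1/sqrt(2)
print(np.sqrt(2 / np.pi))      # ~0.798, the value for normal data
```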
If you also have other information besides the mean deviation, you might be able to approximate the standard deviation a little better.
Best Answer
As pointed out by @Gschneider, it computes the sample standard deviation
$$\sqrt{\frac{\sum\limits_{i=1}^{n} (x_i - \bar{x})^2}{n-1}}$$
which you can easily check as follows:
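As one way to check it (a sketch of mine, assuming the routine in question uses the same $n-1$ formula as NumPy's `np.std` with `ddof=1`; the sample values are arbitrary), compare a direct computation with the library call:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # arbitrary sample
n = len(x)

# direct computation of the n-1 ("sample") formula
direct = np.sqrt(np.sum((x - x.mean()) ** 2) / (n - 1))

# library version: ddof=1 puts n-1 in the denominator
library = np.std(x, ddof=1)

print(direct, library)   # both ~2.138, so they agree
```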