There is. Your alternative formulation of taking the absolute values of the differences instead of squaring them is called the mean absolute deviation (or average absolute deviation).
Both the mean absolute deviation and the standard deviation are used in practice, but much of the reason the standard deviation is more widely used is that it has nicer theoretical properties. For example, the mean and standard deviation are enough to specify which member of the family of normal distributions you are dealing with (edit: although this is convention, as Robert Israel notes in his comment below), and data values $x$ from a normal distribution with mean $\mu$ and standard deviation $\sigma$ can be transformed to data values $z$ from the standard normal distribution via $z = (x - \mu)/\sigma$. Another advantage of the standard deviation, as Robert Israel notes below, is that there is a simple formula for the standard deviation of the sum of independent random variables. (See also the paper referenced below for more on why we use the standard deviation, as well as some arguments in favor of the mean absolute deviation.)
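The transformation $z = (x - \mu)/\sigma$ can be checked numerically. A minimal sketch with made-up numbers (the data values are arbitrary, chosen only for illustration), using only Python's standard library:

```python
import statistics

# Hypothetical sample data, for illustration only.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

mu = statistics.mean(data)                         # mean
sigma = statistics.pstdev(data)                    # (population) standard deviation
mad = statistics.mean(abs(x - mu) for x in data)   # mean absolute deviation

# Standardize: z = (x - mu) / sigma
z = [(x - mu) / sigma for x in data]

# The standardized values have mean 0 and standard deviation 1.
print(mu, sigma, mad)
print(statistics.mean(z), statistics.pstdev(z))
```

Note that for this data set the mean absolute deviation (1.5) and the standard deviation (2.0) disagree, which is typical: the two measures coincide only in special cases.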
For an answer to your second question, see my answer to "Sample Standard Deviation vs. Population Standard Deviation." In short, if you were calculating the standard deviation of a population rather than a sample, you would divide by the population size $n$. However, when you calculate the standard deviation of a sample, you have to estimate the population mean that would normally be in the formula with the sample mean. Doing so introduces a bias, as the data values tend to be slightly closer to the sample mean than to the population mean (as the sample mean is itself calculated from the data values). It turns out that dividing by $n-1$ rather than $n$ corrects that bias. (Proving that is a standard exercise in beginning statistical theory.)
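The bias correction can also be seen empirically. The following is a quick simulation (not a proof): draw many small samples from a standard normal population, whose true variance is 1, and compare the average of the two variance estimates. The sample size and number of trials are arbitrary choices.

```python
import random
import statistics

random.seed(0)
n, trials = 5, 20000   # small samples make the bias easy to see

biased, unbiased = 0.0, 0.0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    xbar = statistics.mean(sample)
    ss = sum((x - xbar) ** 2 for x in sample)
    biased += ss / n          # divide by n: underestimates on average
    unbiased += ss / (n - 1)  # divide by n-1: unbiased

print(biased / trials)    # noticeably below the true variance of 1.0
print(unbiased / trials)  # close to 1.0
```

The average of the $n$-denominator estimate comes out near $(n-1)/n = 0.8$, exactly the shrinkage factor the $n-1$ correction undoes.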
Going back to your first question, I recently ran across the paper "Revisiting a 90-year-old debate: the advantages of the mean deviation," by Stephen Gorard. The paper is worth reading in full, but let me summarize some of his main points.
Reasons for the standard deviation:
- It tends to have a smaller error, on average, when used to estimate the standard deviation of a population, and so is a more consistent estimator.
- The mean absolute deviation is much more difficult to manipulate algebraically. This makes developing more sophisticated analyses based on it more difficult.
- It's part of the definition of the widely-used normal distribution.
- Historical: Ronald Fisher, one of the leading figures in the development of statistics, championed its use.
Reasons for the mean absolute deviation:
- By squaring the differences, the standard deviation distorts the amount of dispersion in a data set.
- The mean absolute deviation tends to work better in the presence of errors in our data observations.
- The mean absolute deviation is less sensitive to outliers in the data (also because of the squaring in the standard deviation).
- It's simpler to understand if all you want is a quick measure of dispersion.
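The outlier-sensitivity point above is easy to demonstrate. In this sketch the data are made up for illustration: the same values with and without a single large outlier appended.

```python
import statistics

def mean_abs_dev(xs):
    """Mean absolute deviation about the mean."""
    m = statistics.mean(xs)
    return statistics.mean(abs(x - m) for x in xs)

# Toy data: identical except for one outlier.
clean = [10.0, 11.0, 12.0, 13.0, 14.0]
with_outlier = clean + [40.0]

print(statistics.pstdev(clean), mean_abs_dev(clean))
print(statistics.pstdev(with_outlier), mean_abs_dev(with_outlier))
# The single outlier inflates the standard deviation proportionally more
# than the mean absolute deviation, because its deviation gets squared.
```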
The standard deviation is only a measure of the dispersion of the data within each of your 3 samples. All three samples will have the same standard deviation if they are assumed identical.
To take the precision of measurement into consideration, you have to calculate the standard error, which is basically the standard deviation divided by $\sqrt{n}$, where $n$ is the number of measurements you just used to calculate the mean.
Adding more measurements therefore decreases the standard error.
This standard error (SE) can then be used to calculate a confidence interval, usually via a normal approximation saying that the "true" mean of the overall population has a 95% probability of lying in the interval $[\text{mean} - 1.96\,\text{SE},\ \text{mean} + 1.96\,\text{SE}]$.
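Putting these steps together, here is a minimal sketch with hypothetical measurements (the values are made up for illustration), using only Python's standard library:

```python
import math
import statistics

# Hypothetical repeated measurements of the same quantity.
measurements = [9.8, 10.1, 10.0, 9.9, 10.3, 10.2, 9.7, 10.0]

n = len(measurements)
mean = statistics.mean(measurements)
sd = statistics.stdev(measurements)   # sample standard deviation (n-1 denominator)
se = sd / math.sqrt(n)                # standard error of the mean

# Normal-approximation 95% confidence interval for the true mean.
ci = (mean - 1.96 * se, mean + 1.96 * se)
print(mean, se, ci)
```

The interval shrinks as $n$ grows, even though the standard deviation itself stabilizes around a fixed value.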
EDIT: here is a great article on the distinction between the standard error and the standard deviation, by Douglas Altman in the BMJ Statistics Notes:
http://www.bmj.com/content/331/7521/903.full.pdf+html
Best Answer
If your data is a sample, you should use the sample variance
$s^2=\frac{1}{n-1}\sum_{i=1}^n(x_i-\bar{x})^2$
as it is an unbiased estimator of the population variance (note the $n-1$ in the denominator). The standard deviation is then given by $s=\sqrt{s^2}$. As Alyosha mentioned, do not take the absolute values; use the actual values of $x$ given by the experiment.
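As a quick check of the formula, Python's `statistics` module exposes both denominator conventions directly; the data values below are arbitrary, chosen only so the arithmetic is clean:

```python
import statistics

x = [4.0, 7.0, 13.0, 16.0]

xbar = statistics.mean(x)
s2 = sum((xi - xbar) ** 2 for xi in x) / (len(x) - 1)  # divide by n-1

# statistics.variance uses n-1 (sample); statistics.pvariance uses n (population).
print(s2, statistics.variance(x), statistics.pvariance(x))
```

Here the hand-computed $s^2$ matches `statistics.variance`, while `statistics.pvariance` gives the smaller, $n$-denominator value.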