The "degrees of freedom" explanation of using $n-1$ for the sample standard deviation is close to hand-waving.
The use of $n$ in calculating the population variance, and hence the population standard deviation, comes from the definition of variance for a finite set of equally probable outcomes. It is consistent with the definition for discrete distributions whose points have different probabilities (so that there is no $n$), and with continuous distributions, which have densities rather than probabilities.
Take for example the set of equally probable values $(1,3,3,9)$. This has mean 4, variance 9 and standard deviation 3. So too does the set of equally probable values $(1,1,3,3,3,3,9,9)$. And so does the distribution which is $1$ with probability $\frac{1}{4}$, $3$ with probability $\frac{1}{2}$, and $9$ with probability $\frac{1}{4}$. This consistency is helpful.
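Spelled out for the first set, using the population (divide-by-$n$) definition:
$$\mu = \frac{1+3+3+9}{4} = 4, \qquad \sigma^2 = \frac{(1-4)^2+(3-4)^2+(3-4)^2+(9-4)^2}{4} = \frac{9+1+1+25}{4} = 9.$$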
So why use $n-1$ as the denominator for sample statistics? The reason is bias. Suppose we take a sample (with replacement) of size $n$ from any of these three distributions. Taking the sum of the sample values and dividing by $n$ (the sample mean) gives us an estimate of the population mean, and while the sample mean will often not be 4, its expected value is 4; so it is an unbiased estimator.
Trying the same approach to estimate the population variance, by taking the sum of squares of the differences between the sample values and the sample mean and then dividing by $n$, gives something with expected value $9(n-1)/n$, which is slightly less than $9$; so it is a biased estimator of the population variance. It becomes unbiased if multiplied by $n/(n-1)$, which is the equivalent of using $n-1$ in the denominator. So if an unbiased estimator of the variance is important to you, then this is what you do.
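Both claims are easy to check by simulation. Here is a minimal sketch using NumPy (the sample size $n=4$ and the replication count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
population = np.array([1, 3, 3, 9])  # population mean 4, variance 9
n, reps = 4, 200_000

# Draw many samples of size n, with replacement.
samples = rng.choice(population, size=(reps, n), replace=True)
sample_means = samples.mean(axis=1)
ss = ((samples - sample_means[:, None]) ** 2).sum(axis=1)

print(sample_means.mean())  # ~ 4: the sample mean is unbiased
print(ss.mean() / n)        # ~ 9 * (n - 1) / n = 6.75: biased low
print(ss.mean() / (n - 1))  # ~ 9: dividing by n - 1 removes the bias
```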
You may have other considerations, in which case you can choose a different estimator of the variance. It is important to note that even if your estimator of the variance is unbiased, its square root is typically not an unbiased estimator of the standard deviation.
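This, too, shows up in a simulation along the same lines (a sketch; for this population the standard deviation is 3):

```python
import numpy as np

rng = np.random.default_rng(1)
population = np.array([1, 3, 3, 9])  # standard deviation 3
samples = rng.choice(population, size=(200_000, 4), replace=True)

var_unbiased = samples.var(axis=1, ddof=1)  # divide-by-(n-1) estimates
print(var_unbiased.mean())           # ~ 9: unbiased for the variance
print(np.sqrt(var_unbiased).mean())  # < 3: biased low for the sd
```

The square root is concave, so by Jensen's inequality the expected value of the estimated standard deviation falls below $\sigma$ whenever the variance estimate itself varies.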
In the context of your example: does it really make a difference which you use? If the standard deviation is $s$, the standard error is just $s/\sqrt{n}$. This is a linear transformation, so for the purpose of comparison it makes no difference which you use.
Now consider the purpose of making a statistical test. If you calculate a group of sample means, all independent and identically distributed, then that sample of sample means would have standard deviation given by $s/\sqrt{n}$.
It is intuitive that a sample mean gives more information about the data, so $s/\sqrt{n} < s$; that is to say, the variability in the sample of sample means is less than the variability in the individual sample.
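A short simulation illustrates this shrinkage (a sketch, assuming a normal population with $\sigma = 3$ and $n = 25$, both arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 25, 100_000
sigma = 3.0

data = rng.normal(loc=4.0, scale=sigma, size=(reps, n))
sample_means = data.mean(axis=1)

print(sample_means.std())  # ~ 0.6: spread of the sample means
print(sigma / np.sqrt(n))  # 0.6 = sigma / sqrt(n)
```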
TL;DR
- Use $s$ for the sake of comparison.
- For statistical tests, use $s/\sqrt{n}$ when, say, finding the distribution or a confidence interval for sample means.
- For statistical tests, use $s$ when, say, finding the distribution or a confidence interval for individual observations.
Best Answer
It obviously depends on the distribution, but if we assume that the distribution at hand is fairly normal, the full width at half maximum (FWHM) is easy to eyeball, and as stated in the given link, it relates to the standard deviation $\sigma$ as $$\mathrm{FWHM} = 2\sqrt{2\ln 2}\,\sigma \approx 2.36\sigma$$ for a normal distribution.
Edit: Let's try to apply this to your distribution. I'd say that the maximum of your distribution is around 0.08, so the half maximum is 0.04. Now all we need to figure out is the width at that height, which I'd say is approximately 10. Using the formula above, we find that $$\sigma \approx \frac{10}{2.36} \approx 4.24.$$
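To see that this eyeballing recipe behaves sensibly, here is a sketch that recovers $\sigma$ from the empirical FWHM of simulated normal data (the true $\sigma = 4.24$ simply echoes the estimate above and is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_true = 4.24
data = rng.normal(loc=0.0, scale=sigma_true, size=100_000)

# Empirical density via a histogram.
heights, edges = np.histogram(data, bins=200, density=True)
centers = (edges[:-1] + edges[1:]) / 2

# Width of the region where the density exceeds half its maximum.
above = centers[heights >= heights.max() / 2]
fwhm = above.max() - above.min()

print(fwhm / 2.36)  # ~ 4.24, recovering sigma_true
```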