Step one is to find the mean. The mean is not the plain average of $(a, b, c)$, because you have to weight the more accurate measurements more heavily. That is done as follows:
$$ d \equiv \langle d\rangle= \frac{\frac{a}{\Delta a^2}+\frac{b}{\Delta b^2}+\frac{c}{\Delta c^2}}{\frac{1}{\Delta a^2}+\frac{1}{\Delta b^2}+\frac{1}{\Delta c^2}}$$
Now if any $\Delta = 0$, this formula fails. If only one error is zero, then its associated value is the correct answer, and the other measurements are garbage. If two errors are zero and their associated measurements disagree, then you don't understand your uncertainties at all. If all three are zero, then set them all to 1, as your error analysis isn't working.
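To make the weighting concrete, here is a minimal Python sketch of the inverse-variance weighted mean; the measurement values and uncertainties are hypothetical, not from the text above.

```python
def weighted_mean(values, errors):
    """Inverse-variance weighted mean; assumes every error is nonzero."""
    weights = [1.0 / e**2 for e in errors]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Hypothetical measurements (a, b, c) with uncertainties (da, db, dc):
values = [10.1, 9.8, 10.5]
errors = [0.2, 0.1, 0.5]
print(weighted_mean(values, errors))  # pulled toward b, the most precise value
```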
To calculate the standard deviation, you start by computing the second moment, which is the weighted average of the squares:
$$ d_2 \equiv \langle d^2\rangle= \frac{\frac{a^2}{\Delta a^2}+\frac{b^2}{\Delta b^2}+\frac{c^2}{\Delta c^2}}{\frac{1}{\Delta a^2}+\frac{1}{\Delta b^2}+\frac{1}{\Delta c^2}}$$
From here, the weighted standard deviation, $\sigma$, is computed in the usual fashion:
$$ \sigma^2 = d_2 - d^2 $$
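Continuing the sketch, the weighted second moment and $\sigma$ follow the same pattern (same hypothetical numbers as before):

```python
def weighted_sigma(values, errors):
    """Weighted standard deviation via sigma^2 = <d^2> - <d>^2."""
    weights = [1.0 / e**2 for e in errors]
    d = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    d2 = sum(w * v**2 for w, v in zip(weights, values)) / sum(weights)
    return (d2 - d**2) ** 0.5

print(weighted_sigma([10.1, 9.8, 10.5], [0.2, 0.1, 0.5]))
```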
For unweighted data, the error on the mean (the standard error) is computed from the standard deviation via:
$$ \mathrm{stderr} = \frac{\sigma}{\sqrt{N}} $$
where $N$ is the number of measurements. However, you don't have $N = 3$ measurements here, because they are weighted. For instance, if $\Delta a = \Delta b = 1$ and $\Delta c = 1{,}000{,}000$, the third measurement is useless: you effectively have two good measurements. You can verify with the formulae above that the measurement of $c$ contributes nothing.
So, you have to replace $N$ with the effective number of measurements:
$$ N_{eff}= \frac{(\frac{1}{\Delta a^2}+\frac{1}{\Delta b^2}+\frac{1}{\Delta c^2})^2}{(\frac{1}{\Delta a^2})^2+(\frac{1}{\Delta b^2})^2+(\frac{1}{\Delta c^2})^2}$$
You can verify that if all errors are equal, this reduces to $3$.
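Here is a short sketch of $N_{eff}$ (this formula is sometimes called Kish's effective sample size) and the resulting standard error, reusing `weighted_sigma` from above; the numbers are again hypothetical:

```python
def n_eff(errors):
    """Effective N: (sum of weights)^2 / (sum of squared weights)."""
    weights = [1.0 / e**2 for e in errors]
    return sum(weights) ** 2 / sum(w**2 for w in weights)

print(n_eff([1.0, 1.0, 1.0]))   # 3.0 exactly: equal errors count fully
print(n_eff([1.0, 1.0, 1e6]))   # ~2.0: the huge-error measurement drops out
print(n_eff([0.2, 0.1, 0.5]))   # between 1 and 3 for unequal errors

# Standard error of the weighted mean:
# stderr = weighted_sigma(values, errors) / n_eff(errors) ** 0.5
```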
"Do physicists just use the word standard deviation to refer to uncertainty?"
Often we assume that the results of our measurements are normally distributed. We can argue for that as follows: if we don't know the reason for the deviation from the "real" value, then it is most likely due to many factors, and if many arbitrarily distributed factors influence a variable, then that variable follows a normal distribution (the central limit theorem). We can then use some measure of the width of the normal distribution as our uncertainty, e.g. the standard deviation. But you are essentially free to choose what you use: one sigma might be fine in one case, while multiples of sigma are often used. You might also know that whatever you are measuring is in fact not normally distributed, in which case you would have to choose some other measure of uncertainty.

So when it comes to uncertainties, there is no one-size-fits-all solution. However, Gaussian error propagation based on standard deviations is the go-to approach if there are no reasons against it, and in that case "uncertainty" and some multiple of sigma are the same thing.
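As a quick numerical illustration of that central-limit argument, here is a sketch in which many independent, decidedly non-Gaussian perturbations sum to something close to a Gaussian (the uniform factors and counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# Each "measurement error" is the sum of 50 independent uniform factors.
perturbations = rng.uniform(-1, 1, size=(100_000, 50)).sum(axis=1)
print(perturbations.mean())  # near 0
print(perturbations.std())   # near sqrt(50/3), since Var(U(-1,1)) = 1/3
# A histogram of `perturbations` looks very close to a Gaussian.
```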
Now to the question of what values to put in for the sigmas. Let me mention that $\sqrt{\frac{1}{n-1}\sum_i\left(x_i - \bar{x}\right)^2}$ is not the standard deviation but an estimator of the "real" standard deviation of the distribution, and that estimator itself has an uncertainty (if it were the real value of the standard deviation, the formula would give the same result for every sample). So "why don't we plug in the standard deviations of the distributions"? Because you might have a better guess for the standard deviation than the estimator above.
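To see that the estimator is itself a random quantity, here is a small sketch: repeated samples from the same distribution with a fixed true $\sigma$ give a different estimate each time (the sample size and $\sigma$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
true_sigma = 1.0
for _ in range(5):
    sample = rng.normal(0.0, true_sigma, size=10)
    # ddof=1 gives the (n-1)-denominator estimator from the formula above
    print(sample.std(ddof=1))
```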
"Wouldn't this mean that you could manipulated the standard deviation σ just by what values you choose for your uncertainties." Yes, you can. Usually you would have to describe in detail why you chose some measure of uncertainty and others might be critical of your choice and contest your results because of that.
Best Answer
Regarding your first question, about RMS error:
Say the true value of $X$ is $\bar{X}$, and you measured $X_i$ (which on average should be $\bar{X}$).
The measurement error would be: $X_i - \bar{X}$.
The mean of the square of the errors would be $\langle (X_i - \bar{X})^2 \rangle $ which is exactly the variance.
The root of the mean of the squares is the square root of the variance, i.e., the standard deviation.
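A tiny numeric check of that chain, with the sample mean standing in for the true $\bar{X}$ (the data are made up):

```python
import numpy as np

x = np.array([9.9, 10.2, 10.0, 9.8, 10.1])
rms_error = np.sqrt(np.mean((x - x.mean()) ** 2))
print(rms_error, x.std())  # identical: RMS deviation from the mean is the std
```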
Second, after you have $n$ measurements, you want to estimate $\bar{X}$, so you average your measurements and get $\langle X_i \rangle$. Of course, this cannot be precisely equal to $\bar{X}$, because all of these numbers lie on a continuum. So how far off are you from the truth? The central limit theorem tells us that after taking enough measurements, no matter the distribution of the $X_i$, your estimate will behave as a Gaussian with mean $\bar{X}$ and standard deviation $\frac{\sigma}{\sqrt{n}}$: the more you increase $n$, the narrower your Gaussian will be and the closer your estimate will be to the truth. The intuition behind this is as @Physics Enthusiast answered.
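A brief sketch of that $\frac{\sigma}{\sqrt{n}}$ behavior, deliberately using a non-Gaussian parent distribution (the exponential and all the numbers are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, n, trials = 2.0, 100, 10_000
# The mean of n draws scatters like sigma / sqrt(n), even though the
# parent (exponential) distribution is skewed, not Gaussian.
means = rng.exponential(scale=sigma, size=(trials, n)).mean(axis=1)
print(means.std())         # close to...
print(sigma / np.sqrt(n))  # ...0.2
```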