Here are some thoughts about this question, but not a definitive answer: I don't believe one exists.
In general, error propagation is founded on the assumption that the distribution of the errors is Gaussian, and that the error is small compared to the value of the quantity. In that case, a simple propagation of errors is possible.
For example, assuming that the error is small, you should be able to take the derivative of your function with respect to each of the variables - then you can use that to determine the error in the result.
Take your case of
$$F =\sqrt{A^2+B^2}$$
Derivative with respect to A, B:
$$\frac{\partial F}{\partial A} = \frac{A}{\sqrt{A^2+B^2}}\\
\frac{\partial F}{\partial B} = \frac{B}{\sqrt{A^2+B^2}}$$
If the error in A is $\Delta A$, and the error in B is $\Delta B$, then the total expected error is obtained by adding the individual contributions in quadrature:
$$\Delta F = \frac{\sqrt{(A\,\Delta A)^2 + (B\,\Delta B)^2}}{\sqrt{A^2+B^2}}$$
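As a minimal sketch of this propagation for $F=\sqrt{A^2+B^2}$, using $\partial F/\partial A = A/F$ and $\partial F/\partial B = B/F$ (the values $A=3$, $B=4$ and their errors below are made up purely for illustration):

```python
import math

def propagate(A, B, dA, dB):
    """First-order Gaussian error propagation for F = sqrt(A^2 + B^2).

    Uses dF/dA = A/F and dF/dB = B/F, so
    dF = sqrt((A*dA)^2 + (B*dB)^2) / F.
    """
    F = math.hypot(A, B)
    dF = math.sqrt((A * dA) ** 2 + (B * dB) ** 2) / F
    return F, dF

# Hypothetical inputs: A = 3 +/- 0.1, B = 4 +/- 0.2
F, dF = propagate(3.0, 4.0, 0.1, 0.2)
print(F, dF)  # F = 5.0, dF = sqrt(0.09 + 0.64)/5 ≈ 0.171
```

Note that each term is weighted by the corresponding partial derivative; an error on the larger of the two inputs contributes more to $\Delta F$.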
However, the moment you state that your distribution is not symmetrical, the situation changes. If you have a sufficient number of variables with "small but non-Gaussian" error distributions, the central limit theorem tells us that the result will nonetheless be Gaussian distributed: in that case you can compute the standard deviation of each (non-Gaussian) individual distribution, and use those as surrogates in your error propagation calculation. But if you have a small number of variables (as in your example), AND the distribution is not Gaussian, then there is no method I'm aware of that solves the question analytically. It can, however, be addressed with a simple Monte Carlo simulation.
In a Monte Carlo simulation, you sample the distributions of your input variables, and transform them according to the formula; you can then plot the resulting output distribution, and compute its shape etc.
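A sketch of that procedure in Python (the skewed lognormal inputs below are purely illustrative choices for "small but non-Gaussian" errors, not taken from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical asymmetric inputs: lognormal draws shifted so the
# distributions are centred near A = 3 and B = 4 but skewed.
A = 3.0 + (rng.lognormal(mean=0.0, sigma=0.5, size=N) - np.exp(0.125))
B = 4.0 + (rng.lognormal(mean=0.0, sigma=0.3, size=N) - np.exp(0.045))

# Push every sample through the formula of interest.
F = np.sqrt(A**2 + B**2)

# Summarise the output distribution: centre, width, asymmetric interval.
print(np.mean(F), np.std(F))
print(np.percentile(F, [2.5, 97.5]))
```

Because the output interval comes from percentiles of the sampled distribution rather than a symmetric $\pm\sigma$, it captures the asymmetry that analytic propagation cannot.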
The upper and lower limits of the output can in principle be computed by setting the input variables to their extreme values (this is sometimes done for "worst case analysis"); but that rarely gives you any really useful insight, since error distributions most often describe something stochastic rather than deterministic (which means that an upper limit is almost never "hard"). And as I said: the moment you have more than a small number of variables with similar weights, the output distribution will start to look Gaussian.
First, I believe you mixed up the estimates and standard errors. Here is the output of my fit:
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  17.2889    11.9553   1.446    0.198
x            30.0440     0.9565  31.409 6.92e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 10.29 on 6 degrees of freedom
Multiple R-squared: 0.994, Adjusted R-squared: 0.9929
F-statistic: 986.5 on 1 and 6 DF, p-value: 6.919e-08
Regarding your second question, I believe you are interested in the uncertainty of the average temperature at $10\,V$. Dale's formula assumes that the fit parameters are independent, which is not really true, see here. However, using the above coefficients and your $\sigma_V=0.05\,V$, I obtain $\sigma_T(10\,V) = \sqrt{ (10 \cdot 0.9565)^2 + (30.0440 \cdot 0.05)^2 + 11.9553^2}\ \mathrm{°C} \approx 15.4\ \mathrm{°C}$. Using the more exact formula, which accounts for the covariance between the parameters, yields $\sigma_T(10\,V) = 4.1\ \mathrm{°C}$. That the uncertainty in the center of the fit is lower than at the edges is also visible in the following plot.
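The independent-parameter number can be reproduced in a few lines (coefficients taken from the fit output above; the covariance-aware 4.1 °C figure needs the full covariance matrix of the fit and is not reproduced here):

```python
import math

# Fit results from the summary above
a, sigma_a = 30.0440, 0.9565    # slope and its standard error
b, sigma_b = 17.2889, 11.9553   # intercept and its standard error
V, sigma_V = 10.0, 0.05         # voltage and its uncertainty

# Independent-parameter propagation for T = a*V + b.
# Ignoring the slope-intercept covariance overestimates the
# uncertainty in the middle of the fitted range.
sigma_T = math.sqrt((V * sigma_a)**2 + (a * sigma_V)**2 + sigma_b**2)
print(round(sigma_T, 1))  # ≈ 15.4 (°C)
```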
This is because adding the squares of uncertainties and taking the square root (known as adding in quadrature) only works if the uncertainties are uncorrelated. In this case, the uncertainty of adjacent spacings is correlated: if you are uncertain as to a given peak voltage, then the spacing to both the peak on its left and to its right are affected by this. For example, if the peak were actually slightly further to the right, then the spacing on the right would decrease while that on the left would increase.
The general formula for any possible correlations is
$$\delta f\equiv\sqrt{\langle (f-\langle f\rangle)^2\rangle}=\sqrt{\sum_{i,j}\frac{\partial f}{\partial x_i}\frac{\partial f}{\partial x_j}\langle (x_i-\langle x_i \rangle)(x_j-\langle x_j\rangle)\rangle}$$
where the angle brackets denote an expectation value. Uncorrelated variables would have $\langle(x_i-\langle x_i \rangle)(x_j-\langle x_j\rangle)\rangle=0$ if $i\neq j$ which reduces to the formula you gave. However, in your case, $x_i\equiv\Delta v_{i}=v_{i+1} -v_{i}$, so adjacent spacings are correlated with each other because they both depend on the same $v_i$. Specifically, you would find that $\langle(\Delta v_{i}-\langle\Delta v_{i}\rangle)(\Delta v_{j}-\langle\Delta v_{j}\rangle)\rangle$ does not equal zero for $j=i\pm1$. Plugging it into the general formula with the function
$$f\equiv\bar{\Delta v} = \frac{\sum_i \Delta v_i}{N}$$
would then give the (correct) result
$$\delta(\bar{\Delta v}) = \frac{\sqrt{ \delta v_1^2 +\delta v_{N+1}^2}}{N}$$
though obviously you arrived at this via a much simpler method.
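A quick Monte Carlo check of this result, under an assumed setup of $N+1$ equally spaced peaks with an identical (made-up) uncertainty $\sigma$ on every peak position:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10          # number of spacings, so N + 1 peaks
sigma = 0.05    # assumed identical uncertainty on every peak position
true_peaks = np.arange(N + 1) * 1.0   # evenly spaced "true" peak voltages

trials = 200_000
v = true_peaks + rng.normal(0.0, sigma, size=(trials, N + 1))

# Mean spacing telescopes: only the first and last peak survive.
mean_spacing = (v[:, -1] - v[:, 0]) / N

mc = mean_spacing.std()                 # observed spread
predicted = np.sqrt(2) * sigma / N      # sqrt(dv_1^2 + dv_{N+1}^2) / N
naive = np.sqrt(2) * sigma / np.sqrt(N) # quadrature sum of spacing errors

print(mc, predicted, naive)
```

The observed spread matches the telescoped formula, while the naive quadrature sum over the $N$ (correlated) spacings is larger by a factor of $\sqrt{N}$.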