Both you and the book made a mistake, but the book's mistake is large, and an error of principle, while your mistake is just simple arithmetic.
First, you should get a feel for the errors involved: the mass error and the v error are negligible, because they are of order a percent or two, while the error in the difference in x, whose value is .8 m, is .14 m, as you calculated, which is about 18%. This is something you should be aware of: when you subtract approximately equal quantities, the errors amplify, because the fractional error is what is important, and the quantity itself becomes smaller.
In your expression,
${\delta F\over F} = \sqrt{(\frac{0.5\ \mathrm{kg}}{54.0\ \mathrm{kg}})^2 + 2 \times (\frac{0.2\ \mathrm{ms}^{-1}}{6.3 \ \mathrm{ms}^{-1}})^2 + (\frac{0.1414\ \mathrm{m}}{0.8\ \mathrm{m}})^2} \approx 0.133$
You didn't get the right answer. The answer is almost exactly equal to the square root of the last term, or
$${\delta F\over F} \approx {.14 \over .8} \approx .18$$
The actual error is 18%, not 13%. The remaining terms make this a little bit bigger, but not much. You made an error of arithmetic, which could have been avoided by noting that the last term, the error in $\Delta X$, is the only important one.
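As a sanity check, the quadrature sum can be evaluated exactly as written, with the numbers quoted above (a quick numerical sketch):

```python
import math

# Numbers quoted in the expression above.
m, dm = 54.0, 0.5        # mass (kg) and its uncertainty
v, dv = 6.3, 0.2         # speed (m/s) and its uncertainty
dX, u_dX = 0.8, 0.1414   # position difference (m) and its uncertainty

rel = math.sqrt((dm / m)**2 + 2 * (dv / v)**2 + (u_dX / dX)**2)
print(round(rel, 2))          # 0.18, not 0.13

# The last term alone already gives almost the whole answer:
print(round(u_dX / dX, 2))    # 0.18
```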
But the book did the following brain-damaged error estimate: they took the two values of x, and treated the plus/minus error as something you add or subtract to the quantity to find the biggest and smallest value it can have. Then they took the "boundary" values by adding/subtracting .1 from each, to get the smallest and largest values of $\Delta X$:
$$\Delta X_s = (4.7 - .1) - ( 3.9 + .1 ) = .6$$
$$\Delta X_l = (4.7 + .1) - ( 3.9 - .1) = 1.0$$
This gives $\Delta X = .8 \pm .2$, a 25% error. This procedure is wrong in principle, because the errors in the two x values are independent, and it is extremely unlikely that they will align to be exactly opposite. The correct estimate is that the error is 18%, for both $\Delta X$ and the final answer.
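A quick Monte Carlo makes the point concrete. This sketch assumes, for illustration, that each x reading carries an independent Gaussian error of 0.1 m; the spread of the difference then comes out near the quadrature value, not the worst-case one:

```python
import random
import statistics

random.seed(0)
N = 100_000

# Each of the two x readings gets an independent Gaussian error of 0.1 m
# (an assumption for illustration, matching the numbers above).
diffs = [(4.7 + random.gauss(0, 0.1)) - (3.9 + random.gauss(0, 0.1))
         for _ in range(N)]

sd = statistics.stdev(diffs)
print(round(sd, 2))  # close to sqrt(0.1**2 + 0.1**2) = 0.14, not the worst-case 0.2
```

The errors almost never line up exactly opposite, which is why the .2 boundary estimate overstates the spread.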
The modern framework for the evaluation of measurement uncertainty is given by a series of guides prepared by the Joint Committee for Guides in Metrology (JCGM) that can be found on the BIPM website.
The idea is that a measurand is modelled by a random variable (even in the case of a single measurement), and a measured quantity value $x$ is thus considered as a realization of a random variable $X$. The result of a measurement should then represent the information available about the random variable associated with a measurand, either as a probability density function or, more succinctly, as a representative value (e.g., the mean or the median) and a measure of uncertainty (spread) like the standard deviation. Scientific judgement and statistics (classical or Bayesian) offer techniques to assign probabilities to events and properties to random variables.
When the spread of a random variable $X$ is expressed as a standard deviation, it is called standard uncertainty and denoted by $u(x)$.
In the case of an indirect measurement modelled by a measurement function of the type $Y=f(X_1,\ldots,X_n)$ the law of propagation of uncertainty for uncorrelated input quantities $X_1,\ldots,X_n$ is given by
$$u(y) = \sqrt{\sum_{k=1}^n\left(\frac{\partial f}{\partial x_k}\right)^2u^2(x_k)}.$$
There are only two assumptions underlying the above equation: i) that the input quantities $X_1,\ldots,X_n$ are uncorrelated (there is also a more general formula for correlated quantities); ii) that the spread of the input quantities is sufficiently small that the function $f$ can be approximated as a first-order Taylor series (there are methods for the propagation of uncertainty when the linearity assumption fails). You don't need to assume a Gaussian distribution. The above equation should be used whether the uncertainty of the input quantities has been evaluated from statistics on repeated measurements or from scientific judgement on a single measurement.
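The propagation law can be applied numerically by estimating the partial derivatives with central finite differences. This is a minimal sketch; `propagate` is a hypothetical helper, and the rectangle-area measurement function at the end is purely illustrative:

```python
import math

def propagate(f, x, u):
    """Law of propagation of uncertainty for uncorrelated inputs:
    u(y)^2 = sum_k (df/dx_k)^2 * u(x_k)^2, with each partial
    derivative estimated by a central finite difference."""
    total = 0.0
    for k in range(len(x)):
        h = u[k] * 1e-3 if u[k] > 0 else 1e-9   # step scaled to the uncertainty
        xp, xm = list(x), list(x)
        xp[k] += h
        xm[k] -= h
        dfdx = (f(xp) - f(xm)) / (2 * h)
        total += (dfdx * u[k]) ** 2
    return math.sqrt(total)

# Illustrative measurement function: the area of a rectangle,
# y = x1 * x2, with x1 = 2.0 +/- 0.1 and x2 = 3.0 +/- 0.2.
area = lambda x: x[0] * x[1]
u_y = propagate(area, [2.0, 3.0], [0.1, 0.2])
print(round(u_y, 3))  # sqrt((3*0.1)**2 + (2*0.2)**2) = 0.5
```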
In this framework, you can assign a probability distribution to a ruler measurement by using scientific judgement. In the language of metrology this is called a Type B evaluation of the uncertainty.
Now, let's suppose that we have a ruler with marks that are 1 mm apart. If you tell me that you measured a length of, say, 100 mm, I can think: given the marks, if the length were greater than 100.5 mm, you would have said 101 mm; conversely, if the length were less than 99.5 mm, you would have said 99 mm. Without further information, I can model the length $l$ as a random variable $L$ with uniform distribution between 99.5 mm and 100.5 mm, with half width $\delta l=0.5\,\mathrm{mm}$. The standard uncertainty would then be (see § 4.3.7 of the GUM)
$$u(l) = \frac{\delta l}{\sqrt{3}} \approx \frac{0.5\,\mathrm{mm}}{\sqrt{3}}\approx 0.3\,\mathrm{mm}$$
However, I can also think in a more refined way: if the length is 100.5 mm, there is a 50% chance that you say 100 mm and a 50% chance that you say 101 mm, and as the length increases from 100.5 mm to 101 mm there is less and less probability that you say 100 mm (and likewise as the length decreases from 99.5 mm towards 99 mm). So, a better assumption for the probability distribution of $L$ would be a triangular distribution with half width $\delta l=1\,\mathrm{mm}$. With this assumption, the uncertainty evaluation yields a slightly greater result. In fact, from the linked GUM document, figure 2(b), we have
$$u(l) = \frac{\delta l}{\sqrt{6}} = \frac{1\,\mathrm{mm}}{\sqrt{6}}\approx 0.4\,\mathrm{mm}$$
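Both factors, $1/\sqrt{3}$ for the rectangular distribution and $1/\sqrt{6}$ for the triangular one, can be checked with a quick simulation (a sketch: the triangular variable is built as the sum of two independent uniforms, which is one standard way to obtain it):

```python
import random
import statistics

random.seed(1)
N = 200_000

# Rectangular (uniform) distribution with half width 0.5 mm.
rect = [random.uniform(-0.5, 0.5) for _ in range(N)]

# Triangular distribution with half width 1 mm: the sum of two
# independent uniforms on (-0.5, 0.5).
tri = [random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
       for _ in range(N)]

print(round(statistics.stdev(rect), 2))  # ~ 0.5/sqrt(3) = 0.29 mm
print(round(statistics.stdev(tri), 2))   # ~ 1/sqrt(6)   = 0.41 mm
```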
Best Answer
When you're combining measurements with different uncertainties, taking the mean is not the right thing to do. (Well, it's good enough if the uncertainties are almost the same.)
The right thing to do is chi-squared analysis, which gives a higher weight to the more accurate measurements. Here's how you do it:
$$\chi^2 = \sum \frac{(\text{observed value} - \text{true value})^2}{\text{(uncertainty associated with that observation)}^2}$$
You numerically choose the "true value" that minimizes $\chi^2$. That's your best guess.
Next, use the chi-square distribution to calculate the p-value (assuming the best guess is right). (Degrees of freedom is one less than the number of observations.) This will tell you whether your uncertainties were reasonable or whether you underestimated them. For example, if one measurement is $5.0 \pm 0.1$, and another measurement is $10.0 \pm 0.1$, then you probably underestimated your uncertainties.
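For Gaussian errors the $\chi^2$-minimizing "true value" can be written in closed form: it is the inverse-variance weighted mean, so no numerical search is needed in the simplest case. In this sketch, `combine` is a hypothetical helper, and the p-value line uses the fact that the $\chi^2$ survival function for one degree of freedom is $\mathrm{erfc}(\sqrt{\chi^2/2})$:

```python
import math

def combine(values, uncerts):
    """Inverse-variance weighted mean: the 'true value' that minimizes chi^2."""
    w = [1 / u**2 for u in uncerts]
    best = sum(wi * xi for wi, xi in zip(w, values)) / sum(w)
    chi2 = sum(((xi - best) / ui)**2 for xi, ui in zip(values, uncerts))
    return best, chi2

# A consistent pair: chi^2 is of order 1 and the p-value is reasonable.
best, chi2 = combine([4.9, 5.1], [0.1, 0.2])
p = math.erfc(math.sqrt(chi2 / 2))   # chi^2 survival function, 1 dof
print(round(best, 2), round(chi2, 2), round(p, 2))

# The example from the text: wildly inconsistent measurements.
_, chi2_bad = combine([5.0, 10.0], [0.1, 0.1])
print(round(chi2_bad))  # enormous -- the uncertainties were underestimated
```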
If you underestimated your uncertainties -- which is not unusual in practice -- then the right thing to do is to figure out where you went wrong in your uncertainty estimation and correct the mistake. But there is a lazier alternative too, which is often good enough if the stakes are low: you can scale up all the uncertainties by the same factor until you get a reasonable $\chi^2$ p-value, say 0.5.
OK, now you have plausible measurement uncertainties, either because you estimated them well from the beginning or because you scaled them up. Next, you try varying the "true value" until the p-value dips below, say, 5%. This procedure gives you lower-bound and upper-bound error bars on your final best-guess measurement.
I haven't done this in many years, sorry for any mis-remembering. I think it's discussed in Bevington & Robinson, *Data Reduction and Error Analysis for the Physical Sciences*.