You should first accept the fact that it's an elliptic integral, and therefore doesn't have an elementary expression without elliptic functions. If you had a numerical library with elliptic functions, then great. Otherwise, you need to either implement elliptic functions yourself, or implement numerical integration of your integral.
I recommend numerical integration, because in context it is conceptually simple and reliable. Your integrand has a fairly tame form: It can't blow up, it is continuous, and it is real analytic unless it touches zero. In this situation, Gaussian quadrature has excellent properties. I don't feel like doing a precise calculation, but I would expect that for any choice of the coefficients, Gaussian quadrature with just 5 evaluation points already has to be fairly close to the exact answer.
The above is part of an answer, but not a complete answer if you really want 64 bits of accuracy. Assuming that the integrand is real analytic, Gaussian quadrature or Clenshaw-Curtis will converge exponentially. It seems reasonable to use Clenshaw-Curtis, which lets you recycle evaluation points and has a predictable formula for the numerical integration weights, with more and more points until the answer looks accurate.
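As a concrete sketch of the recommended approach (Python with NumPy; the function names and the control-point layout are my own assumptions, not anything from the question), fixed-order Gauss-Legendre quadrature applied to the speed $|B'(t)|$ of a cubic Bezier looks like this:

```python
import numpy as np

def bezier_speed(t, p):
    # p: array of shape (4, 2) holding the control points p1..p4.
    # B'(t) = 3(1-t)^2 (p2-p1) + 6(1-t)t (p3-p2) + 3t^2 (p4-p3)
    d = 3 * (p[1:] - p[:-1])                 # derivative control points, shape (3, 2)
    b = np.array([(1 - t)**2, 2 * (1 - t) * t, t**2])
    v = b @ d                                # velocity vector at parameter t
    return np.hypot(v[0], v[1])

def arc_length(p, n=16):
    # n-point Gauss-Legendre quadrature, with nodes mapped from [-1,1] to [0,1].
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (x + 1)
    return 0.5 * sum(wi * bezier_speed(ti, p) for ti, wi in zip(t, w))
```

For a smooth curve, doubling `n` until two successive estimates agree gives a cheap error check.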
The only problem is in the annoying case in which the integrand touches zero, or comes close to touching zero, which can be interpreted geometrically as a point on the spline with nearly zero velocity. (Typically it looks like a cusp.) Then the integrand is NOT real analytic and these numerical methods do not converge exponentially. Or, in the near-zero case, the integrand is analytic but the exponential rate of convergence is slow. I'm sure that there are tricks available that will handle this case properly: You could cut out an interval near the bad point and do something different, or you could subtract off a known integral to tame the bad point. But at the moment I do not have specific advice for an algorithm that is both reasonably fast and reasonably convenient. Clenshaw-Curtis is convenient and usually very fast for this problem, but not all that fast in bad cases if you push it to 64 bits of precision.
Also, these methods can be thought of as a more sophisticated version of chordal approximation. Chordal approximation faces the same issue, except worse: It never converges at an exponential rate for a non-trivial cubic spline. If you want 64 bits, you might need a million chords.
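For comparison, chordal approximation just samples the curve and sums the chord lengths; its error decays only like $O(1/n^2)$, which is what forces the huge chord counts mentioned above. A sketch (again Python/NumPy with my own function names):

```python
import numpy as np

def bezier_point(t, p):
    # Evaluate the cubic Bezier with control points p (shape (4, 2)) at t.
    b = np.array([(1 - t)**3, 3 * (1 - t)**2 * t, 3 * (1 - t) * t**2, t**3])
    return b @ p

def chord_length(p, n):
    # Sum the lengths of n chords over a uniform parameter grid.
    ts = np.linspace(0.0, 1.0, n + 1)
    pts = np.array([bezier_point(t, p) for t in ts])
    return np.sum(np.hypot(*np.diff(pts, axis=0).T))
```

Since each refinement of a nested polyline can only lengthen it, `chord_length` increases toward the true arc length as `n` grows.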
Meanwhile, the GNU Scientific Library does have elliptic function routines. If you have elliptic functions, then again, your integral is not all that simple, but it is elementary. I don't know whether GSL or equivalent is available for your software problem. If it is, then an elliptic function formula is probably by far the fastest (for the computer, not necessarily for you).
In a recent comment, bpowah says "All I wanted to know is whether or not it was faster to compute the given integral numerically or exactly." Here is a discussion. Computing an integral, or any transcendental quantity, "exactly" is an illusion. Transcendental functions are themselves computed by approximate numerical procedures of various kinds: Newton's method, power series, arithmetic-geometric means, etc. There is an art to coding these functions properly. A competitive implementation of a function even as simple as sin(x) is already non-trivial.
Even so, I'm sure that it's faster in principle to evaluate the integral in question in closed form using elliptic functions. It could be hard work to do this right, because the first step is to factor the quartic polynomial under the square root. That already requires either the quartic formula (unfortunately not listed in the GNU Scientific Library even though it has the cubic) or a general polynomial solver (which is in GSL but has unclear performance and reliability). The solution also requires elliptic functions with complex arguments, even though the answer is real. It could require careful handling of branch cuts of the elliptic functions, which are multivalued. With all of these caveats, it doesn't seem worth it to work out an explicit formula. The main fact is that there is one, if you have elliptic functions available but not otherwise.
The merit of a numerical integration algorithm such as Gaussian quadrature (or Clenshaw-Curtis, Gauss-Kronrod, etc.) is that it is vastly simpler to code. It won't be as fast, but it should be quite fast if it is coded properly. The only problem is that the integrand becomes singular if it reaches 0, and nearly singular if it is near 0. This makes convergence much slower, although still not as slow as approximation with chords. With special handling of the near-singular points, it should still be fine for high-performance numerical computation. For instance, a polished strategy for numerical integration might well be faster than a clumsy evaluation of the relevant elliptic functions.
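One simple form of "special handling of the near-singular points" is adaptive subdivision: compare two quadrature orders on each interval and split wherever they disagree, which isolates the bad point in ever-smaller subintervals while the smooth pieces converge fast. A hedged Python sketch (names, orders, and tolerances are my own choices):

```python
import numpy as np

def adaptive_length(speed, a=0.0, b=1.0, tol=1e-12, n=8):
    # Compare n-point and 2n-point Gauss-Legendre estimates on [a, b];
    # if they disagree, split the interval and recurse.  Subdivision
    # confines a near-zero-speed point to tiny subintervals, restoring
    # fast convergence everywhere else.
    def gauss(f, a, b, n):
        x, w = np.polynomial.legendre.leggauss(n)
        t = 0.5 * (b - a) * x + 0.5 * (a + b)
        return 0.5 * (b - a) * np.dot(w, [f(ti) for ti in t])
    coarse, fine = gauss(speed, a, b, n), gauss(speed, a, b, 2 * n)
    if abs(fine - coarse) < tol * max(1.0, abs(fine)):
        return fine
    m = 0.5 * (a + b)
    return adaptive_length(speed, a, m, tol, n) + adaptive_length(speed, m, b, tol, n)
```

As a model of a cusp, a speed like $|t - \tfrac12|$ vanishes at an interior point; the subdivision lands on the bad point and each half is then handled exactly.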
A cubic bezier defined by $p_1, p_2, p_3, p_4$ has parametric equation $$B(t) = (1-t)^3p_1 + 3(1-t)^2tp_2 + 3(1-t)t^2p_3 + t^3p_4.$$
The setup here also defines $A(t) = (1-t) p_2 + tp_3$.
The way $C$ is defined, there are some real $s(t)$ and $u(t)$, both possibly depending on $p_1,\ldots,p_4$ such that $C = sA + (1-s)B = up_1 + (1-u)p_4$.
So $B - C = B - sA - (1-s)B = s(B-A)$. Hence $\frac{|B - C|}{|A - B|} = |s|$.
On the other hand, we want $sA + (1-s)B - up_1 - (1-u)p_4 = 0$. That comes out to
$$((1-s)(1-t)^3 - u)p_1 + (s(1-t) + 3(1-s)t(1-t)^2)p_2 + (st + 3(1-s)t^2(1-t))p_3 + ((1-s)t^3 - (1-u))p_4 = 0.$$
Set $$s = \frac{t^3+(1-t)^3-1}{t^3 + (1-t)^3}$$ and $$u = \frac{(1-t)^3}{t^3 + (1-t)^3}.$$
Then the coefficients of $p_1,\ldots,p_4$ in the above expression become identically 0. Note that the denominators of these expressions are never 0 for $t \in [0,1]$, so the divisions are ok.
So your ratio is given by the $|s|$ above (or its reciprocal, depending on how you're taking the ratio).
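These identities are easy to sanity-check numerically. A small Python/NumPy sketch (function names are mine), which evaluates $A$, $B$, and both expressions for $C$ at a given $t$ and confirms they agree:

```python
import numpy as np

def su(t):
    # The s and u from the answer; the denominator is never 0 on [0, 1].
    d = t**3 + (1 - t)**3
    s = (d - 1) / d
    u = (1 - t)**3 / d
    return s, u

def check(t, p1, p2, p3, p4):
    A = (1 - t) * p2 + t * p3
    B = ((1 - t)**3 * p1 + 3 * (1 - t)**2 * t * p2
         + 3 * (1 - t) * t**2 * p3 + t**3 * p4)
    s, u = su(t)
    C1 = s * A + (1 - s) * B          # C as a combination of A and B
    C2 = u * p1 + (1 - u) * p4        # C as a combination of p1 and p4
    return np.allclose(C1, C2)
```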
If you have a Bezier curve $(x(t),y(t))$, the closest point to the origin (say) is given by the minimum of $f(t) = x(t)^2 + y(t)^2$. By calculus, this minimum is either at the endpoints or when the derivative vanishes, $f'(t) = 0$. This latter condition is evidently a quintic polynomial. Now, there is no exact formula in radicals for solving the quintic. However, there is a really nifty new iterative algorithm based on the symmetry group of the icosahedron due to Doyle and McMullen. They make the point that you use a dynamical iteration anyway to find radicals via Newton's method; if you think of a quintic equation as a generalized radical, then it has an iteration that is just as robust numerically as finding radicals with Newton's method.
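If the Doyle-McMullen iteration is overkill, a pragmatic sketch is to get the quintic's roots from a companion-matrix solver and compare the real roots in $[0,1]$ against the endpoints. Here is one in Python/NumPy, working in the power basis rather than the Bernstein basis (the change of basis is routine and omitted; the function name and tolerances are mine):

```python
import numpy as np

def closest_t(px, py, q):
    # px, py: cubic coefficient arrays (highest power first) for x(t), y(t).
    # Shift so the query point q is at the origin.
    x = np.array(px, float); x[-1] -= q[0]
    y = np.array(py, float); y[-1] -= q[1]
    # f(t) = x(t)^2 + y(t)^2, so f'(t)/2 = x x' + y y' is the quintic;
    # dropping the factor 2 does not change the roots.
    fp = np.polyadd(np.polymul(x, np.polyder(x)),
                    np.polymul(y, np.polyder(y)))
    cands = [0.0, 1.0]
    for r in np.roots(fp):
        if abs(r.imag) < 1e-9 and 0.0 <= r.real <= 1.0:
            cands.append(r.real)
    f = lambda t: np.polyval(x, t)**2 + np.polyval(y, t)**2
    return min(cands, key=f)
```

`np.roots` finds the complex roots as eigenvalues of the companion matrix, which is robust for degree 5, and the endpoint candidates cover the boundary case from the calculus argument.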
Contrary to what lhf said, Cardano's formula for the cubic polynomial is perfectly stable numerically. You just need arithmetic with complex numbers even if, indeed exactly when, all three roots are real.
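To illustrate the point, here is a sketch of Cardano's formula in complex arithmetic (Python; variable names are mine, and it does not handle the degenerate triple-root case $p = q = 0$). With three real roots the intermediate cube roots are genuinely complex, the "casus irreducibilis", but the final answers come out real to machine precision:

```python
import cmath

def cardano(a, b, c, d):
    # Roots of a x^3 + b x^2 + c x + d = 0 via Cardano's formula.
    b, c, d = b / a, c / a, d / a
    p = c - b * b / 3                       # depressed cubic: t^3 + p t + q
    q = d - b * c / 3 + 2 * b**3 / 27
    disc = (q / 2)**2 + (p / 3)**3
    s = -q / 2 + cmath.sqrt(disc)
    if abs(s) < 1e-30:                      # pick the nonzero branch
        s = -q / 2 - cmath.sqrt(disc)
    u = s ** (1 / 3)
    w = complex(-0.5, 3**0.5 / 2)           # primitive cube root of unity
    return [u * w**k - p / (3 * u * w**k) - b / 3 for k in range(3)]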
There is also a more ordinary approach to finding real roots of a quintic polynomial. (Like Cardano's formula, the Doyle-McMullen solution requires complex numbers and finds the complex roots equally easily.) Namely, you can use a cutoff procedure to switch from divide-and-conquer to Newton's method. For example, if your quintic $q(x)$ on the unit interval $[0,1]$ is $40-100x+x^5$, then it is clearly close enough to linear that Newton's method will work; you don't need divide-and-conquer. So if you have cut down the solution space to any interval, you can rescale the interval to $[0,1]$ (or maybe better $[-1,1]$), and then in the new variable decide whether the norms of the coefficients guarantee that Newton's method will converge. This method should only make you feel "a little dirty", because for general high-degree polynomials it's a competitive numerical algorithm. (Higher than quintic, maybe; Doyle-McMullen is really pretty good.)
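A minimal sketch of that hybrid (Python; function names and tolerances are mine): keep an interval with a sign change, try a Newton step, and fall back to plain bisection whenever the step escapes the bracket.

```python
def newton_bisect(f, df, a, b, xtol=1e-14, max_iter=200):
    # Find a root of f in [a, b], assuming f(a) and f(b) differ in sign.
    fa, fb = f(a), f(b)
    assert fa * fb <= 0, "need a sign change on [a, b]"
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0 or b - a < xtol:
            return x
        # shrink the bracket around the sign change
        if fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
        # try a Newton step; fall back to bisection if it leaves [a, b]
        d = df(x)
        if d != 0 and a < x - fx / d < b:
            x = x - fx / d
        else:
            x = 0.5 * (a + b)
    return x
```

On the nearly linear example $40 - 100x + x^5$ from the paragraph above, the Newton step is accepted immediately and convergence is quadratic.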
See also this related MO question on the multivariate situation, which you would encounter for bicubic patches in 3D. The multivariate situation is pretty much the same: You have a choice between polynomial algebra and divide-and-conquer plus Newton's method. The higher the dimension, the more justification there is for the latter over the former.