[Math] Finding the length of a cubic B-spline

closed-form-expressions, elliptic-functions, na.numerical-analysis, special-functions, splines

Can I find an analytical solution for the length of a 2-dimensional cubic B-spline? All I can find are chorded approximations and the opinion that the analytic solution is "unbearably gruesome". However, I believe that if I had the solution to this elliptic integral: $\int_0^u \sqrt{\left(b_{1} + 2 b_{2} t + 3 b_{3} t^{2}\right)^{2} + \left(a_{1} + 2 a_{2} t + 3 a_{3} t^{2}\right)^{2}}\, dt$, I'd have my answer (at least on a per-segment basis). My goal is to implement it in some software, so a little up-front grue is OK. Is it really not worth pursuing? I can't believe I'm better off with a numerical solution. I'm after 64-bit accuracy, if that helps with the discussion.
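For concreteness, here is a minimal numerical baseline for one segment using the scipy stack I already have (QUADPACK via `scipy.integrate.quad`). The function name `segment_length` and the coefficient values are placeholders of mine; the coefficient names `a1..a3`, `b1..b3` follow the integral above.

```python
import numpy as np
from scipy.integrate import quad

def segment_length(a1, a2, a3, b1, b2, b3, u):
    """Arc length of one cubic segment from t = 0 to t = u (numerical baseline)."""
    def speed(t):
        dx = a1 + 2.0 * a2 * t + 3.0 * a3 * t**2
        dy = b1 + 2.0 * b2 * t + 3.0 * b3 * t**2
        return np.hypot(dx, dy)
    length, abserr = quad(speed, 0.0, u, epsabs=1e-14, epsrel=1e-14)
    return length

# placeholder coefficients, not from any real spline
print(segment_length(1.0, -0.5, 0.25, 0.3, 0.8, -0.1, 1.0))
```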

Sympy cannot integrate it. I have not tried Mathcad, Maxima, or other CASs. I do not have any significant math background (if that's not already obvious).


Edit (background info for Greg):

This spline will be used as a control surface for conserved quantities in a physics analysis. Flux of these quantities must also be known. The spline must be second-derivative continuous and must pass through an arbitrary number of specified knot points. A cubic B-spline seems right for the job. It is also required that the length of the spline, or of any arbitrary interval along the spline, be determinable to near machine precision. Unfortunately, this needs to be done quickly, because the knots of the spline and the parameterization intervals are themselves being converged upon to meet criteria that depend on these lengths. And to make matters worse, this is a time-transient solution (which means another level of iteration).

I am using Python to prototype this on a 64-bit multi-core PC. (I wince as I type 'Python' because it has a reputation for being slow.) However, I am leveraging numpy (for basic array math), scipy (for FITPACK, QUADPACK, etc.), and possibly sympy (for elliptic functions) for the heavy lifting. I plan (if I can ever find an exact closed-form "gruesome" solution) to write it up in C or FORTRAN and compile it as its own module.

It has to run on a PC as it is not a one-time-use algorithm. It will be used by others many times over in a design environment. (I guess that's another level of iteration again.)

Best Answer

You should first accept the fact that it's an elliptic integral, and therefore doesn't have an elementary expression without elliptic functions. If you had a numerical library with elliptic functions, then great. Otherwise, you need to either implement elliptic functions yourself, or implement numerical integration of your integral.

I recommend numerical integration, just because in context it is conceptually simple and reliable. Your integrand has a fairly tame form: it can't blow up, it is continuous, and it is real analytic unless it touches zero. In this situation, Gaussian quadrature has excellent properties. I don't feel like doing a precise calculation, but I would expect that, for any choice of the coefficients, Gaussian quadrature with just 5 evaluation points is already fairly close to the exact answer.
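A minimal sketch of that fixed-order rule, using numpy's Gauss-Legendre nodes and weights; `speed` stands for the arc-length integrand of one segment and the example coefficients are arbitrary placeholders.

```python
import numpy as np

def gauss_legendre_length(speed, u, npts=5):
    """Integrate speed(t) over [0, u] with an npts-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(npts)   # nodes/weights on [-1, 1]
    t = 0.5 * u * (x + 1.0)                        # map nodes to [0, u]
    return 0.5 * u * np.sum(w * speed(t))

# example: placeholder cubic-segment speed, must accept array input
speed = lambda t: np.hypot(1.0 - 1.0 * t + 0.75 * t**2,
                           0.3 + 1.6 * t - 0.3 * t**2)
print(gauss_legendre_length(speed, 1.0, npts=5))
```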


The above is part of an answer, but not a complete answer if you really want 64 bits of accuracy. Assuming that the integrand is real analytic, Gaussian quadrature or Clenshaw-Curtis will converge exponentially. It seems reasonable enough to use Clenshaw-Curtis, which lets you recycle evaluation points and has a predictable formula for the numerical integration weights, with more and more points until the answer looks accurate.
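Here is a hedged sketch of that doubling strategy, with the weights computed by a standard Trefethen-style Clenshaw-Curtis construction. For simplicity it recomputes the nodes at every level; the nesting of the Chebyshev points is what would allow reuse of evaluations in a polished version. The function names and tolerances are placeholders.

```python
import numpy as np

def clencurt(n):
    """Clenshaw-Curtis nodes and weights for n+1 points on [-1, 1]."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    ii = np.arange(1, n)
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n**2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[ii]) / (4 * k**2 - 1)
        v -= np.cos(n * theta[ii]) / (n**2 - 1)
    else:
        w[0] = w[n] = 1.0 / n**2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[ii]) / (4 * k**2 - 1)
    w[ii] = 2.0 * v / n
    return x, w

def cc_length(speed, u, tol=1e-14, n0=8, nmax=4096):
    """Integrate speed over [0, u], doubling the point count until converged."""
    n, old = n0, None
    while n <= nmax:
        x, w = clencurt(n)
        t = 0.5 * u * (x + 1.0)            # map [-1, 1] -> [0, u]
        val = 0.5 * u * np.dot(w, speed(t))  # speed must accept array input
        if old is not None and abs(val - old) < tol * max(1.0, abs(val)):
            return val
        old, n = val, 2 * n
    return old  # tolerance not reached; the bad (near-cusp) cases land here
```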

The only problem is in the annoying case in which the integrand touches zero, or comes close to touching zero, which can be interpreted geometrically as a point on the spline with nearly zero velocity. (Typically it looks like a cusp.) Then the integrand is NOT real analytic and these numerical methods do not converge exponentially. Or, in the near-zero case, the integrand is analytic but the exponential rate of convergence is slow. I'm sure that there are tricks available that will handle this case properly: You could cut out an interval near the bad point and do something different, or you could subtract off a known integral to tame the bad point. But at the moment I do not have specific advice for an algorithm that is both reasonably fast and reasonably convenient. Clenshaw-Curtis is convenient and usually very fast for this problem, but not all that fast in bad cases if you push it to 64 bits of precision.
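One hedged sketch of the "do something different near the bad point" idea: the squared speed is a quartic in t, so its interior minima (the near-cusp parameters) can be located from the roots of its derivative and handed to QUADPACK as break points, so that each near-singular spot sits at a subinterval endpoint. The function name and coefficient names are placeholders following the integral in the question; this is not the only way to handle the bad case.

```python
import numpy as np
from scipy.integrate import quad

def segment_length_split(a1, a2, a3, b1, b2, b3, u):
    # derivative components: x'(t) = a1 + 2 a2 t + 3 a3 t^2, similarly y'(t)
    dx = np.poly1d([3.0 * a3, 2.0 * a2, a1])
    dy = np.poly1d([3.0 * b3, 2.0 * b2, b1])
    speed2 = dx * dx + dy * dy                 # quartic polynomial in t
    crit = speed2.deriv().roots                # up to 3 critical points
    breaks = sorted(r.real for r in crit
                    if abs(r.imag) < 1e-12 and 0.0 < r.real < u)
    speed = lambda t: np.sqrt(speed2(t))
    val, _ = quad(speed, 0.0, u, points=breaks or None,
                  epsabs=1e-14, epsrel=1e-14, limit=200)
    return val
```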

Also, these methods can be thought of as a more sophisticated version of chordal approximation. Chordal approximation faces the same issue, except worse: It never converges at an exponential rate for a non-trivial cubic spline. If you want 64 bits, you might need a million chords.
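For comparison, the chordal approximation the question mentions is just a polyline length; it is easy to write (sketch below, with `x_of_t` and `y_of_t` standing for the segment's coordinate polynomials), but it converges only algebraically, which is why the point count for ~1e-16 relative error is so large.

```python
import numpy as np

def chord_length(x_of_t, y_of_t, u, n_chords):
    """Polyline (chordal) approximation of the segment's length on [0, u]."""
    t = np.linspace(0.0, u, n_chords + 1)
    x, y = x_of_t(t), y_of_t(t)
    return np.sum(np.hypot(np.diff(x), np.diff(y)))
```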


Meanwhile, the GNU Scientific Library does have elliptic function routines. If you have elliptic functions, then again, your integral is not all that simple, but it is elementary. I don't know whether GSL or equivalent is available for your software problem. If it is, then an elliptic function formula is probably by far the fastest (for the computer, not necessarily for you).
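If staying in the Python stack, incomplete elliptic integrals in Legendre form are already available in scipy.special; the hard part, reducing the specific quartic under the square root to these standard forms, is not shown here and is exactly the "gruesome" step discussed below.

```python
# Legendre incomplete elliptic integrals F(phi | m) and E(phi | m)
from scipy.special import ellipkinc, ellipeinc

print(ellipkinc(0.5, 0.3), ellipeinc(0.5, 0.3))
```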


In a recent comment, bpowah says "All I wanted to know is whether or not it was faster to compute the given integral numerically or exactly." Here is a discussion. Computing an integral, or any transcendental quantity, "exactly" is an illusion. Transcendental functions are themselves computed by approximate numerical procedures of various kinds: Newton's method, power series, arithmetic-geometric means, etc. There is an art to coding these functions properly. A competitive implementation of a function even as simple as sin(x) is already non-trivial.

Even so, I'm sure that it's faster in principle to evaluate the integral in question in closed form using elliptic functions. It could be hard work to do this right, because the first step is to factor the quartic polynomial under the square root. That already requires either the quartic formula (unfortunately not listed in the GNU Scientific Library even though it has the cubic) or a general polynomial solver (which is in GSL but has unclear performance and reliability). The solution also requires elliptic functions with complex arguments, even though the answer is real. It could require careful handling of branch cuts of the elliptic functions, which are multivalued. With all of these caveats, it doesn't seem worth it to work out an explicit formula. The main fact is that there is one, if you have elliptic functions available but not otherwise.
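In the asker's Python stack, at least that first step is cheap: numpy's companion-matrix root finder will factor the quartic under the square root. The coefficients below are placeholders, and the subsequent reduction to elliptic functions is not attempted here.

```python
import numpy as np

# speed^2(t) = c4 t^4 + c3 t^3 + c2 t^2 + c1 t + c0, placeholder coefficients
c = [0.5, -1.2, 2.0, -0.3, 1.1]
roots = np.roots(c)          # typically two complex-conjugate pairs
print(roots)
```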

The merit of a numerical integration algorithm such as Gaussian quadrature (or Clenshaw-Curtis, Gauss-Kronrod, etc.) is that it is vastly simpler to code. It won't be as fast, but it should be quite fast if it is coded properly. The only problem is that the integrand becomes singular if it reaches 0, and nearly singular if it is near 0. This makes convergence much slower, although still not as slow as approximation with chords. With special handling of the near-singular points, it should still be fine for high-performance numerical computation. For instance, a polished strategy for numerical integration might well be faster than a clumsy evaluation of the relevant elliptic functions.
