[Math] Closest point on Bezier spline

Tags: curves, na.numerical-analysis, splines

Given a two-dimensional cubic Bezier spline defined by four control points as described here, is there a way to solve analytically for the parameter along the curve (ranging from 0 to 1) that yields the point closest to an arbitrary point in space?

$$
\mathbf{B}(t) = (1-t)^3 \,\mathbf{P}_0 + 3(1-t)^2 t\,\mathbf{P}_1 + 3(1-t) t^2\,\mathbf{P}_2 + t^3\,\mathbf{P}_3, ~~~~~ t \in [0,1]
$$

where $\mathbf{P}_0$, $\mathbf{P}_1$, $\mathbf{P}_2$, and $\mathbf{P}_3$ are the four control points of the curve.

I can solve it pretty reliably and quickly with a divide-and-conquer algorithm, but it makes me feel dirty…
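For reference, the kind of divide-and-conquer search described above might look like this Python sketch (the sample count and tolerance are arbitrary choices, and nothing here guards against settling on a local minimum when the distance function has several):

```python
import numpy as np

def bezier(P, t):
    """Point on the cubic Bezier with control points P (a 4x2 array) at t."""
    s = 1.0 - t
    return s**3 * P[0] + 3*s**2*t * P[1] + 3*s*t**2 * P[2] + t**3 * P[3]

def closest_t(P, point, samples=8, tol=1e-10):
    """Divide and conquer: sample the interval, keep the neighborhood of the
    best sample, and repeat until the interval is tiny."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        ts = np.linspace(lo, hi, samples)
        dists = [np.sum((bezier(P, t) - point)**2) for t in ts]
        i = int(np.argmin(dists))
        lo = ts[max(i - 1, 0)]
        hi = ts[min(i + 1, samples - 1)]
    return 0.5 * (lo + hi)
```

On the degenerate straight-line spline with control points (0,0), (1,1), (2,2), (3,3), the point (1.5, 1.5) projects to t = 0.5, which is a convenient sanity check.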

Best Answer

If you have a Bezier curve $(x(t),y(t))$, the closest point to the origin (say) is given by the minimum of $f(t) = x(t)^2 + y(t)^2$. By calculus, this minimum is either at the endpoints or where the derivative vanishes, $f'(t) = 0$. This latter condition is evidently a quintic polynomial equation. Now, there is no exact formula in radicals for solving the quintic. However, there is a really nifty new iterative algorithm based on the symmetry group of the icosahedron due to Doyle and McMullen. They make the point that you use a dynamical iteration anyway to find radicals via Newton's method; if you think of a quintic equation as a generalized radical, then it has an iteration that is just as robust numerically as finding radicals with Newton's method.

Contrary to what lhf said, Cardano's formula for the cubic polynomial is perfectly stable numerically. You just need arithmetic with complex numbers even if, indeed exactly when, all three roots are real.
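For illustration (this is not lhf's or the answerer's code), here is Cardano's formula for a depressed cubic $t^3 + pt + q = 0$ done entirely in complex arithmetic; taking real parts recovers all three roots even in the casus irreducibilis, where the discriminant forces complex intermediates. Python's `**` on a complex base takes the principal branch, which is what the formula needs:

```python
import cmath

def cardano(p, q):
    """All three roots of the depressed cubic t^3 + p t + q = 0 by Cardano's
    formula, keeping every intermediate value complex."""
    s = cmath.sqrt((q/2)**2 + (p/3)**3)   # may be imaginary: that is fine
    u = (-q/2 + s) ** (1/3)               # principal complex cube root
    if abs(u) < 1e-300:
        u = (-q/2 - s) ** (1/3)           # avoid u = 0 when p = 0, q > 0
    if abs(u) < 1e-300:
        return [0.0, 0.0, 0.0]            # p = q = 0: triple root at 0
    w = cmath.exp(2j * cmath.pi / 3)      # primitive cube root of unity
    return [uk - p / (3*uk) for uk in (u, u*w, u*w*w)]
```

For $t^3 - 7t + 6 = (t-1)(t-2)(t+3)$ the discriminant term is negative, so `s` is imaginary, yet the three real roots 1, 2, -3 come out with negligible imaginary parts.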

There is also a more ordinary approach to finding real roots of a quintic polynomial. (Like Cardano's formula, the Doyle-McMullen solution requires complex numbers and finds the complex roots equally easily.) Namely, you can use a cutoff procedure to switch from divide-and-conquer to Newton's method. For example, if your quintic $q(x)$ on a unit interval $[0,1]$ is $40-100x+x^5$, then it is clearly close enough to linear that Newton's method will work; you don't need divide-and-conquer. So if you have cut down the solution space to any interval, you can change the interval to $[0,1]$ (or maybe better $[-1,1]$), and then in the new variable decide whether the norms of the coefficients guarantee that Newton's method will converge. This method should only make you feel "a little dirty", because for general high-degree polynomials it's a competitive numerical algorithm. (Higher than quintic, maybe; Doyle-McMullen is really pretty good.)
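A minimal sketch of such a hybrid, assuming a bracketing interval with a sign change: take the Newton step whenever it stays inside the current bracket, and bisect otherwise. It is tried here on the quintic from the text.

```python
def newton_safe(f, df, lo, hi, tol=1e-12, max_iter=100):
    """Bracketed root finding: Newton steps when they stay inside the
    bracket, bisection otherwise (assumes f(lo) and f(hi) differ in sign)."""
    flo = f(lo)
    x = 0.5 * (lo + hi)
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        # Shrink the bracket, keeping the sign change inside it.
        if (fx < 0) == (flo < 0):
            lo, flo = x, fx
        else:
            hi = x
        step = fx / dfx if dfx != 0 else float('inf')
        x_new = x - step
        if not (lo < x_new < hi):     # Newton left the bracket: bisect
            x_new = 0.5 * (lo + hi)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# The quintic from the text: q(x) = 40 - 100 x + x^5 on [0, 1].
q  = lambda x: 40 - 100*x + x**5
dq = lambda x: -100 + 5*x**4
root = newton_safe(q, dq, 0.0, 1.0)
```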

See also this related MO question on the multivariate situation, which you would encounter for bicubic patches in 3D. The multivariate situation is pretty much the same: You have a choice between polynomial algebra and divide-and-conquer plus Newton's method. The higher the dimension, the more justification there is for the latter over the former.
