Numerical Instability in Projection-Based Barycentric Coordinates in High Dimensions

barycentric-coordinates · convex-hulls · projective-geometry · riemann-surfaces · simplex

Initially, I tried a method that checks whether a ray intersects a hyperplane. I got it working in 7D Cartesian coordinates, but I am running into what I believe are numerical instability issues, sometimes in 7D and more often in 8D (8D is what I need; see also the discussion with the author of [1]). One of the main symptoms is that sometimes no intersecting facet is detected (i.e., at least one coordinate in every set of barycentric coordinates is negative).

I implemented the alternative approach described in *Robust Barycentric Coordinates Computation of the Closest Point to a Hyperplane in $E^n$* (Skala, 2013), which I hoped would address the issue. However, the results were nearly identical (i.e., within approximately numerical precision of each other) if I first projected the datapoint onto the hyperplane per [1], and much worse if I used a unit-norm datapoint on the surface of the n-sphere (e.g., the 6-sphere or 7-sphere) as input. Ultimately, I'm trying to interpolate in octonion space (the 7-sphere) using barycentric coordinates, which has involved:

  1. generating points on the surface of an n-sphere (or hyperorthant) using the MATLAB File Exchange function hypersphere
  2. computing the convex hull of the points on the surface of the n-sphere with MATLAB's N-D convex hull function convhulln
  3. identifying the nearest neighbor to a random datapoint among the hull vertices using MATLAB's N-D nearest-neighbor search dsearchn
  4. identifying all simplices that have that nearest neighbor as a vertex
  5. projecting the point onto each hyperplane (i.e., simplex) and computing barycentric coordinates ($ \lambda $)
  6. identifying the hyperplane where all $ \lambda_i \geq 0 $
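For illustration, the six steps might be sketched in Python/SciPy rather than MATLAB (a stand-in, not the original implementation): `sphere_points` replaces the hypersphere function with normalized Gaussian sampling, `ConvexHull`/`cKDTree` stand in for convhulln/dsearchn, and step 5 computes the barycentric coordinates of the orthogonal projection via least squares on the facet's edge basis.

```python
import numpy as np
from scipy.spatial import ConvexHull, cKDTree

rng = np.random.default_rng(0)

def sphere_points(n_pts, dim):
    # Step 1: (approximately) uniform points on the unit (dim-1)-sphere,
    # standing in for the MATLAB File Exchange function hypersphere
    x = rng.standard_normal((n_pts, dim))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def bary_coords(V, p):
    # Step 5: barycentric coordinates of p's orthogonal projection onto
    # the hyperplane through the facet vertices V (one vertex per row);
    # least squares on the edge basis, so the coordinates sum to 1
    v0 = V[0]
    c, *_ = np.linalg.lstsq((V[1:] - v0).T, p - v0, rcond=None)
    return np.concatenate([[1.0 - c.sum()], c])

dim = 4                                   # 8 in the octonion application
pts = sphere_points(300, dim)             # step 1
hull = ConvexHull(pts)                    # step 2 (convhulln analogue)
p = sphere_points(1, dim)[0]              # random query datapoint
nn = cKDTree(pts).query(p)[1]             # step 3 (dsearchn analogue)
cand = hull.simplices[np.any(hull.simplices == nn, axis=1)]   # step 4
tol = 1e-12                               # steps 5-6: all lambda_i >= 0
containing = [s for s in cand if np.all(bary_coords(pts[s], p) >= -tol)]
```

As in the question, `containing` can come back empty when none of the candidate facets yields all-nonnegative coordinates.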

I think I have narrowed the issue down to step 2 or step 5: either the convex hull is incorrect due to numerical instability, or the projection of the point onto the correct hyperplane is not close enough to the hyperplane to yield all nonnegative barycentric coordinates. I'm leaning towards the latter, but even with increased precision (64-digit precision via MATLAB's variable-precision arithmetic, vpa), no intersecting facet is found.

Any suggestions for a more numerically stable projection of a ray onto a hyperplane would be much appreciated, as would any other general comments or suggestions on this approach.

EDIT: I have also considered using some kind of spatial indexing scheme to identify the intersecting facet, perhaps via linear inequalities in spherical coordinates or some type of projection (e.g., onto a hypercube or hyperbox). I think a spatial indexing scheme should be possible (and much faster), but the details remain a bit elusive to me.

Best Answer

The issue is actually with step #3: if the meshing is non-uniform, the simplices connected to the nearest neighbor will not necessarily contain the datapoint. As a simple example in 2D, consider a "thin" triangle (ABC), an adjacent approximately equilateral triangle (BCD), and a datapoint E residing in BCD close to the midpoint of BC:

2D Example

Vertex A is the nearest neighbor to E but is not a vertex of the triangle that contains the datapoint. Intuitively and empirically, this issue seems to be exacerbated in higher dimensions, which is what initially pointed me towards numerical instability.
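This failure mode can be checked numerically. The coordinates below are illustrative assumptions (a thin triangle ABC, a near-equilateral BCD, and E inside BCD near the midpoint of BC), sketched in Python rather than the MATLAB used elsewhere:

```python
import numpy as np

# Illustrative coordinates (assumed, not from the original post)
A = np.array([0.0, 0.05])    # apex of the "thin" triangle ABC
B = np.array([-1.0, 0.0])
C = np.array([1.0, 0.0])
D = np.array([0.0, -0.9])    # BCD is roughly equilateral
E = np.array([0.0, -0.02])   # inside BCD, near the midpoint of BC

def barycentric2d(p, v0, v1, v2):
    # Solve [v0 v1 v2; 1 1 1] @ lam = [p; 1] exactly (square 2D system)
    M = np.vstack([np.column_stack([v0, v1, v2]), np.ones(3)])
    return np.linalg.solve(M, np.append(p, 1.0))

verts = np.array([A, B, C, D])
nn = np.argmin(np.linalg.norm(verts - E, axis=1))  # nearest vertex: A
lam_ABC = barycentric2d(E, A, B, C)   # has a negative coordinate
lam_BCD = barycentric2d(E, B, C, D)   # all coordinates positive
```

So the nearest neighbor (A) belongs only to a triangle that does not contain E, while the containing triangle (BCD) does not touch A at all.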

Comparing Tomilov's approach with Skala's (2013): Tomilov's was faster (probably because I used symbolic computation of determinants to implement Skala's approach), and from what I can tell, numerical instability is not an issue in either case for the problem I presented, as long as the projection onto the hyperplane in Tomilov's approach is used. In higher dimensions, where numerical instability would eventually cause issues, it seems likely that using Tomilov's approach to project the datapoint onto the plane and then feeding the result into Skala's approach will be very robust, since it avoids the \ operator in favor of n-ary cross products. Omitting the extensive symbolic computation I used should also make it much faster.

To retain most of the speedup from considering only a subset of simplices per datapoint while still accounting for non-uniform meshes, I use the nearest-neighbor approach and repeat step #5 with all simplices whenever no intersecting simplex is found (keep in mind that the number of adjacent simplices tends to scale with dimension). A faster approach would be to progressively examine the next-nearest neighbor and its connected simplices, skipping simplices that have already been checked, until the intersecting simplex is identified.
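That progressive search could be sketched as follows, again as a Python/SciPy stand-in for the MATLAB implementation; `find_containing_facet` and its tolerance are hypothetical names and choices, not from the original code:

```python
import numpy as np
from scipy.spatial import ConvexHull

def bary_coords(V, p):
    # Barycentric coordinates of p's orthogonal projection onto the
    # hyperplane through the facet vertices V (one vertex per row)
    v0 = V[0]
    c, *_ = np.linalg.lstsq((V[1:] - v0).T, p - v0, rcond=None)
    return np.concatenate([[1.0 - c.sum()], c])

def find_containing_facet(pts, hull, p, tol=1e-10):
    # Walk vertices from nearest to farthest; test each vertex's facets
    # once, returning the first with all-nonnegative coordinates
    order = np.argsort(np.linalg.norm(pts - p, axis=1))
    checked = set()
    for v in order:
        for fi in np.flatnonzero(np.any(hull.simplices == v, axis=1)):
            if fi in checked:
                continue
            checked.add(fi)
            lam = bary_coords(pts[hull.simplices[fi]], p)
            if np.all(lam >= -tol):
                return fi, lam
    return None, None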
