[Math] Minimal distance between a point and a manifold M (M is the intersection of a ball and an ellipsoid)

na.numerical-analysis

Hi,

I have an elementary question concerning an extremal problem:

Given the manifold $M := \{(x_1, x_2, x_3) \mid x_1^2 + x_2^2 + x_3^2 = c \text{ and } \frac{x_1^2}{a_1^2} + \frac{x_2^2}{a_2^2} + \frac{x_3^2}{a_3^2} = d\}$, where $\max(a_1, a_2, a_3) > \sqrt{c} > \min(a_1, a_2, a_3)$, consider an arbitrary point $P = (\tilde{x}_1, \tilde{x}_2, \tilde{x}_3)$ with $P \notin M$.

Now I'm interested in the minimal distance between P and M. My idea was to use Lagrange multipliers:

Minimize $f := ||(\tilde{x}_1, \tilde{x}_2, \tilde{x}_3)-(x_1, x_2, x_3)||^2$ subject to the following constraints:

1) $\varphi_1 := x_1^2 + x_2^2 + x_3^2 - c = 0$

2) $\varphi_2 := \frac{x_1^2}{a_1^2} + \frac{x_2^2}{a_2^2} + \frac{x_3^2}{a_3^2} - d = 0$

=> $\Lambda (x_1, x_2, x_3, \lambda_1, \lambda_2)$ = f + $\lambda_1 \varphi_1 + \lambda_2 \varphi_2$.

Differentiating yields:

i) $2x_1 - 2\tilde{x}_1 + 2\lambda_1 x_1 + 2\lambda_2 \frac{x_1}{a_1^2} = 0$

ii) $2x_2 - 2\tilde{x}_2 + 2\lambda_1 x_2 + 2\lambda_2 \frac{x_2}{a_2^2} = 0$

iii) $2x_3 - 2\tilde{x}_3 + 2\lambda_1 x_3 + 2\lambda_2 \frac{x_3}{a_3^2} = 0$

iv) $x_1^2 + x_2^2 + x_3^2 = c$

v) $\frac{x_1^2}{a_1^2} + \frac{x_2^2}{a_2^2} + \frac{x_3^2}{a_3^2}= d$

vi) max($a_1, a_2, a_3$) $> \sqrt{c} >$ min($a_1, a_2, a_3$)

vii) $P \notin M$

Rearranging i)-iii) yields

$x_i \cdot \left(1 + \lambda_1 + \frac{\lambda_2}{a_i^2}\right) = \tilde{x}_i$, thus
$x_i = \frac{\tilde{x}_i}{1 + \lambda_1 + \frac{\lambda_2}{a_i^2}}$ (provided the factor in parentheses is nonzero; if $\tilde{x}_i = 0$, the factor itself may vanish and $x_i$ need not be zero)

Putting this into iv) and v) you get $\sum\limits_{i=1}^3 \left(\frac{\tilde{x}_i}{1+\lambda_1 + \frac{\lambda_2}{a_i^2}}\right)^2 = c$ and
$\sum\limits_{i=1}^{3} \frac{1}{a_i^2}\left(\frac{\tilde{x}_i}{1+\lambda_1 + \frac{\lambda_2}{a_i^2}}\right)^2 = d$

Now I wish to solve this for the variables $\lambda_1, \lambda_2, x_1, x_2, x_3$, which depend on $a_1, a_2, a_3, c, d, \tilde{x}_1, \tilde{x}_2$ and $\tilde{x}_3$.
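Concretely, the reduced $2 \times 2$ system in $(\lambda_1, \lambda_2)$ can be handed to a standard root finder from many starting values. Here is a sketch in Python with NumPy/SciPy; the values of $a_i$, $c$, $d$ and $P$ are made-up example data, and the reduction assumes every $\tilde{x}_i \neq 0$ (otherwise the factor $1+\lambda_1+\lambda_2/a_i^2$ may vanish and $x_i$ is not determined by the formula):

```python
import numpy as np
from scipy.optimize import fsolve

a = np.array([1.0, 2.0, 3.0])      # example semi-axes (assumed data)
c, d = 4.0, 1.0                    # sqrt(c) = 2 lies between min(a) and max(a)
p = np.array([3.0, 1.0, 0.5])      # example point P, all components nonzero

def F(lam):
    """Residuals of constraints iv), v) after substituting x_i(lambda)."""
    l1, l2 = lam
    b = 1.0 + l1 + l2 / a**2       # the factors 1 + lambda_1 + lambda_2/a_i^2
    x = p / b
    return [np.sum(x**2) - c, np.sum(x**2 / a**2) - d]

# Multi-start root finding: collect every converged solution.
solutions = []
rng = np.random.default_rng(0)
for _ in range(80):
    start = rng.uniform(-6.0, 6.0, size=2)
    sol, info, ier, _ = fsolve(F, start, full_output=True)
    if ier == 1 and max(abs(r) for r in F(sol)) < 1e-9:
        solutions.append(sol)

# Recover the candidate points x and their distances to P.
candidates = [p / (1.0 + l1 + l2 / a**2) for l1, l2 in solutions]
dists = [np.linalg.norm(x - p) for x in candidates]
print(sorted(set(round(t, 6) for t in dists)))
```

As the printout shows, several distinct critical points typically survive, which is exactly the difficulty: each converged $(\lambda_1, \lambda_2)$ is only a stationary point, not necessarily the minimum.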

I had no idea how to calculate an exact solution, so I wanted to use Newton's Method in order to calculate $\lambda_1$ and $\lambda_2$, but I unfortunately did not succeed in achieving the minimum of f, because there are many different solutions for the $\lambda_i$ and I don't know how I can be sure that Newton's Method found the minimal distance between P and M.
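One workaround I considered is to bypass the $\lambda$-system entirely and minimize $f$ directly with a constrained optimizer from many random starting points, keeping the best feasible result. A sketch in Python with SciPy's SLSQP; the numbers for $a$, $c$, $d$ and $P$ are made-up test data (for this particular $P$ on a coordinate axis the minimal distance can be worked out by hand as $\sqrt{13 - 6\sqrt{5/8}} \approx 2.873$, which gives an independent check):

```python
import numpy as np
from scipy.optimize import minimize

a = np.array([1.0, 2.0, 3.0])   # example semi-axes: sqrt(c) = 2 is between 1 and 3
c, d = 4.0, 1.0
p = np.array([3.0, 0.0, 0.0])   # example point P

f = lambda x: np.sum((x - p) ** 2)          # squared distance to P
constraints = [
    {"type": "eq", "fun": lambda x: np.sum(x ** 2) - c},            # sphere
    {"type": "eq", "fun": lambda x: np.sum(x ** 2 / a ** 2) - d},   # ellipsoid
]

best = None
rng = np.random.default_rng(0)
for _ in range(40):
    x0 = rng.normal(size=3)
    x0 *= np.sqrt(c) / np.linalg.norm(x0)   # random start on the sphere
    res = minimize(f, x0, method="SLSQP", constraints=constraints)
    if res.success and (best is None or res.fun < best.fun):
        best = res

dist = np.sqrt(best.fun)
print(dist, best.x)
```

Keeping the best of many starts guards against landing on a maximum or saddle, but it still gives no certificate that the global minimum was found.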

Best Answer

The Lagrange method is based on the fact that extrema can occur only at points where the gradient of $f$ is a linear combination of the gradients of the constraints -- i.e. at points where $\{\nabla f, \nabla g_1, \nabla g_2\}$ is linearly dependent with a nontrivial coefficient in front of $\nabla f$. If the constraints are independent ($\nabla g_1 \neq c\, \nabla g_2$), then you can test this linear dependence by the determinant of the matrix $\begin{pmatrix} \partial_x f & \partial_y f & \partial_z f \\ \partial_x g_1 & \partial_y g_1 & \partial_z g_1 \\ \partial_x g_2 & \partial_y g_2 & \partial_z g_2 \end{pmatrix}$. This gives you a third equation to go with your constraints $g_1 = 0$, $g_2 = 0$, and the resulting system may be more amenable to numerical solution.

I've tried solving the resulting equations in Maple and it did produce a result, but it is too long to display. For example, the $z$ variable is a root of a polynomial of degree 8 with coefficients that are polynomial expressions of degree up to 16. One can go further and try to find a Gröbner basis for this system of polynomial equations, but I don't think that it will lead to anything more useful than a direct numerical solution.
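The determinant condition is easy to try numerically. A sketch in Python with NumPy/SciPy, using made-up example data for $a$, $c$, $d$ and $P$: the three equations $g_1 = 0$, $g_2 = 0$, $\det = 0$ are handed to a root finder from many random starts, and the minimum is read off from the surviving candidates:

```python
import numpy as np
from scipy.optimize import fsolve

a = np.array([1.0, 2.0, 3.0])   # example semi-axes (assumed data)
c, d = 4.0, 1.0
p = np.array([3.0, 0.0, 0.0])   # example point P

def system(x):
    g1 = np.sum(x ** 2) - c                 # sphere constraint
    g2 = np.sum(x ** 2 / a ** 2) - d        # ellipsoid constraint
    grads = np.array([2 * (x - p),          # gradient of f
                      2 * x,                # gradient of g1
                      2 * x / a ** 2])      # gradient of g2
    return [g1, g2, np.linalg.det(grads)]

candidates = []
rng = np.random.default_rng(1)
for _ in range(100):
    x0 = rng.normal(size=3) * np.sqrt(c)
    sol, info, ier, _ = fsolve(system, x0, full_output=True)
    if ier == 1 and max(abs(r) for r in system(sol)) < 1e-8:
        candidates.append(sol)

dists = sorted(np.linalg.norm(x - p) for x in candidates)
print(dists[0], dists[-1])   # smallest and largest critical distances found
```

One caveat: the determinant also vanishes wherever $\nabla g_1$ and $\nabla g_2$ happen to be parallel, so the candidate list can contain points that are not genuine critical points of $f$; the smallest distance among the candidates is still a valid upper bound for the minimum.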
