[Math] Solving a System of Quadratic Equations

mathematical-software, na.numerical-analysis, polynomials, quadratic-forms

I have many polynomial equations in many variables which I want to jointly minimize, in a mean-square sense, though any reasonable measure that rewards driving all the quantities to zero would do. For example, I am looking to make the 6 equations below as "small" as possible (a–j are unknown real numbers; g happens not to appear).

This particular example probably does have a solution where all six equations are zero, but I also have cases with no zero solution, so I'd rather not take the "repeatedly eliminate variables and solve for a quadratic root" approach. (That route also takes too long; is there even any machine that could find a full zero of these equations within 10 minutes?) I'm thinking there might be some software tool that considers the "terrain" smartly and minimizes locally on many global fronts… or maybe that is impractical. So, is there a free math tool (like Sage) that can minimize this for me, and be certain that no other point is better within some tolerance? I'm open to theoretical advice, but I suspect the options will all look like brute force. (A baseline attempt with a plain local least-squares solver is sketched right after the equations below.)

Should I give up if I need to minimize a similar set of equations within 10 minutes on one machine?

a^2 + b^2 + c^2 - 4.52
0.136*a^2 - 0.15*a + d^2 + e^2 + f^2 - 3.84
1.12*a^2 + 0.415*a + b^2 - 2.0*b*d + c^2 - 2.0*c*e + d^2 + e^2 + f^2 - 0.593
0.602*a^2 - 0.0411*a*b - 0.851*a*d - 0.94*a + 0.634*b^2 - 0.489*b*d + 0.219*b + 0.407*d^2 + 0.588*d + h^2 + i^2 + j^2 - 0.612
0.0676*a^2 - 0.0495*a*b + 0.258*a*d + 0.155*a + 0.095*b^2 + 0.14*b*d + 0.157*b + c^2 - 2.0*c*h + 0.407*d^2 + 0.588*d + h^2 + i^2 + j^2 - 2.0
0.813*a^2 - 0.192*a*b - 0.843*a*d - 1.01*a + 0.634*b^2 - 2.03*b*d + 0.302*b + 2.04*d^2 + 0.212*d + e^2 - 2.0*e*h + f^2 - 2.0*f*i + h^2 + i^2 + j^2 - 2.76
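
To make the mean-square target concrete, here is a minimal sketch using SciPy's `least_squares` (free, like Sage) on exactly these six residuals. Nothing below is from the original post: `least_squares` is a purely local solver, and the random multistart loop is my own added assumption about how one might use it; it carries no certificate that no better point exists.

```python
# Minimal local least-squares sketch for the six residuals above.
# least_squares minimizes 0.5 * sum(residuals(v)**2) from a starting point.
import numpy as np
from scipy.optimize import least_squares

def residuals(v):
    a, b, c, d, e, f, h, i, j = v  # note: no g, it does not appear
    return np.array([
        a**2 + b**2 + c**2 - 4.52,
        0.136*a**2 - 0.15*a + d**2 + e**2 + f**2 - 3.84,
        1.12*a**2 + 0.415*a + b**2 - 2.0*b*d + c**2 - 2.0*c*e
            + d**2 + e**2 + f**2 - 0.593,
        0.602*a**2 - 0.0411*a*b - 0.851*a*d - 0.94*a + 0.634*b**2
            - 0.489*b*d + 0.219*b + 0.407*d**2 + 0.588*d
            + h**2 + i**2 + j**2 - 0.612,
        0.0676*a**2 - 0.0495*a*b + 0.258*a*d + 0.155*a + 0.095*b**2
            + 0.14*b*d + 0.157*b + c**2 - 2.0*c*h + 0.407*d**2
            + 0.588*d + h**2 + i**2 + j**2 - 2.0,
        0.813*a**2 - 0.192*a*b - 0.843*a*d - 1.01*a + 0.634*b**2
            - 2.03*b*d + 0.302*b + 2.04*d**2 + 0.212*d + e**2
            - 2.0*e*h + f**2 - 2.0*f*i + h**2 + i**2 + j**2 - 2.76,
    ])

# Local solver only: restart from many random points and keep the best run.
rng = np.random.default_rng(0)
best = min((least_squares(residuals, rng.uniform(-3, 3, 9)) for _ in range(200)),
           key=lambda r: r.cost)
print(best.cost, best.x)  # cost = 0.5 * sum of squared residuals
```

Each restart on a 9-variable problem should take milliseconds, so a few hundred restarts fit easily inside the 10-minute budget; what this cannot do is certify global optimality, which is where the answer below comes in.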

Best Answer

There is one fancy way specific to the quadratics. Suppose you want to minimize $\sum_n Q_n(x)^2$ where the $Q_n$ are quadratic forms. Set the problem up as $$\sum_n y_n^2\to \min,\qquad y_n-Q_n(x)=0,$$ and consider both the objective and the conditions as quadratic forms of $z=(x,y)$. The Lagrange multiplier theorem tells us that if you have a local minimum of $F(z)$ under the conditions $G_n(z)=0$, then we can find $q_n\in\mathbb R$ such that $$F_q(z)=F(z)+\sum_n q_n G_n(z)=(A(q)z,z)-2(B(q),z)+c$$ attains a global minimum at the same point. (Note that $c$ does not depend on $q=(q_n)$!)

Moreover, the global minimum of the original problem corresponds to the $q$ at which $(A(q)^{-1}B(q),B(q))$ is minimal, and both the domain of admissible $q$ (those with $A(q)\ge 0$) and the functional $q\mapsto (A(q)^{-1}B(q),B(q))$ are convex, so the methods of convex optimization apply. The catch is that the Lagrange problem degenerates when the original problem has multiple solutions, so you'll easily determine the minimal value of $F(z)$ but not immediately the minimizing $z$ itself; that would require tracing the degenerate plane. However, this is quick enough and easy to program. Whether it has been done already in a standard package, I cannot tell. I suggested this scheme to someone on MSE for a very particular case and he seemed to be happy with it: https://math.stackexchange.com/questions/468576/least-squares-fit-with-a-trick/
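
A minimal sketch of that scheme, under my own reading and notation (nothing below is from the answer itself): write $Q_n(x)=x^\top P_n x+r_n^\top x+s_n$. Then the $x$-block of $A(q)$ is $-\sum_n q_n P_n$, so admissibility $A(q)\ge 0$ amounts to $\lambda_{\max}\!\left(\sum_n q_n P_n\right)\le 0$, and the dual value can be maximized with an off-the-shelf constrained optimizer. The two-circle toy instance and the SLSQP-plus-eigenvalue-constraint choice are mine.

```python
# Hedged sketch of the dual scheme on a toy instance (my own notation):
# Q_n(x) = x^T P_n x + r_n^T x + s_n, and we bound min_x sum_n Q_n(x)^2.
import numpy as np
from scipy.optimize import minimize

# Toy data: two non-intersecting unit circles,
# Q1 = x1^2 + x2^2 - 1 and Q2 = (x1 - 3)^2 + x2^2 - 1.
P = [np.eye(2), np.eye(2)]
r = [np.zeros(2), np.array([-6.0, 0.0])]
s = [-1.0, 8.0]
EPS = 1e-9

def pieces(q):
    M = sum(qn * Pn for qn, Pn in zip(q, P))      # quadratic part of sum q_n Q_n
    t = sum(qn * rn for qn, rn in zip(q, r))      # linear part
    sigma = sum(qn * sn for qn, sn in zip(q, s))  # constant part
    return M, t, sigma

def dual_value(q):
    # min_z F_q(z): minimizing over y contributes -|q|^2/4; minimizing over x
    # is finite only on the admissible cone, where M(q) is negative definite.
    M, t, sigma = pieces(q)
    if np.linalg.eigvalsh(M).max() > -EPS:
        return -1e10                               # outside the cone: no bound
    return -sigma - q @ q / 4.0 + 0.25 * t @ np.linalg.solve(M, t)

cone = {'type': 'ineq',                            # 'ineq' means fun(q) >= 0
        'fun': lambda q: -EPS - np.linalg.eigvalsh(pieces(q)[0]).max()}
res = minimize(lambda q: -dual_value(q), x0=np.array([-1.0, -1.0]),
               constraints=[cone], method='SLSQP')

M, t, _ = pieces(res.x)
x_rec = -0.5 * np.linalg.solve(M, t)               # x-part of argmin of F_q
primal = sum((x_rec @ Pn @ x_rec + rn @ x_rec + sn) ** 2
             for Pn, rn, sn in zip(P, r, s))
print(f"lower bound {dual_value(res.x):.4f}, value at recovered x {primal:.4f}")
# Both should come out near 3.125 with x_rec ~ (1.5, 0).  When the original
# problem has several minimizers, A(q) degenerates at the optimum and z must
# be traced on the degenerate subspace, as the answer warns.
```

By weak duality the printed value is a certified lower bound on $\min_x\sum_n Q_n(x)^2$; the answer's convexity claim is what lets a convex solver find the best such bound. The same sketch should apply to the asker's system with nine variables and six multipliers, modulo the data entry for $P_n$, $r_n$, $s_n$.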
