[Math] How to find solutions for four polynomial equations with four unknown variables using RESULTANT THEORY

polynomials, roots, systems-of-equations

Can I use resultant theory (or polynomial resultant method) to find solutions for FOUR simultaneous polynomial equations with FOUR unknown variables?

So far, I have only found examples that use TWO equations in TWO unknown variables. I also saw an example with THREE equations in THREE unknowns, but there the first unknown was easily expressed as a function of the other two variables.

The cases that I have come across for polynomials $f_1, f_2, …, f_n $ are

A) Solve: $f_1(x_1, x_2)=0$ and $f_2(x_1, x_2)=0$

B) Solve: $f_1(x_1, x_2, x_3)=0$ and $f_2(x_1, x_2, x_3)=0$ and
$x_1=g(x_2, x_3)$. Here, $g(\cdot)$ is a known function

I am looking for a procedure or worked example for solving cases like these with the resultant method:

C) Solve: $f_1(x_1, x_2, x_3)=0$ and $f_2(x_1, x_2, x_3)=0$
and $f_3(x_1, x_2, x_3)=0$

D) Solve: $f_1(x_1, x_2, x_3, x_4 )=0$ and $f_2(x_1, x_2, x_3, x_4)=0$
and $f_3(x_1, x_2, x_3, x_4)=0$ and $f_4(x_1, x_2, x_3, x_4)=0$

Thank you in advance for your kind help.

Best Answer

Short answer: yes; have a look at, e.g., Algorithms in Real Algebraic Geometry by Basu, Pollack and Roy, or Fundamental Problems of Algorithmic Algebra by Yap, where the CAD approach to quantifier elimination and U-resultants are introduced.

(Slightly) longer answer, with (hopefully) some intuition on why it works: When computing the resultant w.r.t. $y$ ($x$) of a system of two equations in two variables, you eliminate one variable and conceptually project the solutions in $\mathbb C^2$ onto their $x$- ($y$-)values, which are the zeros of the respective resultant polynomial. You then construct a grid of candidate solutions from those coordinates, and check which points are actually proper solutions.
The same idea allows you to eliminate any variable from any pair of equations in any number $n$ of variables, and the roots of the $(n-1)$-variate elimination polynomial determine the candidate set for the remaining $n-1$ variables. Do this for all pairs $(f_1, f_2), \dots, (f_{n-1}, f_n)$, and you end up with $n-1$ polynomials in $n-1$ variables. Work your way down until only one variable is left, and determine the candidates for this projection; then repeat the descent with a different elimination order to get the next variable.
As in the bivariate case, once you have all the candidates for each coordinate, set up the ($n$-dimensional) grid and verify for each point in the grid whether it is a solution of the original system or not.
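The bivariate project-and-check procedure can be sketched as follows. I use SymPy rather than Sage here so the snippet runs on plain Python, and the system $x^2+y^2=5$, $xy=2$ (whose solutions are $(\pm1,\pm2)$ and $(\pm2,\pm1)$ with matching signs) is just an illustrative choice:

```python
from sympy import symbols, resultant, roots

x, y = symbols('x y')
f1 = x**2 + y**2 - 5
f2 = x*y - 2

# Eliminate y: res(f1, f2; y) is univariate in x, and its roots are the
# x-coordinates of all common solutions (the projection onto the x-axis).
Rx = resultant(f1, f2, y)
# Likewise eliminate x to get the candidate y-coordinates.
Ry = resultant(f1, f2, x)

x_cands = list(roots(Rx, x))  # roots() returns {root: multiplicity}
y_cands = list(roots(Ry, y))

# Build the grid of candidate points and keep only the proper solutions.
sols = [(a, b) for a in x_cands for b in y_cands
        if f1.subs({x: a, y: b}) == 0 and f2.subs({x: a, y: b}) == 0]
print(sols)
```

Here `Rx` works out to $x^4-5x^2+4$, so the grid has $4 \times 4 = 16$ candidate points, of which only four survive the verification step.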


To your comment: For simplicity, I sketch the trivariate case over an algebraically closed base field $k$ (e.g., $k=\mathbb{C}$) and assume generic position (that is, there are no solutions at infinity and the input polynomials are pairwise coprime).

The resultant $R_{i,j} = \operatorname{res}(f_i,f_j; x_3) \in k[x_1,x_2]$ vanishes at $(\alpha,\beta)$ if and only if there exists a $\gamma \in k$ such that $f_i(\alpha,\beta,\gamma) = f_j(\alpha,\beta,\gamma) = 0$. Hence, the roots of the resultant are the projections of the common zeros of $f_i$ and $f_j$ onto the $x_1x_2$-plane. Those are, in general, different values for $R_{1,2}$, $R_{1,3}$ and $R_{2,3}$.
One step further, $R_{(i,j),(i',j')} = \operatorname{res}(R_{i,j},R_{i',j'}; x_2)$ vanishes at $\alpha$ iff there is a $\beta$ such that $R_{i,j}(\alpha,\beta) = R_{i',j'}(\alpha,\beta) = 0$. This in turn means that there are $\beta, \gamma$ and $\gamma'$ such that $f_i(\alpha,\beta,\gamma) = f_j(\alpha,\beta,\gamma) = 0$ and $f_{i'}(\alpha,\beta,\gamma') = f_{j'}(\alpha,\beta,\gamma') = 0$, but $\gamma$ does not need to be the same value as $\gamma'$! Hence, the $R_{(i,j),(i',j')}$ will only share some roots.

However, the projections of the common solutions of the entire system $f_1 = f_2 = f_3 = 0$ are necessarily roots of all resultant polynomials occurring throughout the procedure. In particular, if you descend to the univariate projections onto the $x_i$-axes, the gcd of all the univariate elimination polynomials w.r.t. $x_i$ will vanish at each projection of a solution onto the respective coordinate.
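As a concrete check of this two-level cascade, here is a small SymPy sketch (SymPy instead of Sage so it runs on plain Python; the system below, whose solutions are the permutations of $(1,2,3)$, is just an illustrative choice). It computes the first-level resultants $R_{i,j}$, the second-level resultants, and the gcd of the univariate elimination polynomials in $x$:

```python
from sympy import symbols, resultant, gcd, factor

x, y, z = symbols('x y z')
# A system whose common solutions are the six permutations of (1, 2, 3).
f1 = x + y + z - 6
f2 = x*y*z - 6
f3 = x**2 + y**2 + z**2 - 14

# First level: eliminate z from every pair (projections onto the xy-plane).
R12 = resultant(f1, f2, z)
R13 = resultant(f1, f3, z)
R23 = resultant(f2, f3, z)

# Second level: eliminate y from every pair of first-level resultants,
# leaving univariate polynomials in x. Each contains the projections of the
# true solutions, plus spurious roots where the gamma-values disagree.
E1 = resultant(R12, R13, y)
E2 = resultant(R12, R23, y)
E3 = resultant(R13, R23, y)

# The gcd retains (at least) the common part, so it vanishes at the
# x-coordinates 1, 2, 3 of the true solutions.
g = gcd(gcd(E1, E2), E3)
print(factor(g))
```

Note that the gcd may still carry spurious factors shared by all the $E_i$; it is only guaranteed to vanish at the genuine projections, which is why the final grid verification step remains necessary.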


Below is some proof-of-concept Sage code that demonstrates the idea with very basic means:

n = 3 # number of variables (and equations)
d = 3 # degrees of the polynomials (a scalar, or a list with one entry per polynomial)
n, d = 2, 4
n, d = 4, [2,2,1,1] # the last assignment wins; pick the configuration to run

if not isinstance (d, list):
    d = [d]*n

prec = 1024   # precision used for the verification of solutions (exact computation is too expensive without additional machinery)
CIF = ComplexIntervalField (prec)

R = PolynomialRing (ZZ, 'x', n)
x = R.gens()
f = [ R.random_element (x=-256, y=256, degree=d[i], terms=binomial (n+d[i],n)) for i in range(n) ]

print ("Input:\n================================================================")
for i in range(n):
    print ("f{} = {}".format (i, f[i]))

import itertools
from sage.rings.polynomial.real_roots import real_roots
from sage.rings.polynomial.complex_roots import complex_roots

# isolator = lambda p: real_roots (p, retval='algebraic_real') # exact, but real roots only
# isolator = lambda p: p.roots (RealIntervalField (prec)) # much faster, but real roots only and without guarantees
# isolator = lambda p: complex_roots (p, retval='algebraic_complex') # too slow
isolator = lambda p: p.roots (CIF) # much faster, but without guarantees

res = { (i, ()): f[i] for i in range(n) }

for elim_order in itertools.permutations (range(n)):
    print ("Computing resultant polynomials for elimination order {}...".format (elim_order))
    eliminated = []
    pairs = list (itertools.combinations (range(n), 2))
    for k in elim_order[:-1]:
        eliminated.append (k)
        for pair in pairs:
            p = res[pair[0], tuple(eliminated[:-1])]
            q = res[pair[1], tuple(eliminated[:-1])]
            res[pair, tuple(eliminated)] = p.resultant (q, x[k])
        pairs = list (itertools.combinations (pairs, 2))

elim = {}
for i in range(n):
    print ("Computing gcd of univariate elimination polynomials for variable %s..." % x[i])
    elim[i] = gcd (v for k,v in res.items()
                   if len (k[1]) == n-1 # all but one variable eliminated
                   and i not in k[1])   # x_i left

candidates = {}
for i in range(n):
    print ("Computing roots of the elimination polynomial for %s..." % x[i])
    candidates[i] = isolator (elim[i].univariate_polynomial())

print ("Computing the values of the system's polynomials at the candidate solutions...")
sols = []
for cand in itertools.product (*candidates.values()):
    coord = [c[0] for c in cand]         # candidate coordinates
    coord_CIF = [CIF (c) for c in coord]
    mult = [c[1] for c in cand]          # multiplicities of the univariate roots
    values_CIF = [ f[i](*coord_CIF) for i in range(n) ]
    print ("\tF{} ~= {}".format ([ComplexIntervalField(32)(c) for c in coord_CIF],
                                 [ComplexIntervalField(32)(v) for v in values_CIF]))
    if all (v.contains_zero() for v in values_CIF):
        print ("\t\t=> report as solution")
        sols.append ((coord, mult))

print ("\n(Approximately verified) solutions:\n================================================================")
for sol in sols:
    print ("{}; mult {}".format ([ComplexIntervalField(32)(c) for c in sol[0]], sol[1]))