Short answer: Yes; have a look at, e.g., Algorithms in Real Algebraic Geometry by Basu, Pollack, and Roy, or Fundamental Problems of Algorithmic Algebra by Yap, where the CAD approach to quantifier elimination and U-resultants are introduced.
(Slightly) longer answer, with (hopefully) some intuition on why it works: When you compute the resultant with respect to $y$ (resp. $x$) of a system of two equations in two variables, you eliminate that variable and conceptually project the solutions in $\mathbb C^2$ onto their $x$-values (resp. $y$-values), which are exactly the zeros of the corresponding resultant polynomial. You then construct a grid of candidate solutions from those coordinate values and check which of the grid points are actually solutions of the system.
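To make the bivariate picture concrete, here is a small sketch in plain Python with SymPy (not part of the original answer; the circle/hyperbola system is a made-up toy example):

```python
from sympy import symbols, resultant, roots

x, y = symbols('x y')
f1 = x**2 + y**2 - 5   # toy example: a circle
f2 = x*y - 2           # toy example: a hyperbola

# Eliminating y projects the common zeros onto their x-values, and vice versa.
res_y = resultant(f1, f2, y)   # a polynomial in x alone: x**4 - 5*x**2 + 4
res_x = resultant(f1, f2, x)   # a polynomial in y alone

x_cands = list(roots(res_y, x))   # candidate x-coordinates
y_cands = list(roots(res_x, y))   # candidate y-coordinates

# Build the candidate grid and keep only the points that solve the full system.
sols = [(a, b) for a in x_cands for b in y_cands
        if f1.subs({x: a, y: b}) == 0 and f2.subs({x: a, y: b}) == 0]
print(sols)   # 4 of the 16 grid points survive
```

Here the grid has $4 \times 4 = 16$ candidate points, but only the four points $(\pm 1, \pm 2)$, $(\pm 2, \pm 1)$ with matching signs solve both equations.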
The same idea allows you to eliminate any variable from any pair of equations in any number $n$ of variables, and the solutions of the $(n-1)$-variate elimination polynomial determine the candidate set of solutions for those $n-1$ variables. Do this for all pairs $(f_1, f_2), \dots, (f_{n-1}, f_n)$, and you end up with $n-1$ polynomials in $n-1$ variables. Work your way down until only one variable is left, and determine the candidates for this projection; then repeat the descent with a different elimination order to get the candidates for the next variable.
As in the bivariate case, once you have all the candidates for each coordinate, set up the ($n$-dimensional) grid and verify for each point in the grid whether it is a solution of the original system or not.
To your comment: For simplicity, I sketch the trivariate case over an algebraically closed base field $k$ (e.g., $k=\mathbb{C}$) and assume generic position (that is, no solutions at infinity, and pairwise coprime input polynomials).
The resultant $R_{i,j} = \operatorname{res}(f_i,f_j; x_3) \in k[x_1,x_2]$ vanishes at $(\alpha,\beta)$ if and only if there exists a $\gamma \in k$ such that $f_i(\alpha,\beta,\gamma) = f_j(\alpha,\beta,\gamma) = 0$. Hence, the roots of the resultant are the projections of the common zeros of $f_i$ and $f_j$ onto the $x_1x_2$-plane. These are, in general, different values for $R_{1,2}$, $R_{1,3}$ and $R_{2,3}$.
One step further, $R_{(i,j),(i',j')} = \operatorname{res}(R_{i,j},R_{i',j'}; x_2)$ vanishes at $\alpha$ iff there is a $\beta$ such that $R_{i,j}(\alpha,\beta) = R_{i',j'}(\alpha,\beta) = 0$. This in turn means that there are $\beta, \gamma$ and $\gamma'$ such that $f_i(\alpha,\beta,\gamma) = f_j(\alpha,\beta,\gamma) = 0$ and $f_{i'}(\alpha,\beta,\gamma') = f_{j'}(\alpha,\beta,\gamma') = 0$, but $\gamma$ does not need to be the same value as $\gamma'$! Hence, the $R_{(i,j),(i',j')}$ will only share some roots.
However, the projections of the common solutions of the entire system $f_1 = f_2 = f_3 = 0$ are necessarily roots of all resultant polynomials occurring throughout the procedure. In particular, if you descend down to the univariate projections onto the $x_i$-axes, the gcd of all the univariate elimination polynomials w.r.t. $x_i$ will vanish at each projection of a solution onto the respective coordinate.
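The trivariate mechanism, including the final gcd step, can be sketched with SymPy as well (again my own toy system, chosen so that the common zeros are the permutations of $(1,2,3)$ and the position is generic enough):

```python
from sympy import symbols, resultant, gcd

x, y, z = symbols('x y z')
# Toy system: elementary symmetric polynomials, whose common zeros
# are exactly the six permutations of (1, 2, 3).
f1 = x + y + z - 6
f2 = x*y + y*z + z*x - 11
f3 = x*y*z - 6

# First projection: eliminate z pairwise.
R12 = resultant(f1, f2, z)
R13 = resultant(f1, f3, z)
R23 = resultant(f2, f3, z)

# Second projection: eliminate y pairwise.
S1 = resultant(R12, R13, y)
S2 = resultant(R12, R23, y)
S3 = resultant(R13, R23, y)

# The gcd of the univariate elimination polynomials vanishes at every
# x-coordinate of a true solution, i.e., at x in {1, 2, 3}; the individual
# S_i may have additional, spurious roots.
g = gcd(gcd(S1, S2), S3)
print([g.subs(x, v) for v in (1, 2, 3)])   # all three values are 0
```

Note that the $S_i$ individually can vanish at extraneous candidates (the "$\gamma \neq \gamma'$" phenomenon above); taking the gcd filters many of them out, and the grid verification removes the rest.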
Below you can find some proof-of-concept Sage code that illustrates the idea with very basic means:
n = 3    # number of variables (and equations)
d = 3    # degrees of the polynomials
# Alternative configurations (the last assignment wins):
n, d = 2, 4
n, d = 4, [2, 2, 1, 1]
if not isinstance(d, list):
    d = [d] * n

prec = 1024  # precision used for the verification of solutions
             # (exact computation is too expensive without additional machinery)
CIF = ComplexIntervalField(prec)

R = PolynomialRing(ZZ, 'x', n)
x = R.gens()
f = [R.random_element(x=-256, y=256, degree=d[i], terms=binomial(n + d[i], n))
     for i in range(n)]

print("Input:\n================================================================")
for i in range(n):
    print("f{} = {}".format(i, f[i]))

import itertools
from sage.rings.polynomial.real_roots import real_roots
from sage.rings.polynomial.complex_roots import complex_roots

# Choices for the univariate root isolator (the last assignment wins):
# isolator = lambda p: real_roots(p, retval='algebraic_real')        # certified, but real roots only
# isolator = lambda p: p.roots(RealIntervalField(prec))              # much faster, but without guarantees
# isolator = lambda p: complex_roots(p, retval='algebraic_complex')  # certified, but too slow
isolator = lambda p: p.roots(CIF)  # much faster, but without guarantees

res = {(i, ()): f[i] for i in range(n)}
for elim_order in itertools.permutations(range(n)):
    print("Computing resultant polynomials for elimination order {}...".format(elim_order))
    eliminated = []
    pairs = list(itertools.combinations(range(n), 2))
    for k in elim_order[:-1]:
        eliminated.append(k)
        for pair in pairs:
            p = res[pair[0], tuple(eliminated[:-1])]
            q = res[pair[1], tuple(eliminated[:-1])]
            res[pair, tuple(eliminated)] = p.resultant(q, x[k])
        pairs = list(itertools.combinations(pairs, 2))

elim = {}
for i in range(n):
    print("Computing gcd of univariate elimination polynomials for variable %s..." % x[i])
    elim[i] = gcd(v for k, v in res.items()
                  if len(k[1]) == n - 1   # all but one variable eliminated
                  and i not in k[1])      # x_i left

candidates = {}
for i in range(n):
    print("Computing roots of the elimination polynomial for %s..." % x[i])
    candidates[i] = isolator(elim[i].univariate_polynomial())

print("Computing the values of the system's polynomials at the candidate solutions...")
sols = []
for cand in itertools.product(*candidates.values()):
    coord = [c[0] for c in cand]
    mult = [c[1] for c in cand]
    coord_CIF = [CIF(c) for c in coord]
    values_CIF = [f[i](*coord_CIF) for i in range(n)]
    print("\tF{} ~= {}".format([ComplexIntervalField(32)(c) for c in coord_CIF],
                               [ComplexIntervalField(32)(v) for v in values_CIF]))
    if all(v.contains_zero() for v in values_CIF):
        print("\t\t=> report as solution")
        sols.append((coord, mult))

print("\n(Approximately verified) solutions:\n================================================================")
for sol in sols:
    print("{}; mult {}".format([ComplexIntervalField(32)(c) for c in sol[0]], sol[1]))
Best Answer
The diagonalization argument can indeed be generalized. Roughly, one can see all elements of $\mathbb{N}$ appear, in order, by running through the hyperplanes $X_1+\ldots+X_n = s$ for increasing $s$.
Now for the proof. Let $s=X_1+\ldots+X_n$. Then your function $f_n$ can be written as $$ f_n(X_1, \ldots, X_n) = \binom{s+n-1}{n} + f_{n-1}(X_1, \ldots, X_{n-1}), $$ with the conventions that $f_0 = 0$ and $\binom{m}{n} = 0$ for $m < n$ (so in particular $f_n(0,\ldots, 0) = 0$).
Claim: Fix $s\in \mathbb{N}$. Then $f_n$ induces a bijection $$ \Big\{ (X_1, \ldots, X_n)\in \mathbb{N}^n \ | \ X_1+\ldots+X_n = s \Big\} \xrightarrow{f_n} \Big[ \binom{s+n-1}{n}, \binom{s+n}{n} -1\Big], $$ where $[x,y]$ denotes the set of integers from $x$ to $y$; for $s=0$ the set on the right is $[0,0]$, since $\binom{n-1}{n} = 0$.
Proof of the claim. It is obviously true for $n=1$. Assume it is true for $n-1$. Let us show it is true for $n$.
For a fixed $s=X_1+\ldots+X_n$, the value $t=X_1 + \ldots + X_{n-1}$ can be anything from $0$ to $s$, with $X_n = s - t$ determined by $t$. By the hypothesis on $f_{n-1}$, each hyperplane $X_1+\ldots+X_{n-1} = t$ is mapped bijectively onto $\Big[\binom{t+n-2}{n-1}, \binom{t+n-1}{n-1}-1\Big]$, and as $t$ runs from $0$ to $s$ these intervals are consecutive and tile $\Big[0, \binom{s+n-1}{n-1}-1\Big]$. Hence $(X_1, \ldots, X_n) \mapsto f_{n-1}(X_1, \ldots, X_{n-1})$ induces a bijection $$ \Big\{ (X_1, \ldots, X_n)\in \mathbb{N}^n \ | \ X_1+\ldots+X_n = s \Big\} \longrightarrow \Big[ 0, \binom{s+n-1}{n-1} -1\Big]. $$ Thus $f_n$ induces a bijection from the set on the left to the interval $\Big[\binom{s+n-1}{n}, \binom{s+n-1}{n} + \binom{s+n-1}{n-1}-1\Big]$. Since $\binom{s+n-1}{n} + \binom{s+n-1}{n-1} = \binom{s+n}{n}$, the claim is proved.
The fact that $f_n$ is a bijection from $\mathbb{N}^n$ to $\mathbb{N}$ then follows immediately from the claim.
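For small $n$ and $s$, both the recursion and the claim are easy to check numerically; here is a quick Python sanity check (the function name f is my own choice):

```python
from itertools import product
from math import comb

def f(xs):
    """f_n(X_1, ..., X_n) = C(s + n - 1, n) + f_{n-1}(X_1, ..., X_{n-1}),
    with s = X_1 + ... + X_n and f_0 = 0 (math.comb already returns 0
    when s + n - 1 < n)."""
    n = len(xs)
    if n == 0:
        return 0
    s = sum(xs)
    return comb(s + n - 1, n) + f(xs[:-1])

n, smax = 3, 6
for s in range(smax + 1):
    # All lattice points on the hyperplane X_1 + ... + X_n = s.
    plane = [p for p in product(range(s + 1), repeat=n) if sum(p) == s]
    values = sorted(f(p) for p in plane)
    # The claim: f_n maps the hyperplane bijectively onto
    # [C(s + n - 1, n), C(s + n, n) - 1].
    assert values == list(range(comb(s + n - 1, n), comb(s + n, n)))
print("claim verified for n = {}, s <= {}".format(n, smax))
```

For $n=2$ this is exactly the classical Cantor pairing function: $f_2(a,b) = \binom{a+b+1}{2} + a$.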