[Math] Main open computational problems in quantifier elimination

ag.algebraic-geometry, computer-algebra, lo.logic, model-theory, real-algebraic-geometry

A theory is said to have quantifier elimination if every first-order formula in its language is equivalent, modulo the theory, to a quantifier-free formula, i.e., one without any $\forall$s or $\exists$s. An example is the theory of real closed fields (such as $\mathbb{R}$), considered with the field operations, equality ($=$) and the order relations ($<$, $>$).
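
For concreteness (a standard textbook-style illustration, not from the original post): over a real closed field, the quantifier in "the quadratic $x^2+bx+c$ has a root" can be traded for a sign condition on the discriminant,
$$\exists x\,\bigl(x^2+bx+c=0\bigr)\;\Longleftrightarrow\;b^2-4c\ge 0.$$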

Question: how fast can an algorithm that performs quantifier elimination be? By how much are current algorithms (such as those that proceed by cylindrical algebraic decomposition) worse than the best algorithms that are conceivably possible? What are, in brief, the main open computational problems in quantifier elimination?

(We can try to restrict the discussion to $\mathbb{R}$, though other "useful" theories also interest me.)

As far as I know, the situation is as follows: in general, there are first-order formulas in $k$ variables that are not equivalent to any quantifier-free formula of length less than $\exp(\exp(C k))$; this means, in particular, that the worst-case running time of a quantifier-elimination procedure has to be at least doubly exponential in $k$. This is matched by cylindrical-algebraic-decomposition algorithms (correct me if I am wrong). At the same time, if the original formula contains only $\exists$s or only $\forall$s, then an algorithm that is singly exponential in $k$ is known. (I'm going by a very quick reading of Basu, Pollack, Roy, Algorithms in Real Algebraic Geometry; all errors are my own.)

The second case (for which singly exponential bounds are known) is important, since it covers all statements of the form "prove that this formula holds for all $x_1, x_2,\dots, x_k$".
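
To make this fragment concrete, here is a minimal sketch (my illustration, not part of the original question) using the Z3 solver through its Python bindings; Z3's nlsat engine decides quantifier-free nonlinear real arithmetic, and a universal claim is checked by refuting the purely existential formula asserting a counterexample. The particular polynomial is an arbitrary choice.

```python
from z3 import Real, Solver, unsat

# Universal claim: for all real x,  x^4 - 2*x^2 + 1 >= 0  (it equals (x^2 - 1)^2).
# Decide it by refuting its negation, a purely existential formula:
# is there a real x with x^4 - 2*x^2 + 1 < 0 ?
x = Real('x')
s = Solver()
s.add(x**4 - 2*x**2 + 1 < 0)

# unsat = no counterexample exists, so the universal statement holds over the reals.
print(s.check() == unsat)   # expected: True
```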

Is this the end of the story, or is there a subarea where plenty of work could remain to be done?


Well, there seems to be real interest in this question, but no answer as such yet. Let me suggest what would be very nice as an answer: a few open problems on the subject, hard but not impracticable, with statements that are neat enough for mathematicians yet also close enough to actual practice that their solution would likely be useful.

For example: would reducing the existential theory of $\mathbb{R}$ to $k$-SAT be such a problem?

Best Answer

The problem with the current approach in this area, exemplified by the book by Basu, Pollack, and Roy, "Algorithms in Real Algebraic Geometry", is that even in the simplest case, checking non-emptiness of a real algebraic set $S$, the only well-analysed class of algorithms leads to chasing roots of a univariate polynomial whose degree is at least the number of connected components of $S$. Even worse, if $S=V_{\mathbb{R}}(f_1,\dots,f_k)$ then instead of working with the ideal $(f_1,\dots,f_k)\subseteq\mathbb{R}[x_1,\dots, x_n]$ one looks at $V_{\mathbb{R}}\left(\sum_{j=1}^k f_j^2\right)$, which doubles the degrees involved.
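
A minimal sketch of that degree blow-up, with arbitrarily chosen polynomials and using SymPy (my illustration, not part of the original answer):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Two defining polynomials of a real algebraic set S = V_R(f1, f2).
f1 = x**2 + y**2 - 1   # degree 2
f2 = x - y             # degree 1

# The standard trick: S is also the real zero set of the single polynomial g,
# but the degree doubles in the process.
g = sp.expand(f1**2 + f2**2)

print(sp.Poly(f1, x, y).total_degree())  # 2
print(sp.Poly(g, x, y).total_degree())   # 4
```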

An interesting topic might be to try to bridge this with the sums-of-squares-based methods in real algebraic geometry, which are computationally more feasible in practice because they rely on semidefinite programming. The latter can be read about in, e.g., "Semidefinite Optimization and Convex Algebraic Geometry".
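
To illustrate how a sums-of-squares question turns into a semidefinite feasibility problem, here is a small hand-rolled sketch (my example, assuming CVXPY with its bundled SDP solver; the polynomial $f(x)=x^4+4x^3+6x^2+4x+5$ and the monomial basis $(1,x,x^2)$ are arbitrary choices): $f$ is a sum of squares iff it admits a positive semidefinite Gram matrix in that basis.

```python
import cvxpy as cp

# Find a PSD Gram matrix Q with f(x) = z(x)^T Q z(x), where z(x) = (1, x, x^2)
# and f(x) = x^4 + 4x^3 + 6x^2 + 4x + 5.  Such a Q exists iff f is a sum of squares.
Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                        # Gram matrix must be positive semidefinite
    Q[2, 2] == 1,                  # coefficient of x^4
    2 * Q[1, 2] == 4,              # coefficient of x^3
    Q[1, 1] + 2 * Q[0, 2] == 6,    # coefficient of x^2
    2 * Q[0, 1] == 4,              # coefficient of x
    Q[0, 0] == 5,                  # constant term
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print(prob.status)   # 'optimal': a Gram matrix, hence an SOS certificate, exists
```

Factoring the resulting $Q=L^{\top}L$ then yields explicit squares, since $z^{\top}Qz=\|Lz\|^2$; SOS solvers automate exactly this translation at scale.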

There are many open problems in the latter itself, some of them of number-theoretic flavour. E.g., understand when a polynomial $f$ with rational coefficients $f=\sum_{j=1}^m h_j^2 \in\mathbb{Q}[x_1,\dots, x_n]$, with each $h_j\in\mathbb{R}[x_1,\dots, x_n]$ (i.e. $f$ is a sum of squares of polynomials with real coefficients) can be a sum of squares of polynomials with rational coefficients, $f=\sum_{j=1}^{m'} g_j^2$, with each $g_j\in\mathbb{Q}[x_1,\dots, x_n]$. The state of the art on this problem is here in the talk by C.Scheiderer.

A much more famous open problem in this area is the complexity of the feasibility problem for linear matrix inequalities: decide the emptiness of $$S(A_0,\dots,A_m):=\{x\in\mathbb{R}^m\mid A_0+\sum_{j=1}^m x_j A_j\succeq 0\},$$ where the $A_j$ are symmetric matrices and $B\succeq 0$ stands for "$B$ is positive semidefinite". See e.g. this. There has been no progress on this problem since Ramana's 1997 paper. A related problem is semidefinite programming, a.k.a. SDP: minimize a linear function over $S(A_0,\dots,A_m)$. It is a natural generalization of linear programming (LP), and has become very popular in various areas due to its expressive power. Chapter 2 of this book is an accessible introduction to SDP.
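
For concreteness, here is a toy instance of the feasibility problem, in the same CVXPY setting as above (my example, with made-up $2\times 2$ matrices). Note that numerical solvers answer such instances only up to a tolerance; the open problem is about exact decision in the bit model.

```python
import numpy as np
import cvxpy as cp

# Toy instance: is S(A0, A1, A2) = { x in R^2 : A0 + x1*A1 + x2*A2 is PSD } empty?
A0 = np.array([[1.0, 0.0],
               [0.0, -1.0]])
A1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
A2 = np.array([[0.0, 0.0],
               [0.0, 1.0]])

x = cp.Variable(2)
pencil = A0 + x[0] * A1 + x[1] * A2          # the matrix pencil A0 + sum_j x_j A_j
prob = cp.Problem(cp.Minimize(0), [pencil >> 0])
prob.solve()

print(prob.status)   # 'optimal' -> S is nonempty; 'infeasible' -> S is empty
print(x.value)       # any x with x[1] >= 1 + x[0]**2 is feasible for this pencil
```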

The relation between SDP and the feasibility problem for linear matrix inequalities is akin to the relation between LP and the feasibility problem for "ordinary" linear inequalities. Namely, one can solve the minimization problem efficiently iff one can solve the feasibility problem efficiently; this is well known as the "equivalence of optimization and separation". Khachiyan and Porkolab wrote a paper containing a number of constructions illustrating how much more delicate the sets $S(A_0,\dots,A_m)$ are compared to "ordinary" polyhedra.
