[Math] Generalization of the positive semidefinite Grothendieck inequality

fa.functional-analysis, inequalities

In a recent paper, S. Khot and A. Naor prove a natural generalization of the positive semidefinite Grothendieck inequality. Grothendieck showed that there exists a constant $K > 0$ such that for every $n \times n$ symmetric positive semidefinite matrix $A=[a_{ij}]$, the following inequality holds:

$$\max_{x_1,\ldots,x_n} \sum_{ij} a_{ij}x_i^Tx_j \le K \max_{\epsilon_1,\ldots,\epsilon_n \in \{-1,1\}} \sum_{ij}a_{ij} \epsilon_i\epsilon_j,$$
where each $x_i$ is a unit vector (in Euclidean norm, so that $x_i^Tx_i=1$).
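For concreteness, here is a small numerical sanity check of the classical statement (only an illustration, not part of the question as such; it uses the known sharp constant $K = \pi/2$ for the positive semidefinite case, a randomly generated PSD matrix, and random sampling of unit-vector configurations rather than solving the underlying semidefinite program exactly):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n))
A = B @ B.T  # random symmetric positive semidefinite matrix

# Right-hand side: brute force over all sign vectors eps in {-1, 1}^n.
rhs = max(np.array(eps) @ A @ np.array(eps)
          for eps in product([-1.0, 1.0], repeat=n))

# Left-hand side: sample unit-vector configurations x_1, ..., x_n in R^n;
# each sample only gives a lower bound on the true maximum (an SDP value).
best_lhs = -np.inf
for _ in range(2000):
    X = rng.standard_normal((n, n))
    X /= np.linalg.norm(X, axis=1, keepdims=True)  # rows are unit vectors
    best_lhs = max(best_lhs, float(np.sum(A * (X @ X.T))))

print(best_lhs <= (np.pi / 2) * rhs)  # expected: True
```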

Khot and Naor studied a natural variant of this inequality, in which the $n$ numbers $\epsilon_i$ are replaced by $n$ vectors $u_1,\ldots,u_n$ chosen from a set of $k < n$ unit vectors $v_1,\ldots,v_k$. The inequality becomes:

$$\max_{x_1,\ldots,x_n} \sum_{ij} a_{ij}x_i^Tx_j \le C \max_{u_1,\ldots,u_n \in \{v_1,\ldots,v_k\}} \sum_{ij}a_{ij} u_i^Tu_j.$$
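
(For orientation: the case $k = 2$ with $v_2 = -v_1$ recovers the classical statement, since writing $u_i = \epsilon_i v_1$ with $\epsilon_i \in \{-1,1\}$ gives $u_i^T u_j = \epsilon_i\epsilon_j$, so that

$$\max_{u_1,\ldots,u_n \in \{v_1,-v_1\}} \sum_{ij} a_{ij}\, u_i^T u_j = \max_{\epsilon_1,\ldots,\epsilon_n \in \{-1,1\}} \sum_{ij} a_{ij}\, \epsilon_i\epsilon_j,$$

and the best possible constant is then $K = \pi/2$.)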

They proved that this inequality actually holds, but the proof looks very complicated, as does the constant $C$. My question is thus in two parts:

(a) Is there any chance that there is a simpler proof of this "natural" generalization of Grothendieck's inequality?

(b) Does it seem like a feasible project to try to estimate this constant?

I am not an expert in these inequalities, but this generalization looks so interesting that I had to seek out expert opinion.

Best Answer

Perhaps it would be beneficial to summarize the proof in words:

We wish to prove a certain Grothendieck-type inequality. We begin with a Lemma.

In Lemma 3.2 of this paper, we find an expression for a certain probability. In some sense, this calculation is really just (Hermite) Fourier analysis. We have a certain function, and we expand it into a (Hermite) Fourier basis. This proves our lemma.
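
(To see the flavor of such a computation in the simplest case, this is not the actual statement of Lemma 3.2 but its classical $k=2$ analogue: for a standard Gaussian vector $g$ and unit vectors $x, y$, Grothendieck's identity gives

$$\mathbb{E}\big[\operatorname{sign}(\langle g, x\rangle)\,\operatorname{sign}(\langle g, y\rangle)\big] = \frac{2}{\pi}\arcsin\langle x, y\rangle = \frac{2}{\pi}\sum_{m\ge 0}\frac{(2m)!}{4^m (m!)^2 (2m+1)}\,\langle x, y\rangle^{2m+1},$$

an expansion into odd powers of $\langle x, y\rangle$ with nonnegative coefficients; the Hermite expansion of Lemma 3.2 plays the analogous role.)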

We now use the probabilistic method to calculate the expected value of a certain random inner product. Using the positive semidefiniteness of the matrix $A$, and applying Lemma 3.2, we can throw away all but the "first level" (Hermite) Fourier coefficients. This gives our desired inequality, which ends up being sharp.
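
(Again only in the simplest case, the mechanism looks like this: set $\epsilon_i = \operatorname{sign}(\langle g, x_i\rangle)$; nonnegativity of the expansion coefficients above, together with positive semidefiniteness of $A$ and of the entrywise powers of the Gram matrix $[x_i^T x_j]$, gives

$$\sum_{ij} a_{ij}\, x_i^T x_j \le \sum_{ij} a_{ij}\arcsin(x_i^T x_j) = \frac{\pi}{2}\,\mathbb{E}\Big[\sum_{ij} a_{ij}\,\epsilon_i\epsilon_j\Big] \le \frac{\pi}{2}\max_{\epsilon_1,\ldots,\epsilon_n\in\{-1,1\}}\sum_{ij} a_{ij}\,\epsilon_i\epsilon_j,$$

i.e. discarding everything beyond the first-order term costs at most a factor of $\pi/2$.)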

By way of analogy, think about Hurwitz's proof of the isoperimetric inequality using Fourier series. We have an arbitrary curve, we expand it into its Fourier components, and we see that the inequality becomes sharp when all but the "first level" Fourier coefficients are set to zero. In a way, the same proof idea is used here.
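
(For reference, a compressed form of that argument: scale the curve to have length $2\pi$, parametrize by arc length, and expand $x(s)=\sum_{n\ge1}(a_n\cos ns+b_n\sin ns)$, $y(s)=\sum_{n\ge1}(c_n\cos ns+d_n\sin ns)$, the constant terms only translating the curve. Parseval turns the constraint $\int_0^{2\pi}(x'^2+y'^2)\,ds = 2\pi$ and the area formula $A = \tfrac12\int_0^{2\pi}(xy'-x'y)\,ds$ into

$$\pi - A = \frac{\pi}{2}\sum_{n\ge1}\Big[(na_n - d_n)^2 + (nb_n + c_n)^2 + (n^2-1)(c_n^2 + d_n^2)\Big] \ge 0,$$

and equality forces all coefficients with $n\ge 2$ to vanish, i.e. the curve is a circle.)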

As for computing the constant, this is indeed a challenge even in the simplest nontrivial cases. I don't think estimating it is very hard (for, say, a "rough" estimate). To see what I mean, I would recommend reading the first paper in the series, "Approximate kernel clustering" (by the same authors).
