[Math] SDP relaxation vs LP relaxation

graph theory, integer programming, linear programming, nonlinear optimization, semidefinite programming

I have a question I hope you might be able to answer.

Let's say we have an integer program for the stable set problem (or clique, the distinction is not essential here).

\begin{equation}
\begin{aligned}
& \text{maximize} & & \sum_i x_i \\
& \text{subject to} & & x_i + x_j \leq 1 \text{ for all } \{i,j\} \in E \\
& & & x_i \in \{0,1\} \text{ for all } i \in V
\end{aligned}
\end{equation}

One can obtain upper bounds by relaxation, e.g. the linear programming (LP) relaxation (relaxing the integer variables to lie in $[0,1]$) or the Lovász SDP relaxation:

\begin{equation}
\begin{aligned}
& \text{maximize} & & \sum_i \sum_j X_{ij} \\
& \text{subject to} & & \operatorname{tr}(X) = 1 \\
& & & X_{ij} = 0 \text{ if } \{i,j\} \in E(G) \\
& & & X \succeq 0
\end{aligned}
\end{equation}

Is there a nice direct proof that the value of the SDP relaxation above is always at most the value of the LP relaxation of the integer program?

Any help and thoughts will be greatly appreciated.

Edit: formal statement

So let $\alpha(G)$ denote the stable set number, $z^*_{LP}$ the value of the LP relaxation, and $z^*_{SDP}$ the value of the SDP. The formal statement of "better" is that $\alpha(G) \leq z^*_{SDP} \leq z^*_{LP}$ for every graph $G$.
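(Not a proof, but to make the comparison concrete: here is a small numerical sanity check on the 5-cycle $C_5$ using cvxpy. The choice of graph, the variable names, and the solver defaults are mine, not part of the question. For $C_5$ one expects $\alpha = 2$, $z^*_{SDP} = \sqrt{5} \approx 2.236$, and $z^*_{LP} = 2.5$.)

```python
# Sanity check of alpha(G) <= z*_SDP <= z*_LP on the 5-cycle C5.
# Assumes cvxpy is installed; solver choice and tolerances are cvxpy defaults.
import cvxpy as cp

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]  # edge set of C5

# LP relaxation: maximize sum x_i s.t. x_i + x_j <= 1 on edges, 0 <= x <= 1.
x = cp.Variable(n)
lp = cp.Problem(cp.Maximize(cp.sum(x)),
                [x >= 0, x <= 1] + [x[i] + x[j] <= 1 for i, j in edges])
lp.solve()

# Lovasz SDP: maximize sum_ij X_ij s.t. tr(X) = 1, X_ij = 0 on edges, X PSD.
X = cp.Variable((n, n), PSD=True)
sdp = cp.Problem(cp.Maximize(cp.sum(X)),
                 [cp.trace(X) == 1] + [X[i, j] == 0 for i, j in edges])
sdp.solve()

print(f"z*_LP  = {lp.value:.4f}")   # expected 2.5 (all x_i = 1/2)
print(f"z*_SDP = {sdp.value:.4f}")  # expected sqrt(5) ~ 2.2361
```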

Best Answer

The quickest proof I know uses a different (but equivalent) formulation of the above SDP. If I had to prove it using the above version, I would probably first convert it to the other one. Here is a rough sketch.

Let $X$ be a solution to the SDP above, and let $D$ be the diagonal matrix with the diagonal entries of $X$ on its diagonal (assume for simplicity that these are strictly positive). Consider $Y = D^{-1/2}XD^{-1/2}$ and note that this is PSD with $1$'s on the diagonal and zeros wherever $X$ was zero. Let $d$ be the vector of the square roots of the diagonal entries of $X$. Then $d^TYd = \sum_{i,j}X_{ij}$, and $d$ is a unit vector since $\operatorname{tr}(X) = 1$. So the maximum eigenvalue of $Y$ is at least the objective value of $X$.

Since $Y$ is PSD, it is the Gram matrix of some unit vectors $v_1, \ldots, v_n$, and vectors corresponding to adjacent vertices are orthogonal. The maximum eigenvalue of $Y$ is the same as that of the matrix $Z = \sum_{i} v_iv_i^T$. Let $c$ be a unit eigenvector of $Z$ for its maximum eigenvalue $\mu$. If we assign the value $(v_i^Tc)^2$ to vertex $i$, then this is a feasible solution for the linear relaxation of the above integer program: on any clique (in particular on any edge) the $v_i$ form a subset of some orthonormal basis and $c$ is a unit vector, so the assigned values sum to at most $1$. Moreover, the value of this solution is $\sum_{i} (v_i^Tc)^2 = c^TZc = \mu \ge \sum_{i,j} X_{ij}$. Thus the linear relaxation has value at least that of the SDP.
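If it helps, here is the construction carried out numerically on $C_5$ (a sketch only: it assumes the diagonal of the optimal $X$ is strictly positive, and all names below are mine):

```python
# Numerical walk-through of the sketch above on C5, starting from an
# (approximately) optimal SDP solution X.
import cvxpy as cp
import numpy as np

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]
X = cp.Variable((n, n), PSD=True)
cp.Problem(cp.Maximize(cp.sum(X)),
           [cp.trace(X) == 1] + [X[i, j] == 0 for i, j in edges]).solve()
Xv = np.asarray(X.value)

d = np.sqrt(np.diag(Xv))   # unit vector, since tr(X) = 1
Y = Xv / np.outer(d, d)    # Y = D^{-1/2} X D^{-1/2}: PSD, unit diagonal

# Gram vectors v_1..v_n of Y: Y = V^T V with V = Lambda^{1/2} Q^T.
w, Q = np.linalg.eigh(Y)
V = np.diag(np.sqrt(np.clip(w, 0, None))) @ Q.T  # column i is v_i

Z = V @ V.T                # same nonzero spectrum as Y
mu, C = np.linalg.eigh(Z)
c = C[:, -1]               # unit eigenvector for the maximum eigenvalue

vals = (V.T @ c) ** 2      # assign (v_i^T c)^2 to vertex i
print("edge-feasible:", all(vals[i] + vals[j] <= 1 + 1e-6 for i, j in edges))
print("LP value of assignment:", vals.sum(), ">= SDP objective:", Xv.sum())
```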

As you can see, this is not so nice. At first I thought you could maybe prove that if you take any solution $X$ to the SDP and assign to vertex $i$ the sum of the $i^\text{th}$ row of $X$, this would be a feasible solution to the linear relaxation. But this doesn't work in general (some row sums can exceed $1$). Maybe it works for optimal solutions to the SDP?
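Poking at that guess numerically on $C_5$ (which of course proves nothing either way):

```python
# Row-sum assignment at an (approximately) optimal SDP solution for C5.
import cvxpy as cp
import numpy as np

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]
X = cp.Variable((n, n), PSD=True)
cp.Problem(cp.Maximize(cp.sum(X)),
           [cp.trace(X) == 1] + [X[i, j] == 0 for i, j in edges]).solve()

r = np.asarray(X.value).sum(axis=1)   # assign row sum r_i to vertex i
print("row sums:", np.round(r, 4))
print("edge-feasible:", all(r[i] + r[j] <= 1 + 1e-6 for i, j in edges))
print("value of this assignment:", r.sum())  # equals the SDP objective by construction
```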

Another way to prove it is to consider a Gram factorisation $u_1, \ldots, u_n$ of a feasible solution $X$ to the SDP, so that $X_{ij} = u_i^Tu_j$. Let $v_i$ be the unit vector parallel to $u_i$, and let $c$ be the unit vector parallel to $\sum_i u_i$. Then assigning $(v_i^Tc)^2$ to vertex $i$ is a feasible solution to the linear relaxation, and its value is at least the objective value of $X$. This last part requires a proof, which is part of the proof of Theorem 5 in Lovász's original paper (http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1055985). His paper contains many other useful equivalent formulations of the above SDP, and there is also a paper by Knuth about these things (http://www.combinatorics.org/ojs/index.php/eljc/article/view/v1i1a1).
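For completeness, that last step follows from one application of Cauchy–Schwarz (my own write-up of the relevant fragment of Lovász's argument, assigning $0$ to any vertex with $u_i = 0$): set $s = \sum_i u_i$, so $c = s/\lVert s\rVert$, $\sum_{i,j} X_{ij} = \lVert s\rVert^2$, and $\sum_i \lVert u_i\rVert^2 = \operatorname{tr}(X) = 1$. Then

\begin{equation}
\lVert s\rVert = c^Ts = \sum_i \lVert u_i\rVert\,(v_i^Tc) \le \sqrt{\sum_i \lVert u_i\rVert^2}\,\sqrt{\sum_i (v_i^Tc)^2} = \sqrt{\sum_i (v_i^Tc)^2},
\end{equation}

and squaring gives $\sum_i (v_i^Tc)^2 \ge \lVert s\rVert^2 = \sum_{i,j} X_{ij}$.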

It would be nice to have a more direct proof than those above.
