[Math] A determinant inequality

Tags: determinants, inequalities

Notation: Suppose $\mathbf{A}$ and $\mathbf{B}$ are positive definite matrices in $\mathbb{R}^{n\times n}$ such that $\mathbf{A} \succeq \mathbf{B}$ (Loewner order). Let $\mathcal{S}(n,k)$ be the set of all $k$-subsets of $[n] \triangleq \{1,2,\dots,n\}$. For any $\mathcal{Q} \subset [n]$, $\mathbf{M}_\mathcal{Q}$ denotes the matrix obtained by deleting the rows and columns of matrix $\mathbf{M}$ whose indices are in $\mathcal{Q}$. Similarly, $\mathbf{x}_\mathcal{Q}$ denotes the vector obtained by deleting the entries of vector $\mathbf{x}$ whose indices are in $\mathcal{Q}$.

Conjecture: For any such $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{x} \in \mathbb{R}^n$ and for all $k \in [n-1]$:

$$
\frac{\sum_{\mathcal{Q} \in \mathcal{S}(n,k)} \det(\mathbf{A}_\mathcal{Q} + \mathbf{x}_\mathcal{Q}\mathbf{x}_\mathcal{Q}^\top)}{\sum_{\mathcal{Q} \in \mathcal{S}(n,k)} \det(\mathbf{A}_\mathcal{Q})}
\leq
\frac{\sum_{\mathcal{Q} \in \mathcal{S}(n,k)} \det(\mathbf{B}_\mathcal{Q} + \mathbf{x}_\mathcal{Q}\mathbf{x}_\mathcal{Q}^\top)}{\sum_{\mathcal{Q} \in \mathcal{S}(n,k)} \det(\mathbf{B}_\mathcal{Q})}
\tag{1}
$$
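For concreteness, both sides of (1) can be evaluated directly on small random instances; a minimal sketch (assuming NumPy; the sampling scheme and the helper name `minor_sum` are my own, not part of the question):

```python
# Numerical spot-check of inequality (1) on a small random instance.
# The sampling scheme and helper names here are illustrative, not from the OP.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def minor_sum(M, k, x=None):
    """Sum of det(M_Q) -- or det(M_Q + x_Q x_Q^T) if x is given --
    over all k-subsets Q of deleted row/column indices."""
    n = M.shape[0]
    total = 0.0
    for Q in itertools.combinations(range(n), k):
        keep = [i for i in range(n) if i not in Q]
        MQ = M[np.ix_(keep, keep)]
        if x is not None:
            MQ = MQ + np.outer(x[keep], x[keep])
        total += np.linalg.det(MQ)
    return total

n, k = 5, 2
# Sample B > 0 and A = B + (PSD term), so A >= B in the Loewner order.
G = rng.standard_normal((n, n)); B = G @ G.T + np.eye(n)
H = rng.standard_normal((n, n)); A = B + H @ H.T
x = rng.standard_normal(n)

lhs = minor_sum(A, k, x) / minor_sum(A, k)
rhs = minor_sum(B, k, x) / minor_sum(B, k)
assert lhs <= rhs * (1 + 1e-9)  # inequality (1) on this instance
```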

Progress so far:

  1. By the matrix determinant lemma, $\det(\mathbf{A}_\mathcal{Q}+\mathbf{x}_\mathcal{Q}\mathbf{x}_\mathcal{Q}^\top) = \det(\mathbf{A}_\mathcal{Q})\,(1+\mathbf{x}_\mathcal{Q}^\top \mathbf{A}_\mathcal{Q}^{-1}\mathbf{x}_\mathcal{Q})$, so each side of (1) equals one plus a convex combination of quadratic forms, and (1) can be rewritten as (2):
    $$
    \tag{2}
    \sum_{\mathcal{Q} \in \mathcal{S}(n,k)}\frac{\det(\mathbf{A}_\mathcal{Q})}{\sum_{\mathcal{Q}' \in \mathcal{S}(n,k)} \det(\mathbf{A}_{\mathcal{Q}'})} \,
    \mathbf{x}_\mathcal{Q}^\top \mathbf{A}_\mathcal{Q}^{-1}\mathbf{x}_\mathcal{Q}
    \leq
    \sum_{\mathcal{Q} \in \mathcal{S}(n,k)}\frac{\det(\mathbf{B}_\mathcal{Q})}{\sum_{\mathcal{Q}' \in \mathcal{S}(n,k)} \det(\mathbf{B}_{\mathcal{Q}'})} \,
    \mathbf{x}_\mathcal{Q}^\top \mathbf{B}_\mathcal{Q}^{-1}\mathbf{x}_\mathcal{Q}
    $$

  2. Lemmas: It is easy to show that for any $\mathcal{Q} \subset [n]$:

    • (2.1) $\quad \mathbf{A} \succeq \mathbf{B} \Rightarrow \mathbf{A}_\mathcal{Q} \succeq \mathbf{B}_\mathcal{Q}$
    • (2.2) $\quad \det(\mathbf{A}_\mathcal{Q}) \geq \det(\mathbf{B}_\mathcal{Q})$
    • (2.3) $\quad \mathbf{A}_\mathcal{Q} \succeq \mathbf{B}_\mathcal{Q} \Rightarrow \mathbf{A}_\mathcal{Q}^{-1} \preceq \mathbf{B}_\mathcal{Q}^{-1} \Rightarrow \mathbf{x}_\mathcal{Q}^\top \mathbf{A}_\mathcal{Q}^{-1}\mathbf{x}_\mathcal{Q} \leq \mathbf{x}_\mathcal{Q}^\top \mathbf{B}_\mathcal{Q}^{-1}\mathbf{x}_\mathcal{Q}$
    • (2.4) $\quad \det(\mathbf{A}_\mathcal{Q})\,\mathbf{x}_\mathcal{Q}^\top \mathbf{A}_\mathcal{Q}^{-1}\mathbf{x}_\mathcal{Q} \geq \det(\mathbf{B}_\mathcal{Q})\,\mathbf{x}_\mathcal{Q}^\top \mathbf{B}_\mathcal{Q}^{-1}\mathbf{x}_\mathcal{Q} $
  3. By induction (writing $\mathbf{A} - \mathbf{B} \succeq \mathbf{0}$ as a sum of rank-one terms), it suffices to show that (1) or (2) holds for the special case of $\mathbf{A} = \mathbf{B} + \mathbf{p}\mathbf{p}^\top$ for some $\mathbf{p} \in \mathbb{R}^n$.
  4. A very special case is posted here ($\mathbf{B} = \mathbf{I}$, $\mathbf{A} = \mathbf{I} + \mathbf{p}\mathbf{p}^\top$).
  5. I haven't encountered any counterexample after running "many" simulations (I know this is not a strong argument).
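For what it's worth, the identity behind step 1 and lemma (2.4) are easy to spot-check numerically; a minimal sketch (assuming NumPy; the sampling scheme is my own illustration):

```python
# Spot-checks of the matrix determinant lemma (step 1) and lemma (2.4).
# Sampling is illustrative only; names are mine.
import numpy as np

rng = np.random.default_rng(1)
n = 4
G = rng.standard_normal((n, n)); B = G @ G.T + np.eye(n)
H = rng.standard_normal((n, n)); A = B + H @ H.T   # A >= B > 0 (Loewner)
x = rng.standard_normal(n)

# Step 1: det(A + x x^T) = det(A) (1 + x^T A^{-1} x).
lemma = np.linalg.det(A) * (1.0 + x @ np.linalg.solve(A, x))
assert np.isclose(np.linalg.det(A + np.outer(x, x)), lemma)

# Lemma (2.4): det(A) x^T A^{-1} x >= det(B) x^T B^{-1} x,
# i.e. x^T adj(A) x >= x^T adj(B) x.
qa = np.linalg.det(A) * (x @ np.linalg.solve(A, x))
qb = np.linalg.det(B) * (x @ np.linalg.solve(B, x))
assert qa >= qb * (1 - 1e-9)
```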

Update: Let me explain the motivation as requested.

Motivation:

Suppose $\{\mathbf{a}_i\}_{i=1}^{m}$ are some vectors in $\mathbb{R}^n$ ($m \geq n$) and $\mathbf{A}_0 \succ \mathbf{0}$ is a given matrix in $\mathbb{R}^{n \times n}$. Now consider,
$$
\begin{align}
f_k : 2^{[m]} &\to \mathbb{R}, \\
\mathcal{S} & \mapsto c_k(\mathbf{A}_0 + \sum_{i \in \mathcal{S}}\mathbf{a}_i\mathbf{a}_i^\top)
\end{align}
$$
where $c_k(\mathbf{A})$ is the coefficient of $x^k$ in the characteristic polynomial of $\mathbf{A}$, i.e., $\det(x\mathbf{I} - \mathbf{A})$.

Conjecture: $f_k$ is monotone log-submodular (multiplicative submodular) for all $k \in \{0,1,\dots,n\}$.

I have proved the monotonicity (more precisely, of $|f_k|$, since $c_k$ alternates in sign for positive definite matrices).

Special Cases: This holds for $k=n-1$ (trace), $k=0$ (determinant is log-submodular) and $k=n$ (constant, $f_n(\mathcal{S}) = 1$).

Now (1) emerges from the proof of log-submodularity (multiplicative submodularity) of $f_k$ after expressing $c_k$ as the sum of determinants of principal minors.
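For reference, the bridge between $c_k$ and principal minors is $c_k(\mathbf{A}) = (-1)^{n-k}\sum_{\mathcal{Q}\in\mathcal{S}(n,k)}\det(\mathbf{A}_\mathcal{Q})$. A minimal numerical sketch of this identity, plus one log-submodularity instance (assuming NumPy; helper names and the sampling are mine):

```python
# c_k(A) equals (-1)^{n-k} times the sum of (n-k)x(n-k) principal minors,
# i.e. (-1)^{n-k} * sum_{Q in S(n,k)} det(A_Q); plus one log-submodularity check.
# Helper names are mine; np.poly returns the coefficients of det(xI - A).
import itertools
import numpy as np

rng = np.random.default_rng(2)

def c_k(M, k):
    """Coefficient of x^k in det(xI - M) (np.poly lists highest degree first)."""
    return np.poly(M)[M.shape[0] - k]

def minor_sum(M, k):
    """Sum of det(M_Q) over all k-subsets Q of deleted row/column indices."""
    n = M.shape[0]
    total = 0.0
    for Q in itertools.combinations(range(n), k):
        keep = [i for i in range(n) if i not in Q]
        total += np.linalg.det(M[np.ix_(keep, keep)])
    return total

n, k = 4, 2
G = rng.standard_normal((n, n)); A0 = G @ G.T + np.eye(n)
assert np.isclose(c_k(A0, k), (-1) ** (n - k) * minor_sum(A0, k))

# Log-submodularity of |f_k|: |f_k(S)| |f_k(S+{i,j})| <= |f_k(S+{i})| |f_k(S+{j})|.
a = rng.standard_normal((3, n))  # three rank-one update directions
F = lambda S: abs(c_k(A0 + sum((np.outer(a[i], a[i]) for i in S),
                               np.zeros((n, n))), k))
assert F([0]) * F([0, 1, 2]) <= F([0, 1]) * F([0, 2]) * (1 + 1e-9)
```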

Applications:

  1. I came across this when working on graph Laplacian matrices. $f_k$ for Laplacian matrices (and $\mathbf{a}_i = \mathbf{e}_s - \mathbf{e}_r$ where $\{\mathbf{e}_s\}_{s=1}^n$ is the standard basis) is related to the weighted number of spanning trees. I recently showed that the weighted number of spanning trees is a monotone log-submodular function of the edge set (see a draft here). Other coefficients can also be nicely related to the weighted number of spanning trees, as shown by Alexander Kelmans (as a generalization of Kirchhoff's matrix tree theorem). For Laplacian matrices, (2) has a beautiful interpretation in terms of the expected value of the effective resistance ("distance") between two vertices after performing some random operations on the graph.

  2. (1) and (2) also arise in $k$-DPPs (determinantal point process).

Best Answer

As suspected, the desired inequality actually holds for all hyperbolic polynomials; the inequality in the OP follows as a corollary (Corollary 1) of Theorem 2 (which seems to be new).

We will need the following remarkable theorem to obtain our result.

Theorem 1 (Bauschke, Güler, Lewis, Sendov, 2001) Let $p$ be a homogeneous hyperbolic polynomial; let $v$ be a vector in the strict interior of the hyperbolicity cone $\Lambda_{++}$ of $p$. Then, \begin{equation*} g(x) := \frac{p(x)}{Dp(x)[v]} \end{equation*} is concave on $\Lambda_{++}$.

This theorem helps prove the more general inequality (also conjectured by Denis Serre above).

Theorem 2. Let $p$ be a homogeneous hyperbolic polynomial with hyperbolicity cone $\Lambda_{++}$. Let $a, b, c \in \Lambda_{++}$. Then, $p$ satisfies the (conic log-submodularity) inequality: \begin{equation*} \tag{0} p(a)p(a+b+c) \le p(a+b)p(a+c). \end{equation*}

Proof. Let $c \in \Lambda_{++}$ be arbitrary and consider the function $f(a) := \frac{p(a+c)}{p(a)}$. Inequality (0) amounts to showing that $f$ is monotonically decreasing on the cone $\Lambda_{++}$, i.e., that $f(a+b) \le f(a)$ for all $b \in \Lambda_{++}$. Equivalently, we consider $\log f$ and show that its directional derivative is nonpositive along any direction $v \in \Lambda_{++}$. That is, we show that \begin{equation} \tag{1} \frac{Dp(a+c)[v]}{p(a+c)} - \frac{Dp(a)[v]}{p(a)} \le 0\quad\Longleftrightarrow\quad \frac{p(a+c)}{Dp(a+c)[v]} \ge \frac{p(a)}{Dp(a)[v]}. \end{equation} But from Theorem 1, we know that $g(x) = \frac{p(x)}{Dp(x)[v]}$ is concave on $\Lambda_{++}$. Moreover, since $p$ is homogeneous, $g$ is positively homogeneous of degree one, so its concavity yields superadditivity: \begin{equation*} \frac{p(a+c)}{Dp(a+c)[v]} \ge \frac{p(a)}{Dp(a)[v]} + \frac{p(c)}{Dp(c)[v]}, \end{equation*} which is stronger than the desired monotonicity inequality (1), since all terms are nonnegative.
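As a concrete instance of the superadditivity step: for $p = \det$ on positive definite matrices and $v = \mathbf{I}$, one has $D\det(X)[\mathbf{I}] = \det(X)\,\mathrm{tr}(X^{-1})$, so $g(X) = 1/\mathrm{tr}(X^{-1})$. A minimal numerical spot-check (assuming NumPy; the sampling is my own illustration):

```python
# Spot-check of the superadditivity step for p = det, v = I:
# here Ddet(X)[I] = det(X) tr(X^{-1}), so g(X) = 1 / tr(X^{-1}),
# and concavity + degree-1 homogeneity give g(A + C) >= g(A) + g(C).
import numpy as np

rng = np.random.default_rng(3)
n = 5
g = lambda X: 1.0 / np.trace(np.linalg.inv(X))

Ga = rng.standard_normal((n, n)); A = Ga @ Ga.T + np.eye(n)
Gc = rng.standard_normal((n, n)); C = Gc @ Gc.T + np.eye(n)
assert g(A + C) >= g(A) + g(C) - 1e-12
```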

Corollary 1. Let $E_k(A) = (e_k \circ \lambda)(A)$ denote the $k$-th elementary symmetric polynomial of the eigenvalues of a positive definite matrix $A$. Since $E_k$ is hyperbolic with a hyperbolicity cone containing the positive definite matrices, Theorem 2 yields, for any positive definite $A, B, C$, \begin{equation*} E_k(A)E_k(A+B+C) \le E_k(A+B)E_k(A+C), \end{equation*} and by continuity the inequality extends to positive semidefinite $B, C$. This log-submodularity immediately implies the log-submodularity of $f_k(S) := E_k(A+\sum\nolimits_{i\in S}v_iv_i^T)$.
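The corollary's inequality is easy to spot-check numerically; a minimal sketch with $E_k$ computed from eigenvalues (assuming NumPy; the sampling is my own illustration):

```python
# Spot-check of Corollary 1: E_k(A) E_k(A+B+C) <= E_k(A+B) E_k(A+C)
# for random positive definite A, B, C; E_k is computed from eigenvalues.
import itertools
import numpy as np

rng = np.random.default_rng(4)

def E_k(M, k):
    """k-th elementary symmetric polynomial of the eigenvalues of M."""
    lam = np.linalg.eigvalsh(M)
    return sum(np.prod(c) for c in itertools.combinations(lam, k))

n, k = 5, 3
mats = [rng.standard_normal((n, n)) for _ in range(3)]
A, B, C = (G @ G.T + np.eye(n) for G in mats)
assert E_k(A, k) * E_k(A + B + C, k) <= E_k(A + B, k) * E_k(A + C, k) * (1 + 1e-9)
```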