[Math] looking for proof or partial proof of determinant conjecture

co.combinatorics, matrices

Math people:

I am looking for a proof of a conjecture I made. I need to give two definitions. For distinct real numbers $x_1, x_2, \ldots, x_k$, define $\sigma(x_1, x_2, \ldots, x_k) =1$ if $(x_1, x_2, \ldots, x_k)$ is an even permutation of an increasing sequence, and $\sigma(x_1, x_2, \ldots, x_k) =-1$ if $(x_1, x_2, \ldots, x_k)$ is an odd permutation of an increasing sequence. For example, $\sigma(2, 1, 10, 8) = 1$ because $(2,1,10,8)$ is an even permutation of $(1,2,8,10)$, and $\sigma(2, 1, 8, 10) = -1$ because $(2,1,8,10)$ is an odd permutation of $(1,2,8,10)$. For real $B$, $n\geq 1$ and distinct real numbers $\mu_1, \mu_2, \ldots, \mu_n, \gamma_1, \gamma_2, \ldots, \gamma_n$, let $M(B;\mu_1,\mu_2,\ldots,\mu_n;\gamma_1,\gamma_2,\ldots,\gamma_n)$ be the $n$-by-$n$ matrix defined by

$$ M(B;\mu_1,\mu_2,\ldots,\mu_n;\gamma_1,\gamma_2,\ldots,\gamma_n)_{i,j}=\frac{\exp(-B\gamma_j)}{\mu_i+\gamma_j}+\frac{\exp(B\gamma_j)}{\mu_i-\gamma_j}. $$
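To make the definitions concrete, here is a minimal sketch in Python/NumPy (the helper names are my own) that computes $\sigma$ as the parity of the number of inversions and builds $M$ entrywise from the displayed formula:

```python
import numpy as np

def sigma(xs):
    """Sign of the permutation taking an increasing sequence to xs,
    i.e. (-1)^(number of inversions) for distinct reals xs."""
    inversions = sum(1 for i in range(len(xs)) for j in range(i + 1, len(xs))
                     if xs[i] > xs[j])
    return -1 if inversions % 2 else 1

def build_M(B, mus, gams):
    """The n-by-n matrix M(B; mu_1..mu_n; gamma_1..gamma_n) defined above."""
    mus, gams = np.asarray(mus, float), np.asarray(gams, float)
    mu_col, gam_row = mus[:, None], gams[None, :]   # shapes (n,1) and (1,n)
    return (np.exp(-B * gam_row) / (mu_col + gam_row)
            + np.exp(B * gam_row) / (mu_col - gam_row))

print(sigma([2, 1, 10, 8]), sigma([2, 1, 8, 10]))   # 1 -1, matching the examples above
```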

My conjecture is the following: if $n \geq 1$, $B \geq 0$, and $\mu_1, \mu_2, \ldots, \mu_n, \gamma_1, \gamma_2, \ldots, \gamma_n$ are distinct positive numbers with
$0<\mu_1 < \mu_2 < \cdots < \mu_n$ and $0<\gamma_1 < \gamma_2 < \cdots < \gamma_n$, then

$$\operatorname{sgn}(\operatorname{det}(M(B;\mu_1,\mu_2,\ldots,\mu_n;\gamma_1,\gamma_2,\ldots,\gamma_n))) = (-1)^{\frac{n(n+1)}{2}}
\sigma(\mu_1, \mu_2, \ldots, \mu_n, \gamma_1, \gamma_2, \ldots, \gamma_n). $$

Of course $\operatorname{sgn}(x)$ is the sign of $x$, which is $1$, $-1$, or $0$. I have proven this is true for $n=1$ and $n=2$. For $n$ between $3$ and $20$, I have run thousands of experiments in Matlab using randomly generated $\mu$'s and $\gamma$'s. In a set of one thousand experiments, the conjectured equation typically holds every single time; occasionally it fails once or twice, and in those cases the determinant (with the wrong sign) is extremely small, so roundoff error is probably the culprit.
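A sketch of this kind of random test, in Python/NumPy rather than Matlab (it reuses `sigma` and `build_M` from the sketch above; the parameter ranges are arbitrary choices):

```python
import numpy as np

# Assumes sigma() and build_M() from the earlier sketch are in scope.
rng = np.random.default_rng(0)

def conjectured_sign(mus, gams):
    n = len(mus)
    return (-1) ** (n * (n + 1) // 2) * sigma(list(mus) + list(gams))

failures = 0
for _ in range(1000):
    n = int(rng.integers(3, 21))                  # n between 3 and 20
    B = rng.uniform(0.0, 2.0)                     # B >= 0
    vals = rng.uniform(0.1, 5.0, size=2 * n)      # distinct with probability 1
    mus, gams = np.sort(vals[:n]), np.sort(vals[n:])
    sign, _ = np.linalg.slogdet(build_M(B, mus, gams))   # robust sign of the determinant
    if sign != conjectured_sign(mus, gams):
        failures += 1                             # any failures are likely roundoff artifacts
print(failures, "failures out of 1000")
```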

UPDATE: let $d(B)$ be the determinant of the matrix, where the other parameters should be clear from context. $d(B)$ is an analytic function of $B$. It suffices to show that $\frac{\partial^m d}{\partial B^m}$ has the desired sign at $B=0$ for all $m \geq 0$. Unfortunately, $\det\left(\frac{\partial^m M}{\partial B^m}\right)$ is not the same thing as $\frac{\partial^m d}{\partial B^m}$ (if it were, properties of Cauchy matrices would yield the desired conclusion). Since the conjecture is true for $n=1$, the displayed formula for $M_{i,j}$ above, and all its derivatives with respect to $B$, have the same sign as $\mu_i - \gamma_j$ at $B=0$, and $M_{i,j}$ has that sign for all positive $B$. I proved the conjecture for $n=2$ by computing the determinant of $M$ and its derivatives with respect to $B$ at $B=0$, and looking at the six possible orderings of $\mu_1, \mu_2, \gamma_1$, and $\gamma_2$ consistent with $\mu_1 < \mu_2$ and $\gamma_1 < \gamma_2$. I had some help from Maple multiplying out, simplifying and factoring algebraic expressions. I am trying to prove the general case by induction on $n$, expanding the determinant along the last row or column, but the determinants of the $(n-1)$-by-$(n-1)$ minors don't seem to necessarily have the "right" signs.
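As a quick way to experiment with this reduction, here is a small SymPy sketch (Python rather than Maple; the particular rational values are arbitrary) that computes $d(B) = \det M$ symbolically for small $n$ and reports the sign of $\frac{\partial^m d}{\partial B^m}$ at $B=0$ for the first few $m$:

```python
import sympy as sp

def derivative_signs_at_zero(mus, gams, m_max=4):
    """Signs of d^m d / dB^m at B = 0, where d(B) = det M(B; mus; gams)."""
    B = sp.symbols('B')
    n = len(mus)
    M = sp.Matrix(n, n, lambda i, j:
                  sp.exp(-B * gams[j]) / (mus[i] + gams[j])
                  + sp.exp(B * gams[j]) / (mus[i] - gams[j]))
    expr = M.det()
    signs = []
    for m in range(m_max + 1):
        signs.append(sp.sign(sp.simplify(expr.subs(B, 0))))   # exact sign at B = 0
        expr = sp.diff(expr, B)                               # next derivative in B
    return signs

# Example: n = 2 with mu_1 < mu_2 and gamma_1 < gamma_2 (arbitrary rationals)
print(derivative_signs_at_zero([sp.Rational(2), sp.Rational(7)],
                               [sp.Rational(1), sp.Rational(3)]))
```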

Thanks to some comments provided below, unless I am confused, the conjecture can be proven for $B=0$ and large positive $B$, for any $n$, using properties of Cauchy matrices.
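Concretely, the $B=0$ case presumably goes as follows (my reconstruction): at $B=0$ each entry collapses to a scaled Cauchy kernel,
$$M(0)_{i,j}=\frac{1}{\mu_i+\gamma_j}+\frac{1}{\mu_i-\gamma_j}=\frac{2\mu_i}{\mu_i^2-\gamma_j^2},$$
so by the Cauchy determinant formula
$$\det M(0)=2^n\Bigl(\prod_{i=1}^n\mu_i\Bigr)\det\!\left(\frac{1}{\mu_i^2-\gamma_j^2}\right)=2^n\Bigl(\prod_{i=1}^n\mu_i\Bigr)\frac{\prod_{i<j}(\mu_j^2-\mu_i^2)(\gamma_i^2-\gamma_j^2)}{\prod_{i,j}(\mu_i^2-\gamma_j^2)}.$$
The sign of the right-hand side is $(-1)^{n(n-1)/2}\cdot(-1)^{\#\{(i,j)\,:\,\mu_i<\gamma_j\}}$, and since $\sigma(\mu_1,\ldots,\mu_n,\gamma_1,\ldots,\gamma_n)=(-1)^{\#\{(i,j)\,:\,\mu_i>\gamma_j\}}$ and the two counts add up to $n^2$, this agrees with the conjectured sign $(-1)^{n(n+1)/2}\sigma(\mu_1,\ldots,\mu_n,\gamma_1,\ldots,\gamma_n)$.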

Best Answer

I have a proof if all the $\mu$'s are larger than all of the $\gamma$'s. I've spent a while trying to figure out how to modify this proof to work for other orderings, but I'm giving up. This argument is inspired by a computation of the Cauchy determinant by Doron Zeilberger (skip to the 25-minute mark).

We start with the identity $$e^{\mu B} \int_B^{\infty} e^{- \mu t} (e^{\gamma t} + e^{-\gamma t}) dt = \frac{e^{-B \gamma}}{\mu+\gamma} + \frac{e^{B \gamma}}{\mu-\gamma}.$$ Note that we need $\mu > \gamma$ in order for the integral to converge; that is why I can't extend this proof to any other ordering of the $\mu$'s and $\gamma$'s.
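For reference, both terms are elementary: for either choice of sign,
$$e^{\mu B}\int_B^{\infty}e^{-(\mu\mp\gamma)t}\,dt=e^{\mu B}\,\frac{e^{-(\mu\mp\gamma)B}}{\mu\mp\gamma}=\frac{e^{\pm\gamma B}}{\mu\mp\gamma},$$
which requires $\mu\mp\gamma>0$ for convergence; with $\gamma>0$ only the condition $\mu>\gamma$ is a real restriction, and adding the two terms gives the identity above.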

So the determinant is $$\det \left( e^{\mu_i B} \int_{t_i=B}^{\infty} e^{-\mu_i t_i} (e^{\gamma_j t_i} + e^{- \gamma_j t_i}) d t_i \right).$$ Note that we are using $n$ different integration variables -- one for each row of the matrix. By the multilinearity of the determinant, this is $$\exp\left(\sum B \mu_i \right) \times \int_{t_1=B}^{\infty} \int_{t_2=B}^{\infty} \cdots \int_{t_n=B}^{\infty} \exp\left( - \sum \mu_i t_i \right) \det \left( e^{\gamma_j t_i} + e^{- \gamma_j t_i} \right) dt_1 dt_2 \cdots dt_n.$$

Note that permuting $(t_1, \ldots, t_n)$ only changes $ \det \left( e^{\gamma_j t_i} + e^{- \gamma_j t_i} \right)$ by a sign, but changes $\exp\left(-\sum \mu_i t_i\right)$ to $\exp\left(-\sum \mu_{\sigma(i)} t_i\right)$ for some permutation $\sigma$. Lumping together all $n!$ reorderings of the $t_i$, the integral is $$\int_{B \leq t_1 \leq \cdots \leq t_n} \sum_{\sigma \in S_n} (-1)^{\sigma} \exp\left(-\sum t_i \mu_{\sigma(i)}\right) \det \left( e^{\gamma_j t_i} + e^{- \gamma_j t_i} \right) dt_1 \cdots dt_n $$ $$= \int_{B \leq t_1 \leq \cdots \leq t_n} \det\left(e^{-\mu_j t_i}\right) \det \left( e^{\gamma_j t_i} + e^{- \gamma_j t_i} \right) dt_1 \cdots dt_n.$$

We claim that the integrand has constant sign $(-1)^{n(n-1)/2}$ for $0 < t_1 < \cdots < t_n$, so the integral, and hence the determinant, has sign $(-1)^{n(n-1)/2}$. This matches the conjecture: here $\sigma(\mu_1, \ldots, \mu_n, \gamma_1, \ldots, \gamma_n) = (-1)^{n^2} = (-1)^n$ (each $\gamma_j$ must move past each $\mu_i$), and $(-1)^{n(n+1)/2}(-1)^n = (-1)^{n(n-1)/2}$ since the exponents differ by $2n$. For the first factor of the integrand, reversing the order of the columns of $\left(e^{-\mu_j t_i}\right)$ costs a factor of $(-1)^{n(n-1)/2}$ and leaves a matrix of the same form with increasing exponents $-\mu_n < \cdots < -\mu_1$ (the Lemma below makes no positivity assumption on the exponents). So we are done once we prove:

Lemma If $\mu_1 < \mu_2 < \cdots < \mu_n$ and $\gamma_1 < \gamma_2 < \cdots < \gamma_n$ and $t_1 < t_2 < \cdots < t_n$, then $$\det \left( e^{\mu_j t_i} \right) \ \mbox{and} \ \det \left( e^{\gamma_j t_i} + e^{- \gamma_j t_i} \right)$$ are positive.

Proof It is enough to show that these determinants don't vanish anywhere in this range, since they must then have constant sign, and it is easy to check that the correct sign is positive. If the first determinant vanished, then there would be some nonzero function $\sum a_j z^{\mu_j}$ which vanished at $z = e^{t_1}$, $e^{t_2}$, ..., $e^{t_n}$. But then this is a "polynomial with real exponents" having $n$ nonzero terms and $n$ positive roots, contradicting Descartes' rule of signs. (I wrote out a proof of Descartes' rule of signs for real exponents here.)

Similarly, if the second determinant vanishes, then we get a real-exponent polynomial $\sum b_j (z^{\gamma_j} + z^{-\gamma_j})$ with $2n$ nonzero terms and roots at the $2n$ distinct positive points $e^{\pm t_i}$. Again, a contradiction.
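A quick numerical sanity check of the Lemma (a Python/NumPy sketch with randomly chosen increasing exponents and $t$'s; a spot check only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)

def lemma_check(n=5, trials=500):
    """Count apparent sign violations of the Lemma for random increasing parameters."""
    bad = 0
    for _ in range(trials):
        t = np.sort(rng.uniform(0.1, 2.0, n))    # 0 < t_1 < ... < t_n
        mu = np.sort(rng.uniform(0.0, 3.0, n))   # increasing exponents
        g = np.sort(rng.uniform(0.1, 3.0, n))
        V1 = np.exp(np.outer(t, mu))                          # (i, j) entry exp(mu_j t_i)
        V2 = np.exp(np.outer(t, g)) + np.exp(-np.outer(t, g)) # (i, j) entry exp(g_j t_i) + exp(-g_j t_i)
        if np.linalg.slogdet(V1)[0] <= 0 or np.linalg.slogdet(V2)[0] <= 0:
            bad += 1      # near-coincident parameters can trigger roundoff artifacts
    return bad

print(lemma_check(), "apparent sign violations (expect 0, up to roundoff)")
```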
