[Math] Some Hankel Determinants

determinants, hankel-matrices, linear-algebra, matrices

After invoking a recursion relation for Hankel determinants in my answer to a (mostly unrelated) question, I started wondering what else I could use this recursion for, and stumbled upon some results that surprised me. The proofs are purely computational, and I'm hoping someone can provide a more conceptual understanding.

So: Let $f(m)$ be a function defined on positive integers. The corresponding Hankel matrices are:

$${\cal H}(m,n)=\pmatrix{ f(m)&f(m+1)&\ldots&f(m+n-1)\cr
f(m+1)&f(m+2)&\ldots &f(m+n)\cr
\vdots&\vdots&&\vdots\cr
f( m+n-1)&f( m+n)&\ldots&f(m+2n-2)\cr}$$
and the Hankel determinants are
$H(m,n)=\det({\cal H}(m,n))$.
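These are easy to experiment with exactly. Here is a minimal sketch in Python (the helper name `hankel_det` is mine), using the $n\times n$ convention ${\cal H}(m,n)_{i,j}=f(m+i+j)$ for $0\le i,j\le n-1$, which is the convention under which the closed forms below hold:

```python
from fractions import Fraction

def hankel_det(f, m, n):
    """Determinant of the n-by-n Hankel matrix with entries f(m+i+j),
    0 <= i, j <= n-1, via exact fraction-arithmetic Gaussian elimination."""
    A = [[Fraction(f(m + i + j)) for j in range(n)] for i in range(n)]
    det = Fraction(1)
    for col in range(n):
        # find a nonzero pivot in this column
        piv = next((r for r in range(col, n) if A[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            A[col], A[piv] = A[piv], A[col]
            det = -det  # a row swap flips the sign
        det *= A[col][col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c2 in range(col, n):
                A[r][c2] -= factor * A[col][c2]
    return det

# the 2x2 Hilbert-type case f(m) = 1/m, starting at m = 1:
print(hankel_det(lambda m: Fraction(1, m), 1, 2))  # -> 1/12
```

`Fraction` keeps everything exact, so the determinants of these badly conditioned matrices come out as exact rationals rather than floating-point noise.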

In particular, set:
$$H(m,n)=\hbox{ the Hankel determinant associated to $f(m)=1/m$}$$
$$J(m,n)=\hbox{ the Hankel determinant associated to $f(m)=m!$}$$
$$K(m,n)=\hbox{ the Hankel determinant associated to $f(m)=1/m!$}$$

Also, let $c(n)=\prod_{i=1}^{n-1}i!$, with the convention that $c(0)=c(1)=1$ (empty products).

Then I can show that:
$$H(m,n)={c(m+n-1)^2c(n)^2\over c(m-1)c(m+2n-1)}$$
$$J(m,n)={c(n)c(m+n)\over c(m)}$$
$$K(m,n)=\pm{c(n)c(m+n-1)\over c(m+2n-1)}$$
where the $\pm$ sign on $K(m,n)$ is $+$ if $n$ is a square mod $4$ (that is, $n\equiv 0,1\pmod 4$) and $-$ otherwise.
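All three closed forms (and the sign rule) can be checked exactly for small $m,n$. A sketch in Python with exact rationals (helper names mine, $n\times n$ convention as above):

```python
from fractions import Fraction
from math import factorial

def c(n):
    # c(n) = 1! 2! ... (n-1)!, with c(0) = c(1) = 1 (empty products)
    out = 1
    for i in range(1, n):
        out *= factorial(i)
    return out

def det(A):
    # exact determinant by Laplace expansion (fine for tiny matrices)
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def hankel(f, m, n):
    # n x n matrix with entries f(m+i+j), 0 <= i, j <= n-1
    return [[f(m + i + j) for j in range(n)] for i in range(n)]

for m in range(1, 4):
    for n in range(1, 4):
        H = det(hankel(lambda k: Fraction(1, k), m, n))
        J = det(hankel(lambda k: Fraction(factorial(k)), m, n))
        K = det(hankel(lambda k: Fraction(1, factorial(k)), m, n))
        assert H == Fraction(c(m + n - 1) ** 2 * c(n) ** 2,
                             c(m - 1) * c(m + 2 * n - 1))
        assert J == Fraction(c(n) * c(m + n), c(m))
        assert abs(K) == Fraction(c(n) * c(m + n - 1), c(m + 2 * n - 1))
        # sign of K is + exactly when n is a square mod 4, i.e. n = 0,1 (mod 4)
        assert (K > 0) == (n % 4 in (0, 1))
print("all three closed forms check out for m, n <= 3")
```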

These calculations immediately yield two theorems, each of which seems to cry out for a deeper explanation:

Theorem 1. $H(m,n)=\pm J(m-1,n)K(m,n)$, with the same sign as in the formula for $K(m,n)$.

Theorem 2. The $H(m,n)$ and $K(m,n)$ are all reciprocals of integers.
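Both theorems are easy to spot-check with exact rationals. A sketch (comparing absolute values in the first check, since $K(m,n)$ carries a $\pm$ sign; helper names mine):

```python
from fractions import Fraction
from math import factorial

def det(A):
    # exact determinant by Laplace expansion (fine for tiny matrices)
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def hankel(f, m, n):
    # n x n matrix with entries f(m+i+j), 0 <= i, j <= n-1
    return [[f(m + i + j) for j in range(n)] for i in range(n)]

for m in range(2, 5):
    for n in range(1, 4):
        H = det(hankel(lambda k: Fraction(1, k), m, n))
        J = det(hankel(lambda k: Fraction(factorial(k)), m - 1, n))
        K = det(hankel(lambda k: Fraction(1, factorial(k)), m, n))
        assert abs(H) == abs(J * K)      # Theorem 1, up to sign
        assert (1 / H).denominator == 1  # Theorem 2: 1/H is an integer
        assert (1 / K).denominator == 1  # Theorem 2: 1/K is an integer
print("Theorems 1 and 2 hold for 2 <= m <= 4, n <= 3")
```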

Question: Aside from the fact that they fall out of a calculation, why should we expect these theorems to be true?

For example, for Theorem 1, is there some natural interpretation of the product of two Hankel matrices that puts this in context? Or for Theorem 2, are the reciprocals of the $H(m,n)$ and $K(m,n)$ counting something?

Remark 1: Theorem 2 is certainly well-known for the $H(1,n)$, which are the determinants of the Hilbert matrices. I'm not sure whether it's well-known for the rest of the $H(m,n)$ or for the $K(m,n)$.

Remark 2: Theorem 2 would follow from the stronger statement that the Hankel matrices ${\cal H}(m,n)$ and ${\cal K}(m,n)$ (that is, the matrices of which the $H(m,n)$ and $K(m,n)$ are the determinants) have inverses with all integer entries. This, again, is well-known for $H(1,n)$ at least. I have both the outline of an argument and extensive numerical evidence to suggest that it is true for all the ${\cal H}(m,n)$ and all the ${\cal K}(m,n)$ but I fear that even if the argument pans out, it won't provide much conceptual understanding.
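The stronger integrality statement of Remark 2 can also be probed numerically. A sketch that inverts the matrices exactly over the rationals and checks every entry (helper names mine; this is evidence for small $m,n$, not a proof):

```python
from fractions import Fraction
from math import factorial

def inverse(A):
    """Exact Gauss-Jordan inverse of a matrix of Fractions."""
    n = len(A)
    M = [list(row) + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

def hankel(f, m, n):
    return [[f(m + i + j) for j in range(n)] for i in range(n)]

for m in range(1, 4):
    for n in range(1, 4):
        for f in (lambda k: Fraction(1, k), lambda k: Fraction(1, factorial(k))):
            inv = inverse(hankel(f, m, n))
            # Remark 2: every entry of the inverse should be an integer
            assert all(x.denominator == 1 for row in inv for x in row)
print("inverses are integral for all tested m, n")
```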

Best Answer

After comparing Steven's original example with his new example, I believe I have an interesting generalization of both. Let $c \in \mathbb{C}$. Define the following three functions: $$h(m)=\frac{1}{m-1+c},k(m) = \frac{1}{\Gamma(m+c)},$$ $$j(m)=\frac{h(m+1)}{k(m+1)}=\Gamma(m+c).$$

Let $\mathcal{H}(m,n),\mathcal{J}(m,n),\mathcal{K}(m,n)$ be the Hankel matrices corresponding to $h,j,k$, respectively. Let $H(m,n),J(m,n),K(m,n)$ be the respective determinants. Then the following holds: $$H(m,n) =\pm J(m-1,n) K(m,n).$$

  • When $c=1$, we recover the example from the original post.
  • When $c=\frac{1}{2}$, we recover Steven's second example, from his answer to the post (up to some normalization).
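The identity is easy to confirm numerically for non-integer $c$ as well. A floating-point sketch (helper names mine; I use the $(n+1)\times(n+1)$ indexing $0\le i,j\le n$ from the proof below, and compare absolute values because of the $\pm$):

```python
from math import gamma, isclose

def det(A):
    # Laplace expansion; fine for the tiny matrices used here
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def hankel(f, m, n):
    # (n+1) x (n+1) matrix, entries f(m+i+j) for 0 <= i, j <= n
    return [[f(m + i + j) for j in range(n + 1)] for i in range(n + 1)]

for c in (1.0, 0.5, 1 / 3):
    for m in range(1, 4):
        for n in range(0, 3):
            H = det(hankel(lambda t: 1 / (t - 1 + c), m, n))
            J = det(hankel(lambda t: gamma(t + c), m - 1, n))
            K = det(hankel(lambda t: 1 / gamma(t + c), m, n))
            assert isclose(abs(H), abs(J * K), rel_tol=1e-6)
print("H = ±J·K verified numerically for several c, m, n")
```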

I now turn to proving the generalization.

Lemma 1: Let $V$ be an inner-product space. Let $\{v_i\}_{i=1}^{n}$ and $\{\tilde{v}_i\}_{i=1}^{n}$ be two sequences of linearly independent vectors in $V$, spanning the same subspace. Similarly, let $\{u_i\}_{i=1}^{n}$ and $\{\tilde{u}_i\}_{i=1}^{n}$ be two sequences of linearly independent vectors in $V$, spanning the same subspace.

There exist unique scalars $x_{i,j}, y_{i,j}$ such that $\tilde{v}_i = \sum x_{i,k} v_k$ and $\tilde{u}_i = \sum y_{i,k} u_k$.

Define the following 4 matrices:

  • $A_{i,j} = \langle \tilde{v}_i, \tilde{u}_j \rangle$, $B_{i,j} = \langle v_i, u_j \rangle$
  • $X_{i,j} = x_{i,j}, Y_{i,j} = y_{i,j}$

Then $$A = X B Y^{T}.$$
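Lemma 1 is just bilinearity of the inner product written in matrix form; a random numerical check makes this concrete (a sketch with the standard dot product on $\mathbb{R}^d$; all names mine):

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

random.seed(0)
d, n = 4, 3
# random vectors in R^d, with the standard dot product
v = [[random.random() for _ in range(d)] for _ in range(n)]
u = [[random.random() for _ in range(d)] for _ in range(n)]
# change-of-basis coefficients: v~_i = sum_k X[i][k] v_k, u~_i = sum_k Y[i][k] u_k
X = [[random.random() for _ in range(n)] for _ in range(n)]
Y = [[random.random() for _ in range(n)] for _ in range(n)]
vt = matmul(X, v)
ut = matmul(Y, u)

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
A = [[dot(vt[i], ut[j]) for j in range(n)] for i in range(n)]  # Gram of tilde'd vectors
B = [[dot(v[i], u[j]) for j in range(n)] for i in range(n)]    # Gram of originals
XBYt = matmul(matmul(X, B), transpose(Y))
assert all(abs(A[i][j] - XBYt[i][j]) < 1e-9
           for i in range(n) for j in range(n))
print("A = X B Y^T confirmed on a random example")
```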

Lemma 2: $\det\left(\binom{i+j+m}{i}\right)_{0\le i,j \le n} = 1$ for any $m \in \mathbb{C}$ (with $\binom{x}{i}$ the generalized binomial coefficient).
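Lemma 2 can be spot-checked for rational $m$ using the product form $\binom{i+j+m}{i}=\prod_{t=1}^{i}\frac{j+m+t}{t}$ of the generalized binomial coefficient (a sketch; helper names mine):

```python
from fractions import Fraction

def genbinom(top, k):
    # generalized binomial coefficient: prod_{t=1}^{k} (top - k + t) / t
    out = Fraction(1)
    for t in range(1, k + 1):
        out *= (top - k + t) / Fraction(t)
    return out

def det(A):
    # exact determinant by Laplace expansion (fine for tiny matrices)
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

for m in (0, 2, Fraction(1, 2), Fraction(-3, 7)):
    for n in range(0, 4):
        M = [[genbinom(i + j + m, i) for j in range(n + 1)]
             for i in range(n + 1)]
        assert det(M) == 1
print("Lemma 2 verified for several rational m and n <= 3")
```

Since the determinant is a polynomial in $m$ that equals $1$ at infinitely many values, checking rational $m$ is already strong evidence for the claim at all $m \in \mathbb{C}$.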

We work in the inner-product space $L^2([0,1])$. Note that $$\mathcal{H}(m,n)_{i,j} = \langle x^{i+m+c-2}, x^{j} \rangle$$ and that $$\mathcal{K}(m,n)_{i,j} = \langle \frac{x^{i+m+c-2}}{\Gamma(i+m-1+c)}, \frac{(1-x)^j}{\Gamma(j+1)} \rangle.$$

We may therefore apply Lemma 1 with $A=\mathcal{H}(m,n)$ and $B=\mathcal{K}(m,n)$, which yields $$\mathcal{H}(m,n)= \text{Diag}((\Gamma(i+m-1+c))_{0\le i\le n}) \times \mathcal{K}(m,n) \times \left(\Gamma(i+1)\binom{j}{i}(-1)^i\right)_{0\le i,j \le n},$$ hence $$(*)\quad \frac{H(m,n)}{K(m,n)} = \det \text{Diag}((\Gamma(i+m-1+c))_{0\le i\le n}) \cdot \det \left(\Gamma(i+1)\binom{j}{i}(-1)^i\right)_{0\le i,j \le n}.$$
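The matrix factorization itself can be confirmed entrywise at sample parameters (a floating-point sketch; the choice $c=\frac12$, $m=n=2$ and all helper names are mine):

```python
from math import gamma, comb, isclose

c, m, n = 0.5, 2, 2  # sample parameters; any c for which the entries are defined works

# the four matrices in the displayed factorization, indices 0 <= i, j <= n
H = [[1 / (m + i + j - 1 + c) for j in range(n + 1)] for i in range(n + 1)]
K = [[1 / gamma(m + i + j + c) for j in range(n + 1)] for i in range(n + 1)]
D = [[gamma(i + m - 1 + c) if i == j else 0.0 for j in range(n + 1)]
     for i in range(n + 1)]
M = [[gamma(i + 1) * comb(j, i) * (-1) ** i for j in range(n + 1)]
     for i in range(n + 1)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

DKM = matmul(matmul(D, K), M)
assert all(isclose(H[i][j], DKM[i][j], rel_tol=1e-9)
           for i in range(n + 1) for j in range(n + 1))
print("H = Diag x K x (Gamma(i+1) C(j,i) (-1)^i) confirmed numerically")
```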

On the other hand, $$\mathcal{J}(m-1,n) = (\Gamma(m-1+c+i+j))_{0 \le i,j \le n} = \text{Diag}((\Gamma(i+m-1+c))_{0\le i\le n}) \left(\binom{m-2+i+j+c}{j}\right)_{0 \le i,j \le n} \text{Diag}((\Gamma(j+1))_{0\le j\le n}),$$ which, combined with Lemma 2, yields $$(**)\quad J(m-1,n) = \det \text{Diag}((\Gamma(i+m-1+c))_{0\le i\le n}) \cdot \det\text{Diag}((\Gamma(j+1))_{0\le j\le n}).$$ We finish by noting that $$\det \left(\Gamma(i+1)\binom{j}{i}(-1)^i\right)_{0\le i,j \le n} = (-1)^{\binom{n+1}{2}} \prod_{i=0}^{n} \Gamma(i+1) = (-1)^{\binom{n+1}{2}} \det \text{Diag}((\Gamma(j+1))_{0\le j\le n}),$$ which implies that $(*)$ and $(**)$ coincide up to sign. $\blacksquare$
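The final determinant evaluation is just the observation that the matrix is upper triangular with diagonal entries $i!(-1)^i$, but it is cheap to verify directly (a sketch; helper names mine):

```python
from math import comb, factorial

def det(A):
    # determinant by Laplace expansion (fine for tiny matrices)
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

for n in range(0, 5):
    # the matrix (Gamma(i+1) * C(j, i) * (-1)^i), 0 <= i, j <= n;
    # comb(j, i) vanishes for i > j, so it is upper triangular
    M = [[factorial(i) * comb(j, i) * (-1) ** i for j in range(n + 1)]
         for i in range(n + 1)]
    expected = (-1) ** (n * (n + 1) // 2)  # = (-1)^binom(n+1, 2)
    for i in range(n + 1):
        expected *= factorial(i)
    assert det(M) == expected
print("sign identity verified for n <= 4")
```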
