What explains this $n \times n$ determinant pattern?

Tags: combinatorial-proofs, combinatorics, determinant, linear-algebra

$\mathbf{SETUP}$

I've been studying a physics model that involves inverting an $n \times n$ matrix of this form:

$$
\mathbf{A}^{-1}_n(x)=
\begin{pmatrix}
1 & -x & -x & -x & \cdots \\
-x & 1 & -x & -x & \cdots \\
-x & -x & 1 & -x & \cdots \\
-x & -x & -x & 1 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}^{-1}
$$

which made me interested in when it is not invertible, i.e. $\det(\mathbf{A}_n(x)) = 0$.

I managed to derive a nice reason why
$$
\det\bigl(\mathbf{A}_n(x=\tfrac{1}{n-1})\bigr) = 0,
$$

namely that the determinant is given by the polynomial

$$
\det(\mathbf{A}_{n+1}(x)) = (x+1)^n (1-nx).
$$

(Try it! It works 🙂 Incidentally, is this a standard result in some textbook?)
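For anyone who wants to try it numerically, here is a quick NumPy sketch (the helper name `A_matrix` is just made up for this check):

```python
# Numerical check of det(A_{n+1}(x)) = (x+1)^n (1 - n x) for a few sizes and random x.
import numpy as np

def A_matrix(m, x):
    """m x m matrix with 1 on the diagonal and -x everywhere else."""
    return (1 + x) * np.eye(m) - x * np.ones((m, m))

rng = np.random.default_rng(0)
for n in range(2, 7):
    x = rng.uniform(-2.0, 2.0)
    lhs = np.linalg.det(A_matrix(n + 1, x))
    rhs = (x + 1) ** n * (1 - n * x)
    assert np.isclose(lhs, rhs), (n, x, lhs, rhs)
print("det(A_{n+1}(x)) = (x+1)^n (1 - n x) holds numerically for n = 2..6")
```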

$\mathbf{QUESTION}$

But then, I playfully decided to set a single off-diagonal matrix entry to zero, keeping all the other off-diagonal entries at $\tfrac{1}{1-n}$, and stumbled upon this other pattern:

$$
\det
\begin{pmatrix}
1 & 0 & \tfrac{1}{1-n} & \tfrac{1}{1-n} & \cdots \\
\tfrac{1}{1-n} & 1 & \tfrac{1}{1-n} & \tfrac{1}{1-n} & \cdots \\
\tfrac{1}{1-n} & \tfrac{1}{1-n} & 1 & \tfrac{1}{1-n} & \cdots \\
\tfrac{1}{1-n} & \tfrac{1}{1-n} & \tfrac{1}{1-n} & 1 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}
=
\frac{n^{n-2}}{(n-1)^n}
$$

(The above works regardless of which off-diagonal entry you choose to set to zero, of course.)
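A quick way to check this pattern numerically (again a NumPy sketch; `M_zeroed` is just a throwaway helper name, and the zeroed entry defaults to position $(1,2)$):

```python
# Check that zeroing one off-diagonal entry of the matrix with 1 on the diagonal
# and 1/(1-n) elsewhere gives determinant n^(n-2) / (n-1)^n.
import numpy as np

def M_zeroed(n, i=0, j=1):
    """n x n matrix: 1 on the diagonal, 1/(1-n) off the diagonal, entry (i, j) set to 0."""
    M = np.full((n, n), 1.0 / (1 - n))
    np.fill_diagonal(M, 1.0)
    M[i, j] = 0.0
    return M

for n in range(3, 8):
    lhs = np.linalg.det(M_zeroed(n))
    rhs = n ** (n - 2) / (n - 1) ** n
    assert np.isclose(lhs, rhs), (n, lhs, rhs)
print("det = n^(n-2) / (n-1)^n confirmed numerically for n = 3..7")
```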

Can anyone help me understand why? Might this be related to MacMahon's Master Theorem? Or is there some more obvious way to think about this result?

Best Answer

Let $B$ be the $n\times n$ matrix (with $n>2$) whose entries are all ones except the entry at row $1$, column $2$ (as in the question), which is zero. Then it is easy to verify that $B^3=nB^2-B$ (see below). Since the eigenvalue $0$ of $B$ is obviously of multiplicity $n-2$ (it's easy to exhibit the eigenvectors: the differences $e_i-e_j$ of standard basis vectors with $i,j\neq 2$ span the $(n-2)$-dimensional kernel), we get $$\det(\lambda I-B)=\lambda^{n-2}(\lambda^2-n\lambda+1).$$

The matrix in the question is $(nI-B)/(n-1)$, so putting $\lambda=n$ gives
$$
\det\Bigl(\frac{nI-B}{n-1}\Bigr)=\frac{\det(nI-B)}{(n-1)^n}=\frac{n^{n-2}\,(n^2-n\cdot n+1)}{(n-1)^n}=\frac{n^{n-2}}{(n-1)^n},
$$
which is the expected result.
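For a concrete sanity check of this step, here is a small NumPy sketch (variable names are ad hoc) confirming that $(nI-B)/(n-1)$ has the shape of the matrix in the question and that $\det(nI-B)=n^{n-2}$:

```python
# Check that (nI - B)/(n-1) matches the matrix in the question and that
# det(nI - B) = n^(n-2), i.e. the characteristic polynomial evaluated at lambda = n.
import numpy as np

for n in range(3, 8):
    B = np.ones((n, n))
    B[0, 1] = 0.0                       # all ones except the entry at row 1, column 2
    M = (n * np.eye(n) - B) / (n - 1)   # should be the matrix from the question
    assert np.allclose(np.diag(M), 1.0)                  # ones on the diagonal
    assert M[0, 1] == 0.0                                # the zeroed entry
    assert np.isclose(np.linalg.det(n * np.eye(n) - B), n ** (n - 2))
    assert np.isclose(np.linalg.det(M), n ** (n - 2) / (n - 1) ** n)
print("det((nI - B)/(n-1)) = n^(n-2) / (n-1)^n confirmed for n = 3..7")
```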


To verify $B^3=nB^2-B$, introduce the following $n\times n$ matrices:

  • $S$ — the "all ones" matrix: $S_{i,j}=1$;
  • $R$ — the matrix whose first row is all ones (and all other entries zero): $R_{i,j}=[i=1]$;
  • $C$ — the matrix whose second column is all ones (and all other entries zero): $C_{i,j}=[j=2]$;
  • $U$ — the matrix with the only nonzero entry at $(1,2)$: $U_{i,j}=[i=1\land j=2]$.

Now we build a multiplication table:
\begin{align}
SS &= nS & SR &= S & SC &= nC & SU &= C \\
RS &= nR & RR &= R & RC &= nU & RU &= U \\
CS &= S & CR &= 0 & CC &= C & CU &= 0 \\
US &= R & UR &= 0 & UC &= U & UU &= 0
\end{align}

Since $B=S-U$, we get
\begin{align}
B^2 &= SS-US-SU+UU \\
&= nS-R-C; \\
B^3 &= (nS-R-C)(S-U) \\
&= nSS-RS-CS-nSU+RU+CU \\
&= n^2S-nR-S-nC+U \\
&= n(nS-R-C)-(S-U) \\
&= nB^2-B.
\end{align}
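And a short numerical confirmation of the identity $B^3=nB^2-B$ itself, independent of the algebra above (a NumPy sketch, not part of the proof):

```python
# Confirm B^3 = n B^2 - B for the matrix B of all ones with a zero at position (1, 2).
import numpy as np

for n in range(3, 8):
    B = np.ones((n, n))
    B[0, 1] = 0.0
    assert np.allclose(B @ B @ B, n * (B @ B) - B)
print("B^3 = n B^2 - B confirmed for n = 3..7")
```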