Linear Algebra – Discovering a Unique Matrix Property

Tags: linear algebra, matrices

I encountered the following weird matrix property.

Consider any general matrix $M_{n\times n}$ with the property that the sum of each column vanishes, that is

\begin{align} \sum^n_{j=1} M_{ji} =0 \qquad \text{for every } i. \end{align}

Denoting

  • $M_{(1)}$ : the matrix obtained from $M$ by removing the first column and row,
  • $M_{(2)}$ : the matrix obtained from $M$ by removing the second column and row,
  • $M_{(1,2)}$ : the matrix obtained from $M$ by removing the first and second columns and rows,
  • $u_{(k)}=(1,\dots,1)$ : the row vector of length $k$ with all entries equal to $1$,
  • $C_{(n-1)\times (n-2)}=\begin{pmatrix} 0 \;\dots\; 0 \\ 1_{(n-2)\times (n-2)} \end{pmatrix}$, where $1_{(n-2)\times (n-2)}$ is the identity matrix.

Define $p_1$ and $p_2$ as
$$ p_1 = u_{(n-1)} \cdot\big(M_{(1)}\big)^{-1}\cdot\begin{pmatrix} 1 \\ 0 \\ \vdots\\0
\end{pmatrix}_{ (n-1)\times1}, \quad p_2 = u_{(n-1)} \cdot\big(M_{(2)}\big)^{-1}\cdot\begin{pmatrix} 1 \\ 0 \\ \vdots\\0
\end{pmatrix}_{ (n-1)\times1}. $$

Prove that all the elements of the row vector $$ u_{(n-1)} \cdot \left(p_2\big(M_{(1)}\big)^{-1}+p_1\big(M_{(2)}\big)^{-1}\right)\cdot C - (p_1+p_2)u_{(n-2)}\cdot \big(M_{(1,2)}\big)^{-1} $$ are identical.

This property comes from some intuition about a problem that I have been playing with. I have also tested it numerically on a large set of matrices $M$ satisfying the first requirement.
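For readers who want to reproduce such a test, here is a minimal NumPy sketch of one way it could be set up (not the original poster's code; the size $n$, the random seed, and the helper `drop` are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Random matrix whose columns each sum to zero (the stated hypothesis).
M = rng.standard_normal((n, n))
M -= M.mean(axis=0)                       # subtracting the column means makes every column sum to 0

def drop(A, rows, cols):
    """Return A with the given (0-based) rows and columns removed."""
    keep_r = [i for i in range(A.shape[0]) if i not in rows]
    keep_c = [j for j in range(A.shape[1]) if j not in cols]
    return A[np.ix_(keep_r, keep_c)]

M1, M2, M12 = drop(M, {0}, {0}), drop(M, {1}, {1}), drop(M, {0, 1}, {0, 1})

u  = np.ones(n - 1)                       # u_(n-1)
e1 = np.zeros(n - 1); e1[0] = 1.0         # the column (1, 0, ..., 0)^T
C  = np.vstack([np.zeros((1, n - 2)), np.eye(n - 2)])   # the matrix C of the statement

p1 = u @ np.linalg.inv(M1) @ e1
p2 = u @ np.linalg.inv(M2) @ e1

row = (u @ (p2 * np.linalg.inv(M1) + p1 * np.linalg.inv(M2)) @ C
       - (p1 + p2) * np.ones(n - 2) @ np.linalg.inv(M12))
print(row)                                # all entries should coincide up to rounding
print(np.allclose(row, row[0]))
```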

(I thank user1551 for spotting an important typo, corrected now!)

I have tried writing the inverses in terms of minors, but it does not seem to help, as it is not easy to impose the requirement that $\sum_{j} M_{ji}=0$.
Any comment/suggestion is greatly appreciated. Answers are of course the best! Thank you so much!

Best Answer

I show below that the "identical elements" are all equal to $p_1p_2$ (which is confirmed by the example in the now-deleted answer). Let us put

$$ D=\bar{M}_{(2)}^{-1}=(d_{ij})_{1\leq i,j \leq n-1}, \quad E=\bar{M}_{(1)}^{-1}=(e_{ij})_{1\leq i,j \leq n-1}, \quad F=\bar{M}_{(1,2)}^{-1}=(f_{ij})_{2\leq i,j \leq n-1}, \tag{1} $$ where $\bar{M}_{(1)}, \bar{M}_{(2)}, \bar{M}_{(1,2)}$ are the submatrices $M_{(1)}, M_{(2)}, M_{(1,2)}$ of the question (notice the ranges of the indices; the convention I chose is perhaps not the most logical, but I find it the most convenient).

Then both $D$ and $E$ have the property that their inverse has $F^{-1}$ as its lower-right $(n-2)\times(n-2)$ block: indeed $D^{-1}=\bar{M}_{(2)}$ and $E^{-1}=\bar{M}_{(1)}$, and deleting their first row and column leaves $\bar{M}_{(1,2)}=F^{-1}$ in both cases. Using the Schur complement formula, we deduce that $D$ and $E$ are of the form

$$ \begin{array}{lcl} D&=&\left( \begin{array}{c|c} d & R_D \\ \hline C_D & \frac{1}{d}C_DR_D+F \end{array} \right),\\ E&=&\left( \begin{array}{c|c} e & R_E \\ \hline C_E & \frac{1}{e}C_ER_E+F \end{array} \right) \end{array} \tag{2} $$ Here $d=d_{11}$ and $e=e_{11}$, while $R_D$, $C_D$ (resp. $R_E$, $C_E$) denote the first row and the first column of $D$ (resp. $E$) with the corner entry removed.
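As a quick sanity check on (2), one can verify numerically that the lower-right block of $D=\bar{M}_{(2)}^{-1}$ is indeed $\frac{1}{d}C_DR_D+F$; a small NumPy sketch (the size, the seed, and the helper `idx` are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
M -= M.mean(axis=0)                                   # columns sum to zero, as in the question

idx = lambda *gone: [i for i in range(n) if i not in gone]
D = np.linalg.inv(M[np.ix_(idx(1), idx(1))])          # D = Mbar_(2)^{-1}
F = np.linalg.inv(M[np.ix_(idx(0, 1), idx(0, 1))])    # F = Mbar_(1,2)^{-1}

d, R_D, C_D = D[0, 0], D[0:1, 1:], D[1:, 0:1]         # corner entry, first row block, first column block
# Claim (2): the lower-right block of D equals (1/d) C_D R_D + F
print(np.allclose(D[1:, 1:], C_D @ R_D / d + F))      # expect True
```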

And we also have closed forms for their inverses:

$$ \begin{array}{lcl} D^{-1} &=& \left( \begin{array}{c|c} \frac{1}{d}\left(1+\frac{1}{d}R_DF^{-1}C_D\right) & -\frac{1}{d}R_DF^{-1} \\ \hline -\frac{1}{d}F^{-1}C_D & F^{-1} \end{array} \right), \\ E^{-1} &=& \left( \begin{array}{c|c} \frac{1}{e}\left(1+\frac{1}{e}R_EF^{-1}C_E\right) & -\frac{1}{e}R_EF^{-1} \\ \hline -\frac{1}{e}F^{-1}C_E & F^{-1} \end{array} \right) \end{array} \tag{3} $$
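The block-inverse formula (3) is easy to test in isolation; a minimal sketch with arbitrary invertible blocks of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 5                                        # size of D in this toy example
d = 2.0
R = rng.standard_normal((1, m - 1))          # plays the role of R_D
C = rng.standard_normal((m - 1, 1))          # plays the role of C_D
F = rng.standard_normal((m - 1, m - 1))      # plays the role of F (generically invertible)

D = np.block([[np.array([[d]]), R],
              [C, C @ R / d + F]])           # a matrix of the form (2)

Finv = np.linalg.inv(F)
Dinv = np.block([[(1 + R @ Finv @ C / d) / d, -R @ Finv / d],
                 [-Finv @ C / d,              Finv]])          # formula (3)
print(np.allclose(np.linalg.inv(D), Dinv))   # expect True
```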

We can then rewrite the initial matrix $M$:

$$ M=\left( \begin{array}{c|c|c} \frac{1}{d}\left(1+\frac{1}{d}R_DF^{-1}C_D\right) & m_{12} & -\frac{1}{d}R_DF^{-1} \\ \hline m_{21} & \frac{1}{e}\left(1+\frac{1}{e}R_EF^{-1}C_E\right) & -\frac{1}{e}R_EF^{-1} \\ \hline -\frac{1}{d}F^{-1}C_D & -\frac{1}{e}F^{-1}C_E & F^{-1} \end{array} \right) \tag{4} $$

We can now interpret the hypothesis that the columns of $M$ have zero sum. The first two columns are not interesting, since $m_{12}$ and $m_{21}$ can be arbitrary. But the other columns tell us that $(-\frac{1}{d}R_D-\frac{1}{e}R_E+u_{(n-2)})F^{-1}=0$; and since $F^{-1}$ is invertible, $\frac{1}{d}R_D+\frac{1}{e}R_E =u_{(n-2)}$, or

$$ \frac{d_{1,j}}{d}+\frac{e_{1,j}}{e} = 1 \ \ (2 \leq j\leq n-1)\tag{5} $$
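Relation (5) is the only place where the zero-column-sum hypothesis enters, so it is worth checking on a random instance; a short NumPy sketch (the index helper and seed are my own):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
M = rng.standard_normal((n, n))
M -= M.mean(axis=0)                                   # zero column sums

idx = lambda *gone: [i for i in range(n) if i not in gone]
D = np.linalg.inv(M[np.ix_(idx(1), idx(1))])          # D = Mbar_(2)^{-1}
E = np.linalg.inv(M[np.ix_(idx(0), idx(0))])          # E = Mbar_(1)^{-1}

d, R_D = D[0, 0], D[0, 1:]
e, R_E = E[0, 0], E[0, 1:]
print(np.allclose(R_D / d + R_E / e, 1.0))            # relation (5): expect True
```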

We deduce from (2) that

$$ p_1=e+s_E, \qquad p_2=d+s_D \tag{6} $$ where $s_E$ (resp. $s_D$) denotes the sum of the entries of the column $C_E$ (resp. $C_D$); indeed, $p_1$ is the sum of the entries of the first column of $E$, and $p_2$ that of $D$. We also have

$$ (p_1\bar{M}_{(2)}^{-1}+p_2\bar{M}_{(1)}^{-1})C= p_1\left( \begin{array}{c} R_D \\ \hline \frac{1}{d}C_DR_D+F \end{array} \right)+ p_2\left( \begin{array}{c} R_E \\ \hline \frac{1}{e}C_ER_E+F \end{array} \right) \tag{7} $$

So the row vector $A=u_{(n-1)}(p_1\bar{M}_{(2)}^{-1}+p_2\bar{M}_{(1)}^{-1})C$ can be written $A=(a_2,\ldots,a_{n-1})$ with

$$ a_j=p_1\bigg(d_{1,j}+\frac{d_{1,j}}{d}s_D+\sum_{k=2}^{n-1}f_{k,j}\bigg) +p_2\bigg(e_{1,j}+\frac{e_{1,j}}{e}s_E+\sum_{k=2}^{n-1}f_{k,j}\bigg) \tag{8} $$
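For readers who want to double-check the bookkeeping behind (8), the componentwise formula can be compared with the direct definition of $A$; a small NumPy sketch under the conventions of (1) and (2) (variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
M = rng.standard_normal((n, n))
M -= M.mean(axis=0)                                   # zero column sums

idx = lambda *gone: [i for i in range(n) if i not in gone]
D = np.linalg.inv(M[np.ix_(idx(1), idx(1))])          # Mbar_(2)^{-1}
E = np.linalg.inv(M[np.ix_(idx(0), idx(0))])          # Mbar_(1)^{-1}
F = np.linalg.inv(M[np.ix_(idx(0, 1), idx(0, 1))])    # Mbar_(1,2)^{-1}

u = np.ones(n - 1)
C = np.vstack([np.zeros((1, n - 2)), np.eye(n - 2)])
p1, p2 = E[:, 0].sum(), D[:, 0].sum()                 # using (6)
d, e = D[0, 0], E[0, 0]
s_D, s_E = D[1:, 0].sum(), E[1:, 0].sum()

A_direct  = u @ (p1 * D + p2 * E) @ C                 # A as defined in the text
A_formula = (p1 * (D[0, 1:] + D[0, 1:] / d * s_D + F.sum(axis=0))
             + p2 * (E[0, 1:] + E[0, 1:] / e * s_E + F.sum(axis=0)))  # formula (8)
print(np.allclose(A_direct, A_formula))               # expect True
```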

Also, the row vector $B=(p_1+p_2)u_{(n-2)}\cdot \big(\bar{M}_{(1,2)}\big)^{-1}= (p_1+p_2)u_{(n-2)}F$ can be written $B=(b_2,\ldots,b_{n-1})$ with

$$ b_j=(p_1+p_2)\sum_{k=2}^{n-1} f_{k,j} \tag{9} $$

Next, if we put $G=A-B=(g_2,\ldots,g_{n-1})$ we have

\begin{align} g_j &= a_j-b_j \\[6pt] &= p_1\bigg(d_{1,j}+\frac{d_{1,j}}{d}s_D\bigg) +p_2\bigg(e_{1,j}+\frac{e_{1,j}}{e}s_E\bigg) \\[6pt] &= \frac{d_{1,j}}{d} \bigg(d+s_D\bigg)p_1 +\frac{e_{1,j}}{e}\bigg(e+s_E\bigg)p_2 \\[6pt] &= \bigg(\frac{d_{1,j}}{d}+\frac{e_{1,j}}{e}\bigg) p_1p_2 \ \textrm{by} \ (6)\\[6pt] &= p_1p_2 \ \textrm{by} \ (5) \end{align}

So $g_j$ is independent of $j$ as needed, which finishes the proof.
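To tie the computation back to the original statement, the random test described in the question can be repeated, now checking the common value and not only the constancy; again a NumPy sketch with parameters of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 7
M = rng.standard_normal((n, n))
M -= M.mean(axis=0)                                   # zero column sums

idx = lambda *gone: [i for i in range(n) if i not in gone]
inv = np.linalg.inv
M1  = M[np.ix_(idx(0), idx(0))]
M2  = M[np.ix_(idx(1), idx(1))]
M12 = M[np.ix_(idx(0, 1), idx(0, 1))]

u  = np.ones(n - 1)
e1 = np.zeros(n - 1); e1[0] = 1.0
C  = np.vstack([np.zeros((1, n - 2)), np.eye(n - 2)])

p1 = u @ inv(M1) @ e1
p2 = u @ inv(M2) @ e1
G  = u @ (p2 * inv(M1) + p1 * inv(M2)) @ C - (p1 + p2) * np.ones(n - 2) @ inv(M12)
print(np.allclose(G, p1 * p2))                        # every entry should equal p1 * p2
```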
