Determinant of block matrix with commuting blocks + ring homomorphism

determinant · matrices · ring-theory

Let $A, B$ be commutative rings with unity and let $p$ be a positive integer. Suppose $\varphi:A\to M_p(B)$ is a ring homomorphism. Is it true that for any positive integer $q$ and any $M\in M_q(A)$
$$\det(\varphi(M)) = \det(\varphi(\det(M)))\,,$$
where the leftmost $\det$ is the determinant on $M_{pq}(B)$, the middle one that on $M_{p}(B)$, and the rightmost one that on $M_{q}(A)$? (Here we also write $\varphi$ for the natural extension of $\varphi$ to a ring homomorphism $M_q(A)\to M_{pq}(B)$, obtained by applying $\varphi$ entrywise.)
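As a quick sanity check, here is a SymPy computation for one concrete instance (my own choice of homomorphism, not part of the question itself): $A=\mathbb{C}$, $B=\mathbb{R}$, $p=2$, and $\varphi:\mathbb{C}\to M_2(\mathbb{R})$ the usual realification $a+bi\mapsto\begin{pmatrix}a&-b\\b&a\end{pmatrix}$.

```python
# Toy sanity check of the identity for A = C, B = R, p = 2:
# phi : C -> M_2(R),  a + b*i  |->  [[a, -b], [b, a]].
from sympy import I, Matrix, im, re

def phi(z):
    # image of a scalar z in A = C under phi
    a, b = re(z), im(z)
    return Matrix([[a, -b], [b, a]])

def phi_blocks(M):
    # entrywise extension M_q(C) -> M_{2q}(R), assembled as a block matrix
    q = M.rows
    return Matrix.vstack(*[Matrix.hstack(*[phi(M[i, j]) for j in range(q)])
                           for i in range(q)])

M = Matrix([[1 + 2*I, 3 - I,     2],
            [      I,     4, 1 - I],
            [2 - 3*I, 1 + I,   5*I]])    # some M in M_3(C), so q = 3

lhs = phi_blocks(M).det()      # det on M_{pq}(B) = M_6(R)
rhs = phi(M.det()).det()       # det on M_p(B) applied to phi(det on M_q(A))
print(lhs == rhs)              # True
```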


This should be true (with only a few possible exceptions) when $A$ is a field. Indeed, every matrix over a field is equivalent to a projection; it follows that $\varphi(M)$ is invertible iff $M$ is invertible iff $\det(\varphi(M))\neq 0$. Every invertible matrix $M\in M_q(A)$ can be written as $UD$ where $U\in\mathrm{SL}_q(A)$ and $D=\mathrm{Diag}(\theta_1,\dots,\theta_q)$. Thus whenever $\mathrm{SL}_q(A)=\mathrm{GL}_q(A)'$ holds (and there are only a few exceptions to this), we get $\det(\varphi(M))=\det(\varphi(\det(M)))$.
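To spell out the two pieces: if $D=\mathrm{Diag}(\theta_1,\dots,\theta_q)$ then $\varphi(D)\in M_{pq}(B)$ is block diagonal with blocks $\varphi(\theta_1),\dots,\varphi(\theta_q)$, so
$$\det(\varphi(D))=\prod_{i=1}^q\det(\varphi(\theta_i))=\det\big(\varphi(\theta_1)\cdots\varphi(\theta_q)\big)=\det\big(\varphi(\theta_1\cdots\theta_q)\big)=\det\big(\varphi(\det D)\big),$$
using multiplicativity of $\det$ on $M_p(B)$ and the fact that $\varphi$ is a ring homomorphism; while for $U$ a product of commutators in $\mathrm{GL}_q(A)$, $\varphi(U)$ is a product of commutators in $\mathrm{GL}_{pq}(B)$, hence $\det(\varphi(U))=1=\det(\varphi(\det U))$.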


I initially thought this was true for $A$ a PID, since we can use elementary row operations to put $M$ into echelon form. But upon inspection I don't know how to deal with the images under $\varphi$ of the matrices used when improving the pivot in the Smith normal form algorithm.

To wit, we would essentially need to compare, for $\sigma,\tau,\gamma,\delta\in A$ with $\sigma\delta+\tau\gamma = 1_A$,
$$
1=
\det\Bigg(
\underbrace{
\varphi\left(
\det
\begin{pmatrix}
\sigma&\tau\\
-\gamma&\delta
\end{pmatrix}
\right)
}_{=I_p\in M_p(B)}
\Bigg)
\overset?=
\det
\underbrace{
\begin{pmatrix}
\varphi(\sigma)&\varphi(\tau)\\
-\varphi(\gamma)&\varphi(\delta)
\end{pmatrix}}_{\in M_{2p}(B)}.$$
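For reassurance, here is a quick SymPy computation of the right-hand determinant on an example of my own: I work directly with the images, taking four pairwise commuting $p\times p$ matrices (polynomials in a single matrix $X$) satisfying $\sigma\delta+\tau\gamma=I$.

```python
# The 2p x 2p determinant above, with commuting blocks chosen as polynomials
# in one matrix X (toy example), normalized so that sigma*delta + tau*gamma = I.
from sympy import Matrix, eye

X = Matrix([[1, 2, 0],
            [0, 3, 1],
            [4, 0, 1]])                # an arbitrary p x p matrix, here p = 3

sigma = X**2 + 2 * X
delta = 3 * X - eye(3)
gamma = eye(3)
tau   = eye(3) - sigma * delta         # forces sigma*delta + tau*gamma = I

block = Matrix.vstack(Matrix.hstack(sigma, tau),
                      Matrix.hstack(-gamma, delta))
print(block.det())                     # prints 1
```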


EDIT. Equivalently (and justifying the title), the question asks whether, when $M_{ij}$, $1\leq i,j\leq q$, are pairwise commuting matrices in $M_p(B)$, we have $$\det(M)=\det\Big(\sum_{\sigma\in\mathfrak{S}_q}(-1)^\sigma\prod_{i=1}^qM_{i,\sigma(i)}\Big)$$
where $M$ is the $pq\times pq$ matrix given in block form as $M=(M_{ij})_{i,j}\in M_{pq}(B)$.
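Here is a quick SymPy check of this equivalent formulation on an example of my own making ($p=2$, $q=3$, blocks chosen as polynomials in a single matrix so that they pairwise commute):

```python
# Check of the block Leibniz formula for pairwise commuting blocks.
from itertools import permutations
from sympy import Matrix, eye, zeros
from sympy.combinatorics import Permutation

p, q = 2, 3
X = Matrix([[1, 2],
            [3, 4]])
# pairwise commuting blocks: each M[i][j] is a polynomial in X
M = [[(i + 1) * X**(j + 1) + (j + 1) * eye(p) for j in range(q)]
     for i in range(q)]

# the pq x pq block matrix
big = Matrix.vstack(*[Matrix.hstack(*row) for row in M])

# the "determinant over the block ring": signed sum over S_q of block products
S = zeros(p, p)
for s in permutations(range(q)):
    term = eye(p)
    for i in range(q):
        term = term * M[i][s[i]]
    S += Permutation(list(s)).signature() * term

print(big.det() == S.det())        # True
```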


Second EDIT. A Google search revealed a paper proving something almost identical to my question (identical, in fact, if we use the second form from the previous EDIT): Determinants of Commuting-Block Matrices by Istvan Kovacs, Daniel S. Silver, and Susan G. Williams. Their proof works over any commutative ring but requires $A=B$.

Best Answer

I reproduce here the proof from the paper Determinants of Commuting-Block Matrices by Istvan Kovacs, Daniel S. Silver, and Susan G. Williams.

Theorem. Let $M_{ij}$, $1\leq i,j\leq q$, be pairwise commuting matrices in $M_p(A)$ for some commutative ring $A$, let $B\subset M_p(A)$ be the (commutative) subring generated by the $M_{ij}$, and let $M=(M_{ij})_{ij}$ be the associated block matrix. Then $$\det_{M_{pq}(A)}(M) = \det_{M_{p}(A)}\big(\det_{M_{q}(B)}(M)\big).$$

The proof is by induction on $q$ and uses row operations. Viewing $M$ as a matrix in $M_q(B)$ with rows $L_1,\dots,L_q$ we perform the following operations: $$\begin{cases} L_2&\leftarrow\quad M_{11}L_2\\ L_3&\leftarrow\quad M_{11}L_3\\ &\vdots\\ L_q&\leftarrow\quad M_{11}L_q \end{cases} \qquad\textrm{followed by}\qquad \begin{cases} L_2&\leftarrow\quad L_2-M_{21}L_1\\ L_3&\leftarrow\quad L_3-M_{31}L_1\\ &\vdots\\ L_q&\leftarrow\quad L_q-M_{q1}L_1 \end{cases}$$ In matrix terms this is $$ \begin{pmatrix} 1\\ -M_{21}&1\\ -M_{31}&&1\\ &&&\ddots \\ -M_{q1}&&&&1 \end{pmatrix} \times \begin{pmatrix} 1\\&M_{11}\\&&M_{11}\\ &&&\ddots \\&&&&M_{11} \end{pmatrix} \times M = M' = \begin{pmatrix} M_{11}&*&\cdots&*\\ 0\\ \vdots&&N\\ 0 \end{pmatrix} $$ where the zeros in the first block column of $M'$ come from the commutativity $M_{11}M_{i1}=M_{i1}M_{11}$. Next compute both $\displaystyle\det_{M_{pq}(A)}(\bullet)$ and $\displaystyle\det_{M_{p}(A)}(\det_{M_{q}(B)}(\bullet))$ on this product:

  • Computing $\displaystyle\det_{M_{pq}(A)}(\bullet)$: We find $$ \begin{array}{rcl} \displaystyle\det_{M_p(A)}\Big(M_{11}\Big)^{q-1}\cdot\det_{M_{pq}(A)}\Big(M\Big) & = & \displaystyle\det_{M_{pq}(A)}\Big(M'\Big)\\ & = & \displaystyle\det_{M_p(A)}\Big(M_{11}\Big)\cdot \color{orange}{\det_{M_{p(q-1)}(A)}\Big(N\Big)} \end{array}$$
  • Computing $\displaystyle\det_{M_{p}(A)}(\det_{M_{q}(B)}(\bullet))$: first compute in $M_q(B)$: $$ \begin{array}{rcl} \displaystyle\Big(M_{11}\Big)^{q-1}\cdot\det_{M_{q}(B)}\Big(M\Big) & = & \displaystyle\det_{M_{q}(B)}\Big(M'\Big)\\ & = & \displaystyle M_{11}\cdot\det_{M_{q-1}(B)}\Big(N\Big) \end{array}$$ Then apply $\displaystyle\det_{M_p(A)}$: $$ \begin{array}{rcl} \displaystyle\Big(\det_{M_p(A)} M_{11}\Big)^{q-1} \cdot \det_{M_p(A)}\Big(\det_{M_{q}(B)}\Big(M\Big)\Big) & = & \displaystyle\det_{M_p(A)}\Big(\det_{M_{q}(B)}\Big(M'\Big)\Big)\\ & = & \displaystyle\det_{M_p(A)}\Big(M_{11}\Big)\cdot \color{orange}{\det_{M_p(A)}\Big(\det_{M_{q-1}(B)}\Big(N\Big)\Big)} \end{array}$$
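Before applying the induction hypothesis, here is a quick SymPy check of the elimination step and of the two identities just computed, on a home-made example (this specific choice of commuting blocks, as polynomials in a single $2\times2$ matrix, is mine and not from the paper):

```python
# Elimination step on a toy example: p = 2, q = 3, blocks are polynomials in
# one matrix X, hence pairwise commuting.
from sympy import Matrix, eye, zeros

p, q = 2, 3
X = Matrix([[1, 2],
            [3, 4]])
M = [[(i + 1) * X**(j + 1) + (j + 1) * eye(p) for j in range(q)]
     for i in range(q)]
big = Matrix.vstack(*[Matrix.hstack(*row) for row in M])   # M as a pq x pq matrix

# the two elimination matrices of the proof, assembled as pq x pq matrices
E1 = eye(p * q)                      # Diag(1, M11, ..., M11)
E2 = eye(p * q)                      # subtracts M_i1 * L_1 from L_i
for i in range(1, q):
    E1[i * p:(i + 1) * p, i * p:(i + 1) * p] = M[0][0]
    E2[i * p:(i + 1) * p, 0:p] = -M[i][0]

Mp = E2 * E1 * big                                         # this is M'
print(Mp[p:, 0:p] == zeros(p * (q - 1), p))                # True: first block column killed

N = Mp[p:, p:]                                             # the lower-right block of M'
d11 = M[0][0].det()
print(Mp.det() == d11**(q - 1) * big.det())                # det(M') = det(M11)^{q-1} det(M)
print(Mp.det() == d11 * N.det())                           # det(M') = det(M11) det(N)
```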

We now apply our induction hypothesis: the two orange terms are identical. Therefore $$\Big(\det_{M_p(A)} M_{11}\Big)^{q-1} \cdot\bigg[ \det_{M_{pq}(A)}\Big(M\Big) - \det_{M_p(A)}\Big(\det_{M_{q}(B)}\Big(M\Big)\Big) \bigg] = 0. $$ To deal with $\displaystyle\Big(\det_{M_p(A)} M_{11}\Big)$, which may very well be a zero divisor in $A$, the authors shift to working with matrices with entries in $A[T]$ and replace $M_{11}$ with $\widetilde{M_{11}}=T I_p + M_{11}$: then $\displaystyle\Big(\det_{M_p(A[T])} \widetilde{M_{11}}\Big)$ is a monic polynomial of degree $p$ and thus not a zero divisor in $A[T]$. Thus, defining $\widetilde{M}$ to have the same blocks as $M$ except $\widetilde{M}_{11}=\widetilde{M_{11}}$ (the blocks of $\widetilde{M}$ still commute pairwise, since $TI_p$ is central) yields $$\Big(\det_{M_p(A[T])} \widetilde{M_{11}}\Big)^{q-1} \cdot\bigg[ \det_{M_{pq}(A[T])}\Big(\widetilde{M}\Big) - \det_{M_p(A[T])}\Big(\det_{M_{q}(B[T])}\Big(\widetilde{M}\Big)\Big) \bigg] = 0, $$ and cancelling this non-zero-divisor gives $$\det_{M_{pq}(A[T])}\Big(\widetilde{M}\Big) = \det_{M_p(A[T])}\Big(\det_{M_{q}(B[T])}\Big(\widetilde{M}\Big)\Big).$$ Evaluating at $T=0$: $$\det_{M_{pq}(A)}\Big(M\Big) = \det_{M_p(A)}\Big(\det_{M_{q}(B)}\Big(M\Big)\Big).$$
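To illustrate why the passage to $A[T]$ is needed, here is a tiny SymPy example of my own: the top-left block is nilpotent, so $\det M_{11}=0$ and the factor $\big(\det M_{11}\big)^{q-1}$ cannot simply be cancelled, while $\det(TI_p+M_{11})$ is monic of degree $p$; the theorem of course still holds for this $M$.

```python
# Why the shift to A[T] helps (toy example): det(M11) = 0 over A = Z, but
# det(T*I + M11) is a monic polynomial of degree p, hence not a zero divisor in Z[T].
from sympy import Matrix, Poly, eye, symbols

T = symbols('T')
p = 2
X = Matrix([[0, 1],
            [0, 0]])                       # nilpotent: X**2 = 0
M11, M12 = X, eye(p) + X                   # all four blocks are polynomials in X,
M21, M22 = 2 * X, 3 * eye(p) - X           # hence pairwise commuting

big = Matrix.vstack(Matrix.hstack(M11, M12),
                    Matrix.hstack(M21, M22))

print(M11.det())                                   # 0 : nothing to cancel over Z
print(Poly((T * eye(p) + M11).det(), T))           # Poly(T**2, T, domain='ZZ') : monic of degree p
print(big.det() == (M11 * M22 - M12 * M21).det())  # True : the q = 2 case of the theorem
```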
