Proof of $\text{sgn}(\sigma \cdot \tau)=\text{sgn}(\sigma) \cdot \text{sgn}(\tau)$ in Hoffman’s Linear Algebra

Tags: determinant, linear-algebra, matrices, permutations, proof-explanation

From the point of view of products of permutations, the basic property of the sign of a permutation is that $\text{sgn}(\sigma \cdot \tau)=\text{sgn}(\sigma) \cdot \text{sgn}(\tau)$. In other words, $\sigma\cdot \tau$ is an even permutation if $\sigma$ and $\tau$ are either both even or both odd, while $\sigma \cdot \tau$ is odd if one of the two permutations is odd and the other is even. One can see this from the definition of the sign in terms of successive interchanges of pairs $(i, j)$.

In the last sentence, Hoffman claims we can prove $\text{sgn}(\sigma \cdot \tau)=\text{sgn}(\sigma) \cdot \text{sgn}(\tau)$ from the definition of sgn in terms of successive interchanges of pairs, i.e. $\text{sgn}:S_n \to K$ with $\text{sgn}(\sigma)=1_K$ if $(\sigma (1),…,\sigma (n))$ is obtained from $(1,2,…,n)$ by an even number of interchanges of pairs, and $\text{sgn}(\sigma)=-1_K$ if $(\sigma (1),…,\sigma (n))$ is obtained from $(1,2,…,n)$ by an odd number of interchanges of pairs. Que: I don’t see whether $(\sigma \circ \tau(1),…,\sigma \circ \tau(n))$ is obtained from $(1,…,n)$ by an even or an odd number of interchanges of pairs. IMO, the composition makes things a bit complicated. There must be some intermediate argument to explain why $\text{sgn}(\sigma \cdot \tau)=\text{sgn}(\sigma) \cdot \text{sgn}(\tau)$ is true using the definition of sgn in terms of interchanges of pairs.

Hoffman then showed $\text{sgn}(\sigma \cdot \tau)=\text{sgn}(\sigma) \cdot \text{sgn}(\tau)$ from elementary properties of the determinant.

Let $\sigma ,\tau \in S_n$. Let $A=(e_{\tau (1)},…,e_{\tau (n)})$ and $B=(e_{\sigma (1)},…,e_{\sigma (n)})$. Then $\text{det}(A)=\sum_{\mu\in S_n}\text{sgn}(\mu)\prod_{i=1}^nA(i,\mu (i))$. If $\mu \neq \tau$, then $\exists j\in J_n$ such that $\mu (j)\neq \tau (j)$. So $A(j,\mu (j))=0$. Thus $\text{sgn}(\mu)\prod_{i=1}^nA(i,\mu (i))=0$, $\forall \mu \in S_n \setminus\{\tau\}$. Hence $\text{det}(A)=\text{sgn}(\tau)\prod_{i=1}^nA(i,\tau (i))=\text{sgn}(\tau)$. Similarly, $\text{det}(B)=\text{sgn}(\sigma)$. It’s easy to see $AB=(e_{\sigma \cdot \tau(1)},…,e_{\sigma \cdot \tau (n)})$. So $\text{det}(AB)=\text{sgn}(\sigma \cdot \tau)$. Since $\text{det}(AB)= \text{det}(A)\cdot \text{det}(B)$, we have $\text{sgn}(\sigma \cdot \tau)=\text{sgn}(\sigma) \cdot \text{sgn}(\tau)$.
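As a sanity check (not part of Hoffman's argument), the two identities above, $\det(A)=\text{sgn}(\tau)$ and $\text{sgn}(\sigma\cdot\tau)=\text{sgn}(\sigma)\cdot\text{sgn}(\tau)$, can be verified numerically over all of $S_3$. The sketch below uses 0-based indexing and computes sgn by counting inversions, which agrees with the interchange definition; the function names are my own:

```python
from itertools import permutations

def sgn(p):
    """Sign via inversion count (equivalent to counting interchanges of pairs)."""
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return -1 if inv % 2 else 1

def rows(p):
    """A = (e_{p(1)}, ..., e_{p(n)}): the i-th row of A is e_{p(i)} (0-based)."""
    n = len(p)
    return [[1 if j == p[i] else 0 for j in range(n)] for i in range(n)]

def det(M):
    """Leibniz formula: det(M) = sum over mu of sgn(mu) * prod_i M[i][mu(i)]."""
    n = len(M)
    total = 0
    for mu in permutations(range(n)):
        term = sgn(mu)
        for i in range(n):
            term *= M[i][mu[i]]
        total += term
    return total

# Check det(A) = sgn(tau) and the homomorphism property over all of S_3.
for s in permutations(range(3)):
    for t in permutations(range(3)):
        comp = tuple(s[t[i]] for i in range(3))  # (sigma.tau)(i) = sigma(tau(i))
        assert det(rows(t)) == sgn(t)
        assert sgn(comp) == sgn(s) * sgn(t)
```

The loop also confirms, as in the proof, that only the summand with $\mu=\tau$ survives in the Leibniz expansion of $\det(A)$.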

Que: How to rigorously show $AB=(e_{\sigma \cdot \tau(1)},…,e_{\sigma \cdot \tau (n)})$? I know $AB=(e_{\tau(1)},…,e_{\tau (n)})\cdot B=(e_{\tau (1)}\cdot B,…,e_{\tau (n)}\cdot B)$, by this post.

Edit: $P_\sigma =B^t$ and $P_\tau =A^t$. It’s easy to check that if $R,S\in M_{n\times n}(K)$, then $(R\cdot S)^t=S^t\cdot R^t$. So $P_{\sigma \tau}= P_{\sigma}\cdot P_{\tau}=B^t\cdot A^t=(A\cdot B)^t$. Thus $A\cdot B=(P_{\sigma \tau})^t=(e_{\sigma \cdot \tau(1)},…,e_{\sigma \cdot \tau (n)})$.

Best Answer

Actually, if we define $\mathrm{sgn}$ to be $\pm1$ according to the parity of the number of transpositions used to create the permutation, the hard part is showing the sign is even well-defined to begin with (since the number of transpositions is not unique), after which showing it is a homomorphism is easy.

This is because if $\sigma=\sigma_1\cdots\sigma_k$ is a product of $k$ transpositions and $\tau=\tau_1\cdots\tau_\ell$ a product of $\ell$ transpositions, then $\sigma\tau=\sigma_1\cdots\sigma_k\tau_1\cdots\tau_\ell$ is a product of $k+\ell$ transpositions, so the homomorphism property $\mathrm{sgn}(\sigma\tau)=\mathrm{sgn}(\sigma)\mathrm{sgn}(\tau)$ simply reads $(-1)^{k+\ell}=(-1)^k(-1)^\ell$.
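Taking the well-definedness of sgn for granted (here it is computed independently via the inversion count, which each transposition flips), the identity $(-1)^{k+\ell}=(-1)^k(-1)^\ell$ can be checked on products of random transpositions. This is a minimal sketch; the helper names are my own:

```python
import random

def sgn(p):
    """Sign via inversion count; each transposition flips it, so sgn = (-1)^k."""
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return -1 if inv % 2 else 1

def product_of_transpositions(n, k, rng):
    """Apply k random transpositions to the identity permutation on {0,...,n-1}."""
    p = list(range(n))
    for _ in range(k):
        i, j = rng.sample(range(n), 2)
        p[i], p[j] = p[j], p[i]
    return tuple(p)

rng = random.Random(0)
n, k, l = 6, 5, 8
sigma = product_of_transpositions(n, k, rng)   # product of k transpositions
tau = product_of_transpositions(n, l, rng)     # product of l transpositions
comp = tuple(sigma[tau[i]] for i in range(n))  # sigma.tau: product of k+l transpositions
assert sgn(sigma) == (-1) ** k
assert sgn(tau) == (-1) ** l
assert sgn(comp) == (-1) ** (k + l) == sgn(sigma) * sgn(tau)
```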

Let $P_\sigma=(e_{\sigma(1)}~\cdots~e_{\sigma(n)})$ be the permutation matrix associated to a permutation $\sigma$. Then $P_\sigma$ is characterized by the property $P_\sigma e_j=e_{\sigma(j)}$, by definition of matrix multiplication. This means we can simply read off $P_\sigma P_\tau e_j=P_\sigma e_{\tau(j)}=e_{\sigma(\tau(j))}$ to conclude $P_\sigma P_\tau=P_{\sigma\tau}$.

Or, we can define $P_\sigma=[\delta_{i\sigma(j)}]$ using the Kronecker delta symbol, then calculate $$P_\sigma P_\tau=[\delta_{i\sigma(j)}][\delta_{j\tau(k)}]=\Big[\sum_j \delta_{i\sigma(j)}\delta_{j\tau(k)}\Big]=[\delta_{i\sigma(\tau(k))}]=P_{\sigma\tau}.$$

Note that after the matrix multiplication we can drop all zero summands, i.e. every term except the one with $j=\tau(k)$.
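The Kronecker-delta calculation above can be verified directly by brute force over $S_3$; a small sketch with 0-based indexing (the function names are my own):

```python
from itertools import permutations

def P(p):
    """P_sigma = [delta_{i, sigma(j)}]: entry (i, j) is 1 iff i = sigma(j) (0-based)."""
    n = len(p)
    return [[1 if i == p[j] else 0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][j] * Y[j][k] for j in range(n)) for k in range(n)]
            for i in range(n)]

# P_sigma P_tau = P_{sigma.tau}: in sum_j delta_{i,sigma(j)} delta_{j,tau(k)},
# only the summand with j = tau(k) survives, leaving delta_{i,sigma(tau(k))}.
for s in permutations(range(3)):
    for t in permutations(range(3)):
        comp = tuple(s[t[k]] for k in range(3))
        assert matmul(P(s), P(t)) == P(comp)
```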
