[Math] Product of permutation matrices

Tags: matrices, permutations

I want to prove that the product of two permutation matrices is itself a permutation matrix, but I don't know how. Please help!
Related Solutions
I'm going to use a slightly different approach, just because I have an aversion to $\sum_k a_{ik}b_{kj}$ type formulas!
Note: I realized about halfway through that I write permutation matrices "backwards": with nice rows and messy columns. I rewrote everything to match what I suspect is your convention, but if you see anything that seems to clash with the rest of the notation, that is probably to blame. For that reason, and others, I'd advise you not to follow the details too closely; instead, pick up the main idea that it's much nicer to work with rows and columns of matrices: get the "individual entry" work out of the way at the beginning, to understand the rows and columns of the given matrices, then work with rows and columns from that point on.
For my approach, remember that to multiply matrices $A$ and $B$, we take the dot product of row $R_i$ from matrix $A$, and column $C_j$ of matrix $B$, to get the entry $(AB)_{ij}$.
Now, the columns of permutation matrix $M_\pi$ are pretty straightforward; column $j$ is just $e_{\pi(j)}$, the $\pi(j)$th basis vector with a $1$ in position $\pi(j)$ and $0$ everywhere else.
The rows are a bit less straightforward. If we look at $M_\pi$, where is the $1$ in row $i$? Turning the previous observation around, we saw a $1$ in entry $(M_\pi)_{\pi(j)j}$; that is, in row $\pi(j)$ and column $j$. This suggests that, to find the $1$ in row $i$, we need to look in column $\pi^{-1}(i)$.
- Note: Write out a small example to get a better feel for this; something like the matrix corresponding to permutation $(1\ 3\ 2)$ helped me. Also, it's suggestive to apply $\pi^{-1}$ to the indices of the previous matrix entries $(M_\pi)_{\pi(j)j}$ to get information about row $j$, rather than row $\pi(j)$.
Thus, the $i$th row of matrix $M_\pi$ is the transpose $e^t_{\pi^{-1}i}$ of the $\pi^{-1}(i)$th basis vector, where I've dropped parentheses in $\pi^{-1}(i)$ for readability, and will continue to do so for indices.
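For concreteness, here is the small example the note above suggests (my own worked example): take $\pi = (1\ 3\ 2)$ in cycle notation, so $\pi(1) = 3$, $\pi(3) = 2$, and $\pi(2) = 1$. Placing a $1$ in row $\pi(j)$ of column $j$ gives
$$M_\pi = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix},$$
and indeed row $1$ has its $1$ in column $2 = \pi^{-1}(1)$, row $2$ in column $3 = \pi^{-1}(2)$, and row $3$ in column $1 = \pi^{-1}(3)$.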
Now, write $M_\sigma$ as a column of rows $e^t_{\sigma^{-1}i}$ and $M_\tau$ as a row of columns $e_{\tau j}$. We compute
$$M_\sigma M_\tau = \begin{pmatrix}e^t_{\sigma^{-1} 1} \\ e^t_{\sigma^{-1} 2} \\ \vdots \\ e^t_{\sigma^{-1} n}\end{pmatrix} \begin{pmatrix}e_{\tau 1} & e_{\tau 2} & \cdots & e_{\tau n}\end{pmatrix},$$
where the entry at position $(i, j)$ is just the dot product $e_{\sigma^{-1} i} \cdot e_{\tau j}.$ Breaking into cases, we have
$$e_{\sigma^{-1} i} \cdot e_{\tau j} = \begin{cases} 1, & \sigma^{-1}(i) = \tau (j) \\ 0, & \text{otherwise.} \end{cases}$$
Investigating $\sigma^{-1}(i) = \tau (j)$ a bit more, apply $\sigma$ to both sides to get $i = \sigma(\tau(j)) = (\sigma \circ \tau)(j)$. So the $(i, j)$ entry of the product is $1$ exactly when $i = (\sigma \circ \tau)(j)$; that is, column $j$ has its single $1$ in row $(\sigma \circ \tau)(j)$. This is indeed the permutation matrix $M_{\sigma \circ \tau}$!
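If you want to sanity-check the identity $M_\sigma M_\tau = M_{\sigma \circ \tau}$ numerically, here is a minimal sketch using NumPy; the function name `perm_matrix` and the two test permutations are my own, chosen for illustration, and indices are $0$-based:

```python
import numpy as np

def perm_matrix(pi):
    """Build M_pi, whose column j is e_{pi(j)}: (M_pi)[pi[j], j] = 1."""
    n = len(pi)
    M = np.zeros((n, n), dtype=int)
    for j in range(n):
        M[pi[j], j] = 1
    return M

# Two arbitrary permutations of {0, 1, 2, 3}, written as lists: pi[j] = pi(j).
sigma = [2, 0, 3, 1]
tau = [3, 1, 0, 2]

# (sigma ∘ tau)(j) = sigma(tau(j))
compose = [sigma[tau[j]] for j in range(len(tau))]

assert np.array_equal(perm_matrix(sigma) @ perm_matrix(tau),
                      perm_matrix(compose))
```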
Here's a proof that a regular bipartite graph has a perfect matching, from the question "Show that a finite regular bipartite graph has a perfect matching":
Your matrix corresponds to an $m$-regular bipartite graph in which the rows and the columns form the two vertex classes and the ones determine the edges. A perfect matching is a matching in which every vertex is incident to exactly one edge of the matching. This corresponds to a permutation matrix, and subtracting out this matrix leaves $m-1$ ones in each row and column.
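As a small illustration (my own example, not from the original answer), here is a $2$-regular $0$-$1$ matrix split into two permutation matrices; subtracting the first leaves exactly one $1$ in each row and column:
$$\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}.$$
Repeating the argument $m$ times decomposes any $m$-regular $0$-$1$ matrix into a sum of $m$ permutation matrices.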
Best Answer
Method 1 - Using Matrices:
Suppose you have two permutation matrices $X$ and $Y$. So by definition both $X$ and $Y$ are $n\times n$ matrices having precisely one non-zero entry in each row and column, and these entries all take the value 1.
If the non-zero entry in the first row of $X$ is in the $i$-th column, then what will the first row of the matrix $X\cdot Y$ be? By definition of matrix multiplication it will be
$$\left(\sum_{j=1}^{n}X_{1,j}Y_{j,1}, \sum_{j=1}^{n}X_{1,j}Y_{j,2},\ldots,\sum_{j=1}^{n}X_{1,j}Y_{j,n} \right)$$
However, we know that $X_{1,j}=\delta_{i,j}$ (where $\delta_{i,j}$ is the Kronecker delta taking the value $1$ when $i=j$ and $0$ otherwise). Hence the first row of $X\cdot Y$ will be
$$\left(X_{1,i}Y_{i,1}, X_{1,i}Y_{i,2},\ldots,X_{1,i}Y_{i,n} \right)$$
which since $X_{1,i}=1$ will be
$$\left(Y_{i,1}, Y_{i,2},\ldots,Y_{i,n} \right).$$
So the first row of $X\cdot Y$ is the $i$-th row of $Y$. Continuing in this way, you see that the $m$-th row of $X\cdot Y$ will be the $l$-th row of $Y$, where the $1$ in the $m$-th row of $X$ appears in the $l$-th position. (I hope that makes sense!)
Thus when you multiply two permutation matrices together, you are just rearranging the rows of one of them. Since rearranging the rows of a permutation matrix still leaves exactly one $1$ in each row and column, the product is again a permutation matrix.
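Here is a minimal NumPy sketch of this row-rearrangement observation; the particular matrices $X$ and $Y$ are hypothetical examples I've chosen, not from the answer itself:

```python
import numpy as np

# Two arbitrary 3x3 permutation matrices for illustration.
X = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
Y = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])

XY = X @ Y
for m in range(3):
    l = int(np.argmax(X[m]))  # column of the 1 in row m of X (0-indexed)
    # Row m of X·Y is row l of Y.
    assert np.array_equal(XY[m], Y[l])
```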
Method 2 - Using Permutations:
A slightly easier approach is to use permutations. Suppose $X$ and $Y$ are $n\times n$ permutation matrices as above. Then define $\sigma_{X}:\{1,\ldots, n\}\rightarrow\{1,\ldots,n\}$ as follows:
$\sigma_{X}(i)=j$ where the $1$ in the $i$-th column of $X$ occurs in the $j$-th position. Similarly you may define $\sigma_{Y}$.
Then you can check, in a similar way as above, that $\sigma:X\mapsto\sigma_{X}$ respects multiplication: $\sigma_{X\cdot Y} = \sigma_{X}\circ\sigma_{Y}$. Since the composition of two permutations is again a permutation, $X\cdot Y$ is the permutation matrix of $\sigma_{X}\circ\sigma_{Y}$.
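A short sketch of this check, reusing the hypothetical $X$ and $Y$ from the Method 1 example above ($0$-indexed again):

```python
import numpy as np

def sigma(X):
    """sigma_X(i) = row index of the 1 in column i of X."""
    return [int(np.argmax(X[:, i])) for i in range(X.shape[1])]

X = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])
Y = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])

s_X, s_Y = sigma(X), sigma(Y)
# sigma_{X·Y} = sigma_X ∘ sigma_Y
assert sigma(X @ Y) == [s_X[s_Y[i]] for i in range(3)]
```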
Method 3 - Using Vectors:
You can also show this using vectors (see joriki's comment above for more details).
Note: All of these methods are essentially the same, but are just different ways of viewing the problem.