I don't know who the original author of this proof is, but I have seen it more than once.
Idea:
The general idea is that, for any sequence of the form $$(0,0,\ldots,0,1,1,\ldots,1,0,0,\ldots,0),$$ the sequence of differences is $$(0,0,\ldots,0,1,0,0,\ldots,0,-1,0,0,\ldots,0),$$ that is, it contains at most two non-zero entries: one $1$ and one $-1$. Hence, we can transform any square submatrix into a simpler one and compute its determinant.
Proof:
Let's assume that the matrix has the consecutive ones property for rows (for the column-oriented version, work with the transposed matrix). Let $A$ be any square submatrix; of course, it inherits the consecutive ones property.
Define matrix $B$ by
$$
b_{r,c} =
\begin{cases}
a_{r,c} - a_{r,c+1} & \text{ for }c+1 \leq \mathrm{columns}(A), \\
a_{r,c} & \text{ otherwise}.
\end{cases}
$$
By C1P, the matrix $B$ has at most two non-zero entries in each row. There are three cases:
- If some row has no non-zero entries, the determinant is zero.
- If some row has exactly one non-zero entry (necessarily $\pm 1$), perform Laplace expansion along that row and continue with the only minor whose coefficient is non-zero.
- After all such expansions, every remaining row (if there are any) has exactly two non-zero entries: one $1$ and one $-1$. Call this matrix $B'$ and observe that $$B' \left[\begin{array}{c}1\\1\\\vdots\\1\end{array}\right] = \left[\begin{array}{c}0\\0\\\vdots\\0\end{array}\right],$$ in other words, the all-ones vector lies in the kernel of $B'$, so $B'$ is singular and its determinant is zero.
Hence, the determinant of $B$ is in $\{-1,0,1\}$. Moreover, $B$ arises from $A$ by elementary column operations (subtracting column $c+1$ from column $c$, proceeding from left to right), so $\det B = \det A$, and the determinant of $A$ is also in $\{-1,0,1\}$. However, $A$ was an arbitrary square submatrix; therefore, we have proved the total unimodularity of the original matrix.
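The argument can also be checked by brute force. Below is a minimal Python sketch (the matrix `A` and all helper names are mine, chosen for illustration): it builds the difference matrix $B$, confirms each row of $B$ has at most two non-zero entries, and verifies total unimodularity by enumerating every square submatrix.

```python
from itertools import combinations

def difference_matrix(A):
    """b[r][c] = a[r][c] - a[r][c+1]; the last column is copied unchanged."""
    n = len(A[0])
    return [[row[c] - row[c + 1] if c + 1 < n else row[c] for c in range(n)]
            for row in A]

def det(M):
    """Integer determinant via Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)) if M[0][j])

def is_totally_unimodular(A):
    """Brute-force check: every square submatrix has determinant in {-1, 0, 1}."""
    m, n = len(A), len(A[0])
    return all(
        det([[A[r][c] for c in cols] for r in rows]) in (-1, 0, 1)
        for k in range(1, min(m, n) + 1)
        for rows in combinations(range(m), k)
        for cols in combinations(range(n), k)
    )

# A hypothetical matrix with the consecutive ones property in each row.
A = [
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 0],
]
B = difference_matrix(A)
assert all(sum(1 for x in row if x) <= 2 for row in B)  # at most two non-zeros per row
print(is_totally_unimodular(A))  # True, as the proof predicts
```

Enumerating all submatrices is exponential, so this only scales to tiny examples, but it is a handy sanity check of the theorem.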
I hope this helps $\ddot\smile$
Relative to the bases that you chose to use, the matrix that you computed is in fact correct. Generally speaking, however, when specific bases aren’t mentioned the standard basis is implied. Indeed, if you apply a change of basis to the matrix that you computed, you get the correct answer: $$\begin{bmatrix}2&0\\0&-1\end{bmatrix} \begin{bmatrix}0&-1\\1&0\end{bmatrix}^{-1} = \begin{bmatrix}0&2\\1&0\end{bmatrix}.$$ (No change-of-basis matrix is needed on the left side because you already used the standard basis for the codomain.)
You could have constructed this matrix directly by taking advantage of the fact that its columns are the images of the basis vectors: we know that $(0,1)^T\mapsto(2,0)^T$ and, by linearity, $(1,0)^T\mapsto(0,1)^T$, so those are the second and first columns, respectively, of the transformation matrix (with respect to the standard basis).
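The change-of-basis product above can be verified in a few lines of Python; this is a sketch, and the small $2\times2$ helpers are mine, not part of the answer:

```python
def mat_mul(X, Y):
    """Multiply two 2x2 integer matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 integer matrix; exact here because det = 1."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d // det, -b // det], [-c // det, a // det]]

M_chosen = [[2, 0], [0, -1]]       # the matrix computed in the chosen bases
P = [[0, -1], [1, 0]]              # change-of-basis matrix for the domain
print(mat_mul(M_chosen, inv2(P)))  # [[0, 2], [1, 0]]
```

The result agrees with the matrix built directly from the images of the standard basis vectors.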
The difference-of-sums condition is applied separately for each column $j$. In other words, it should be written as $$\sum_{i\in Q_1}a_{ij}-\sum_{i\in Q_2}a_{ij} \in \lbrace 0,\pm1 \rbrace\quad \forall j.$$ If you set $Q=\lbrace 3,4\rbrace$, then $Q_1 = \lbrace 3 \rbrace$ and $Q_2 = \lbrace 4 \rbrace$ (or vice versa). Using the former, your differences are $0,0,1,0,1.$
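The column-wise check is easy to script. In this sketch, the matrix `A` is hypothetical (the matrix from the question is not reproduced here), but its rows $3$ and $4$ happen to give the same differences $0,0,1,0,1$:

```python
def column_differences(A, Q1, Q2):
    """For each column j, sum the entries over rows in Q1 minus those in Q2.
    Row indices are 0-based here."""
    n_cols = len(A[0])
    return [sum(A[i][j] for i in Q1) - sum(A[i][j] for i in Q2)
            for j in range(n_cols)]

# Hypothetical interval matrix (NOT the one from the question).
A = [
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 1],  # row 3 (1-indexed)
    [0, 0, 0, 1, 0],  # row 4 (1-indexed)
    [0, 0, 0, 0, 1],
]
diffs = column_differences(A, Q1={2}, Q2={3})  # rows 3 and 4, 0-indexed
print(diffs)                                   # [0, 0, 1, 0, 1]
assert all(d in (-1, 0, 1) for d in diffs)     # the condition holds column by column
```

Note that the condition quantifies over every column $j$ at once, so a single column with difference outside $\{0,\pm1\}$ would already violate it.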