Let's consider the matrix:
$$A = \begin{bmatrix}a & b & c\\d & e & f\\g & h & i\end{bmatrix}$$
The cofactors along the first row are:
$$C_{1,1} = \begin{vmatrix}e & f\\h & i\end{vmatrix}$$
$$C_{1,2} = -\begin{vmatrix}d & f\\g & i\end{vmatrix}$$
$$C_{1,3} = \begin{vmatrix}d & e\\g & h\end{vmatrix}$$
And we have that:
$$\det(A) = aC_{1,1} + bC_{1,2} + cC_{1,3}$$
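If you want to convince yourself numerically, here is a quick check of that expansion. The matrix entries below are arbitrary values of my own choosing (not from the question), and NumPy does the determinant arithmetic:

```python
# Sanity check: expanding along the first row reproduces det(A).
import numpy as np

A = np.array([[2.0, -1.0, 3.0],
              [4.0,  0.0, 5.0],
              [1.0,  7.0, 6.0]])
a, b, c = A[0]

# First-row cofactors: signed 2x2 minors obtained by deleting
# row 0 and the corresponding column.
C11 =  np.linalg.det(A[np.ix_([1, 2], [1, 2])])
C12 = -np.linalg.det(A[np.ix_([1, 2], [0, 2])])
C13 =  np.linalg.det(A[np.ix_([1, 2], [0, 1])])

print(a * C11 + b * C12 + c * C13)   # agrees with the next line up to rounding
print(np.linalg.det(A))
```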
But now consider the sum:
$$dC_{1,1} + eC_{1,2} + fC_{1,3}$$
That would be the cofactor expansion of the matrix:
$$B = \begin{bmatrix}d & e & f\\d & e & f\\g & h & i\end{bmatrix}$$
since that matrix has the same cofactors along its first row as $A$ does.
But since $B$ has two identical rows, we know that its determinant is zero, so:
$$\det(B) = dC_{1,1} + eC_{1,2} + fC_{1,3} = 0$$
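Here is the same kind of numerical check for this claim, again with arbitrary entries of my own choosing: the second-row entries times the first-row cofactors sum to (numerically) zero, which matches $\det(B)$.

```python
# Sanity check: entries of row 2 times the cofactors of row 1 give det(B) = 0.
import numpy as np

A = np.array([[2.0, -1.0, 3.0],
              [4.0,  0.0, 5.0],
              [1.0,  7.0, 6.0]])

# First-row cofactors of A (note that they do not use the first row itself).
C11 =  np.linalg.det(A[np.ix_([1, 2], [1, 2])])
C12 = -np.linalg.det(A[np.ix_([1, 2], [0, 2])])
C13 =  np.linalg.det(A[np.ix_([1, 2], [0, 1])])

d, e, f = A[1]
B = A.copy()
B[0] = A[1]                          # row 1 replaced by row 2: two identical rows

print(d * C11 + e * C12 + f * C13)   # ~0 up to floating-point rounding
print(np.linalg.det(B))              # ~0 as well
```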
I think the key idea is that, since the cofactors computed along a row (or column) do not use the values of that row (or column), replacing that row (or column) does not change the cofactors computed along it.
That is why the linear combination
$$\sum_{j=1}^{n} a_{k,j} C_{i,j}$$
is the same as the determinant of the matrix obtained from $A$ by replacing row $i$ with row $k$ (which is zero whenever $k \neq i$, since that matrix then has two identical rows).
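Here is a small sketch of that general statement, for whoever wants to test it. The helper names `cofactor` and `row_times_cofactors` are mine, just for illustration:

```python
# Sketch: sum_j A[k, j] * C[i, j] equals det(A with row i replaced by row k).
import numpy as np

def cofactor(M, i, j):
    """Signed cofactor: (-1)^(i+j) times the minor with row i and column j deleted."""
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

def row_times_cofactors(M, k, i):
    """sum_j M[k, j] * C[i, j]: entries of row k against the cofactors of row i."""
    return sum(M[k, j] * cofactor(M, i, j) for j in range(M.shape[0]))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

for i in range(4):
    for k in range(4):
        replaced = A.copy()
        replaced[i] = A[k]           # A with row i replaced by row k
        assert np.isclose(row_times_cofactors(A, k, i), np.linalg.det(replaced))
        # for k != i, 'replaced' has two equal rows, so both sides are ~0
print("identity verified for all i, k")
```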
EDIT: adding additional details.
The question was about why it is true that:
"If elements of a row (or column) are multiplied with cofactors of any other row (or column), then their sum is zero."
To show why that is true, I first chose a row for the cofactors (in the example above, row 1), then a different row to multiply them by (in the example above, row 2).
Then I showed that the resulting sum is the same as the determinant of a different matrix (matrix $B$) in which row 1 is replaced by row 2.
So $B$ was not some random matrix. It was a matrix determined by our choices of
- the row (or column) along which to compute the cofactors, which I'll call $i$, and
- the other row (or column) which we want to multiply those cofactors by, which I'll call $k$.

Once we have chosen those two rows (or columns), we can see that the sum of the entries of row (or column) $k$ multiplied by the cofactors of row (or column) $i$ is the same as the cofactor expansion of a different matrix, in which row (or column) $i$ is replaced by row (or column) $k$, so that it contains two copies of row (or column) $k$: one in position $i$ and another in position $k$.
Hence, that matrix has a determinant of zero, so the sum is equal to zero.
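The same check works for columns. Here is a minimal sketch with an arbitrary random matrix (again, the names and values are my own, purely for illustration):

```python
# Entries of column k times the cofactors of column i sum to ~0 when k != i.
import numpy as np

def cofactor(M, i, j):
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

i, k = 0, 2                       # cofactors of column i, entries of column k
alien_sum = sum(A[r, k] * cofactor(A, r, i) for r in range(3))
print(alien_sum)                  # ~0: it is det(A) with column i replaced by column k
```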
Best Answer
No, we don’t need any $a_{ij}$ with $i>r$ to evaluate $A_{k1},A_{k2},\ldots,A_{kr}$ and $A_{ks}$.
To illustrate, suppose $r=2$. Pick any $k>r$ and $s>r$. The $(r+1)$-rowed minor $D$ in our case becomes
$$ D=\left|\begin{matrix}a_{11}&a_{12}&a_{1s}\\ a_{21}&a_{22}&a_{2s}\\ a_{k1}&a_{k2}&a_{ks}\end{matrix}\right|. $$
To evaluate it, Shilov expanded $D$ along the last row, so that the following $r$-rowed minors are used:
$$ A_{k1}=\left|\begin{matrix}a_{12}&a_{1s}\\ a_{22}&a_{2s}\end{matrix}\right|, \quad A_{k2}=\left|\begin{matrix}a_{11}&a_{1s}\\ a_{21}&a_{2s}\end{matrix}\right|, \quad A_{ks}=\left|\begin{matrix}a_{11}&a_{12}\\ a_{21}&a_{22}\end{matrix}\right|. $$
The “elements $a_{ij}$ with $i\le r$” refer to those elements that appear in the $2\times2$ submatrices above. As you can see, these elements are taken from the first $r$ ($=2$ in our example) rows of $A$; that is why their row indices $i$ are $\le r$.
Since the row indices of $a_{12},a_{1s},a_{22}$ and $a_{2s}$ do not involve $k$, the value of $A_{k1}$ does not really depend on $k$, and likewise for $A_{k2}$ and $A_{ks}$. In other words, for each column index $j\in\{1,2,s\}$ we have
$$ A_{r+1,j}=A_{r+2,j}=\cdots=A_{n,j}, $$
and this common value is denoted by $c_j$ in the book.
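As a small numerical illustration of that last point (my own, not from Shilov's book): if we build $D$ for different values of $k$ and expand along its last row, we get the same three $2\times2$ minors every time, because they only use entries from the first $r$ rows.

```python
# Illustration: the minors A_k1, A_k2, A_ks do not depend on k.
import numpy as np

rng = np.random.default_rng(2)
n, r, s = 5, 2, 4                 # assumed sizes: r = 2, some fixed column s > r
A = rng.standard_normal((n, n))

def minors_from_D(k):
    """Form the (r+1)-rowed minor D for a given k > r and return the 2x2
    minors A_k1, A_k2, A_ks used when expanding D along its last row."""
    D = A[np.ix_([0, 1, k - 1], [0, 1, s - 1])]   # rows 1, 2, k; columns 1, 2, s (1-based)
    top = D[:2]                                   # the first r rows of D: no dependence on k
    return (np.linalg.det(top[:, [1, 2]]),        # A_k1
            np.linalg.det(top[:, [0, 2]]),        # A_k2
            np.linalg.det(top[:, [0, 1]]))        # A_ks

for k in range(r + 1, n + 1):
    print(k, minors_from_D(k))    # the three minors are identical for every k
```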