If you know the determinant is multilinear (linear in each row and in each column), then express the determinant as a sum of determinants of matrices that have repeated rows/columns. Then show that the determinant of a matrix with two identical rows/columns is always zero.
If you do not know the determinant is multilinear in the rows/columns, and you know the definition in terms of sums of products over all permutations, then the same argument works to show the determinant is a sum of determinants of matrices with repeated rows/columns.
If your only definition of the determinant is the inductive one (expansion by minors), then prove the statement by induction, expanding along a row or column that is not a linear combination of the other rows/columns.
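The multilinearity argument above can be checked numerically. This is a sketch (not the proof): the matrix and coefficients are made up for illustration, with the third row chosen as $2a - 3b$.

```python
import numpy as np

# Illustrative check of the multilinearity argument: if the third row
# is 2a - 3b, linearity in that row expands det(M) into determinants
# with a repeated row, each of which is zero.
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 4.0])
M = np.vstack([a, b, 2 * a - 3 * b])  # third row = 2a - 3b

# det(M) = 2*det([a, b, a]) - 3*det([a, b, b]) = 2*0 - 3*0 = 0
expanded = 2 * np.linalg.det(np.vstack([a, b, a])) \
         - 3 * np.linalg.det(np.vstack([a, b, b]))

print(np.isclose(np.linalg.det(M), 0.0))   # True
print(np.isclose(expanded, 0.0))           # True
```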
Edit: You've edited your question, and now the question is the precise converse of what you originally posted. This is a bit of bad form, since any answers already posted would automatically look completely wrong-headed. In the future, consider editing to add the new question to the old, indicating this is an edit or addition.
So, now you want to show that if the determinant is zero, then a row/column is a linear combination of other rows/columns. Passing to the transpose if necessary, it suffices to consider the case of rows. Note that applying elementary row operations does not change whether the determinant is zero or nonzero (an elementary row operation will either multiply the determinant by a nonzero constant, or will not change the determinant). So the determinant of the matrix is zero if and only if the determinant of its row-echelon form is zero; the determinant of the row-echelon form is zero if and only if there is a row of zeros. Tracing back the elementary row operations will express the original row as a linear combination of the other rows.
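As a numerical sketch of this converse (the matrix is my own example, with the third row equal to the sum of the first two): the determinant vanishes, and solving a small least-squares system recovers the coefficients of the dependency.

```python
import numpy as np

# det = 0  =>  some row is a linear combination of the others.
# Here the third row is row1 + row2, so the determinant vanishes.
M = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [5., 7., 9.]])

print(np.isclose(np.linalg.det(M), 0.0))       # True

# Recover the dependency: solve  c1*row1 + c2*row2 = row3
# by least squares on the first two rows.
c, _, _, _ = np.linalg.lstsq(M[:2].T, M[2], rcond=None)
print(np.round(c, 6))                          # approximately [1. 1.]
```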
I commend to you the Hungarian method. It's an algorithm that I'm not up to writing out here, but it's available in many combinatorics, discrete math, and graph theory textbooks, also many places on the web: Wikipedia in particular will get you started.
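If a software implementation is more convenient than working the algorithm by hand, SciPy's `scipy.optimize.linear_sum_assignment` solves the same assignment problem the Hungarian method addresses (SciPy's internal algorithm may differ, but the answer is the same optimum). The cost matrix below is a made-up example:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Minimize the total cost of assigning each row (worker) to one
# column (task); this is the problem the Hungarian method solves.
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)))        # one optimal assignment
print(cost[rows, cols].sum())       # minimal total cost: 5
```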
Best Answer
Suppose $A$ is your $m \times n$ matrix. From what I've gathered from our comments, what you want to be doing is projecting the vector $j \in \Bbb{R}^n$ containing only $1$s onto the rowspace of $A$. In this way, you'll obtain a linear combination of the rows that is as close to $j$ as possible, in terms of the Euclidean $2$-norm.
I suggest considering $(A^\top)^+ = (A^+)^\top$, the Moore-Penrose pseudoinverse of $A^\top$. Essentially, we want a least squares approximation of the matrix equation $$A^\top x = j$$ where $x$ is an arbitrary $m \times 1$ column vector, consisting of the coefficients in the linear combination of rows (and $j$ is considered an $n \times 1$ column vector). Then, if $w$ is an arbitrary $m \times 1$ column vector, choosing $$x = (A^\top)^+j + (I - (A^\top)^+A^\top)w$$ will produce an optimal linear combination. In particular, we can choose $w = 0$, and simply use $$x = (A^\top)^+j.$$ If the rows of $A$ are linearly independent, then the pseudoinverse is straightforward to compute: $$(A^\top)^+ = (AA^\top)^{-1}A.$$ Thus, you may want to consider removing rows that are linearly dependent on the others.
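The recipe above can be sketched with `np.linalg.pinv`; the particular $2 \times 3$ matrix $A$ below is an arbitrary example with independent rows, so the explicit formula $(A^\top)^+ = (AA^\top)^{-1}A$ applies as well:

```python
import numpy as np

# Project the all-ones vector j onto the rowspace of A,
# via x = (A^T)^+ j, as in the answer above.
A = np.array([[1., 0., 1.],
              [0., 1., 2.]])          # m = 2 rows, n = 3 columns
n = A.shape[1]
j = np.ones(n)                        # all-ones target in R^n

x = np.linalg.pinv(A.T) @ j           # coefficients x = (A^T)^+ j
proj = A.T @ x                        # closest point to j in the rowspace

# With independent rows, (A^T)^+ = (A A^T)^{-1} A gives the same x:
x_explicit = np.linalg.inv(A @ A.T) @ A @ j
print(np.allclose(x, x_explicit))     # True
print(proj)                           # best 2-norm approximation of j
```

One quick sanity check on the output: the residual `j - proj` should be orthogonal to every row of `A`, since `proj` is an orthogonal projection.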