I don't quite follow the first paragraph of your question. But reading the rest of your post: if all you need is a set $S$ of $N$ $m$-dimensional vectors over a finite field of small order such that every subset $S' \subset S$ of cardinality $m$ is linearly independent, then $S$ is precisely the set of columns of a parity-check matrix of an MDS code over $\mathbb{F}_q$.
Take an $m \times N$ parity-check matrix $H$ of an $[N, N-m, m+1]_q$ code (i.e., a $q$-ary linear code of length $N$, dimension $N-m$, and minimum distance $m+1$). Then whichever $m$ or fewer columns you pick from $H$, they are always linearly independent.
The binary case is no good because the only binary MDS codes are the trivial ones. For larger $q$, nontrivial MDS codes are known; Reed–Solomon codes are the standard examples. They are cyclic codes, so you can realize the codewords as an ideal of the polynomial ring $\mathbb{F}_q[x]/(x^N-1)$: you generate them by multiplying a certain monic polynomial dividing $x^N-1$ (the generator polynomial) by all polynomials of degree less than $N-m$ (including $0$). Maybe this is systematic enough to work for your purpose?
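To make this concrete, here is a small sketch (my own illustration, not from the question) using the Vandermonde-style parity-check matrix of a Reed–Solomon code over a hypothetical prime field $\mathbb{F}_7$: row $i$, column $j$ is $\alpha_j^i$ for distinct nonzero $\alpha_j$, and we verify by brute force that every choice of $m$ columns is linearly independent.

```python
from itertools import combinations

q = 7                        # a prime, so arithmetic mod q is the field F_q
m = 3                        # any m columns of H should be independent
alphas = list(range(1, q))   # q - 1 distinct nonzero field elements
N = len(alphas)

# Vandermonde parity-check matrix of a Reed-Solomon code:
# H[i][j] = alpha_j^i mod q.
H = [[pow(a, i, q) for a in alphas] for i in range(m)]

def rank_mod_p(rows, p):
    """Rank of a matrix over F_p by Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)        # Fermat inverse (p prime)
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col] % p:
                f = rows[r][col]
                rows[r] = [(x - f * y) % p for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return rank

# Every m-subset of columns forms an invertible Vandermonde matrix.
ok = all(
    rank_mod_p([[H[i][j] for j in cols] for i in range(m)], q) == m
    for cols in combinations(range(N), m)
)
print(ok)   # → True
```

The check succeeds because any $m$ columns form an $m \times m$ Vandermonde matrix with distinct nodes, whose determinant is a product of nonzero differences in $\mathbb{F}_q$.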
In any case, you might want to clarify your question a bit or, if MDS codes are indeed what you would like to construct, pick your favorite coding theory textbook and read up on them.
The answer is yes. In fact, an even stronger claim is true: there exists some $N$ such that for all $n \geq N, \ P_{1}^{n}, \dots, P_{k}^n$ are linearly independent over $\mathbb{C}$.
For this we will use a generalization of the Mason–Stothers theorem which appears on the Wikipedia page (I have taken the special case of the curve $C = \mathbb{P}^{1}(\mathbb{C})$ and written it in slightly different language):
Let $q_1, \dots, q_{k}$ be polynomials such that $q_1 + \cdots + q_{k} = 0$ and every proper subset of $q_1, \dots, q_{k}$ is linearly independent. Then,
$$\max \left\{ \mathrm{deg} \left( q_1 \right), \dots, \mathrm{deg} \left( q_{k} \right) \right\} \leq \frac{(k - 1)(k - 2)}{2} \left( \mathrm{deg} \left( \mathrm{rad} \left( q_1 \cdots q_{k} \right) \right) - 1\right)$$
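As a quick sanity check of the lemma (my own illustration, with a hypothetical example), one can compute both sides for $k = 3$ with sympy, using the identity $\mathrm{rad}(f) = f/\gcd(f, f')$ up to a constant: take $q_1 = x^2$, $q_2 = -(x^2 - 1)$, $q_3 = -1$, which sum to zero with every proper subset independent.

```python
import sympy as sp

x = sp.symbols('x')

def rad_degree(f):
    """deg rad(f), computed via rad(f) = f / gcd(f, f') up to a constant."""
    f = sp.expand(f)
    return sp.degree(sp.exquo(f, sp.gcd(f, sp.diff(f, x)), x), x)

# q1 + q2 + q3 = 0, and every proper subset is linearly independent.
q1 = x**2
q2 = -(x**2 - 1)
q3 = sp.Integer(-1)
assert sp.expand(q1 + q2 + q3) == 0

k = 3
max_deg = max(sp.degree(q, x) for q in (q1, q2, q3))
bound = (k - 1) * (k - 2) // 2 * (rad_degree(q1 * q2 * q3) - 1)
print(max_deg, bound)   # → 2 2
```

Here the bound is tight: $\mathrm{rad}(q_1 q_2 q_3) = x(x-1)(x+1)$ has degree $3$, giving $\max \deg q_i = 2 \leq \tfrac{(3-1)(3-2)}{2}(3 - 1) = 2$.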
Now we can prove the claim by induction on $k$. For $k = 2$ it is clear: if $P_1^n$ and $P_2^n$ were proportional, comparing roots and multiplicities would show that $P_1$ and $P_2$ are proportional too. By the induction hypothesis, for all $n$ large enough every proper subset of $P_{1}^{n}, \dots, P_{k}^{n}$ is linearly independent. Suppose for contradiction that there exist constants $\lambda_{1}, \dots, \lambda_{k}$, not all zero, such that
$$\lambda_1 P_{1}^n + \cdots + \lambda_{k} P_{k}^n = 0$$
Letting $q_i = \lambda_i P_{i}^{n}$, notice that $q_1, \dots, q_k$ satisfy the requirements of the lemma: we may assume every $\lambda_i \neq 0$, since otherwise the relation would contradict the linear independence of a proper subset. Therefore
$$n \leq \max \left\{ \mathrm{deg} \left( q_1 \right), \dots, \mathrm{deg} \left( q_{k} \right) \right\} \leq \frac{(k - 1)(k - 2)}{2} \left( \mathrm{deg} \left( \mathrm{rad} \left( q_1 \cdots q_{k} \right) \right) - 1\right) = \frac{(k - 1)(k - 2)}{2} \left( \mathrm{deg} \left( \mathrm{rad} \left( P_1 \cdots P_k \right) \right) - 1 \right)$$
but the right-hand side is a constant independent of $n$, so for $n$ large enough we get a contradiction.
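The argument can be illustrated numerically (my own sketch, with a hypothetical choice of polynomials): pack the coefficients of $P_1^n, \dots, P_k^n$ into a matrix and check its rank, which decides linear independence over $\mathbb{Q}$ and hence over $\mathbb{C}$.

```python
import sympy as sp

x = sp.symbols('x')

def independent(polys):
    """Linear independence over Q (equivalently C, as rank is field-independent)."""
    ps = [sp.Poly(sp.expand(p), x) for p in polys]
    d = max(p.degree() for p in ps)
    # coefficient matrix: one row per polynomial, padded to a common length
    rows = [[0] * (d + 1 - len(p.all_coeffs())) + p.all_coeffs() for p in ps]
    return sp.Matrix(rows).rank() == len(polys)

# Hypothetical example: pairwise non-proportional linear polynomials.
P = [x, x + 1, x + 2]
for n in (1, 2, 3):
    print(n, independent([p**n for p in P]))
# → 1 False, 2 True, 3 True
```

For this example $N = 2$ already works: the first powers are three vectors in a two-dimensional space, but squaring (and every higher power) makes them independent.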
Best Answer
By "nearly binary" do you mean most entries are $0$ or $1$ but a few are not? I will assume that all entries are $0$ or $1$; my main comments only use whether an entry is zero or non-zero.
To be sure of having maximum rank you will need to look at each column. You can start with any independent set (such as the set consisting of a single column) and then examine the rest of the columns in some order. If the next column is independent of the current set, adjoin it as a new member; otherwise discard it. This results in a maximum-size independent set (a basis of the column space). Depending on how sparse the matrix is, deciding whether the next column can be added could be pretty easy, at least for a while.
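For a $0/1$ matrix considered over $\mathrm{GF}(2)$, the greedy scan above can be sketched with the classic "XOR basis" trick (my own illustration, assuming arithmetic mod 2 is what's wanted): each column is bit-packed into an integer, and kept columns are stored reduced, keyed by their highest set bit, so testing the next column costs a few XORs instead of a fresh elimination.

```python
def greedy_column_basis(columns):
    """Greedily select a maximal independent set of 0/1 columns over GF(2).

    Columns are bit-packed ints (bit i of the int = entry in row i).
    Returns the indices of the columns kept for the basis.
    """
    basis = {}            # highest set bit -> reduced column vector
    chosen = []
    for j, v in enumerate(columns):
        while v:
            h = v.bit_length() - 1
            if h not in basis:
                basis[h] = v          # new pivot: keep this column
                chosen.append(j)
                break
            v ^= basis[h]             # cancel the leading bit, keep reducing
        # if v was reduced to 0, the column depends on the chosen ones
    return chosen

# The third column is the mod-2 sum of the first two, so it is discarded.
cols = [0b0011, 0b0101, 0b0110, 0b1000]
print(greedy_column_basis(cols))   # → [0, 1, 3]
```

If the matrix should instead be treated over the rationals, the same greedy loop works with an exact rank test in place of the XOR reduction.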
Here are a few simple ideas to boost efficiency:
If the matrix is very sparse then use efficient sparse-matrix algorithms and representations; these might involve storing it as a set of ordered tuples $(i,j)$ or $(i,j,a_{ij})$ giving the locations (and values) of the non-zero entries. There is bound to be highly efficient and well-validated existing code which will provide you with a basis for the column space of a (sparse) matrix.
Let the depth of a column be the index of the first row with a non-zero entry. Sort the columns according to depth (don't bother breaking ties by looking further). So far this might not require you to look at all the entries, just those down to the first non-zero one in each column. If the matrix is quite sparse then one might expect many different depths.
Pick one column from each depth class; columns with distinct depths are automatically linearly independent, so this gives a starting independent set which might already include a relatively large number of columns. Assume that this has been done.
Go through the current set of independent columns and look for any row which is zero in all of the selected columns. Now look through the undiscarded columns and try to find one which is non-zero in that position; if one is found, add it to the set (it is independent of the others, since they all vanish in that row). This enlarges the set of columns.
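The depth-seeding step can be sketched as follows (my own illustration; the tuple list, the function name `seed_by_depth`, and the sample entries are all hypothetical): from a sparse list of $(i, j)$ positions of the non-zero entries, compute each column's depth and keep one column per depth class.

```python
def seed_by_depth(entries, ncols):
    """Pick one column per depth (index of the first row with a non-zero entry).

    `entries` is a sparse list of (row, col) positions of non-zero entries.
    Columns with distinct depths are linearly independent over any field
    (the depth pattern is triangular), so this is a cheap starting set.
    """
    depth = [None] * ncols
    for i, j in entries:
        if depth[j] is None or i < depth[j]:
            depth[j] = i
    by_depth = {}
    for j, d in enumerate(depth):
        if d is not None and d not in by_depth:
            by_depth[d] = j          # first column seen at this depth
    return sorted(by_depth.values())

# Hypothetical sparse matrix: columns 0 and 2 share depth 0, so only one is kept.
entries = [(0, 0), (0, 2), (1, 1), (2, 0), (2, 3), (3, 2)]
print(seed_by_depth(entries, 4))   # → [0, 1, 3]
```

The returned indices would then feed into the greedy independence scan described above as the initial independent set.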