There are some odd minor glitches in this question, which make it impossible to answer literally as asked. First of all, the asker wants $S$ to be in $GL_n(\mathbb{Z})$. But with integer coefficients, having full rank doesn't have the desired consequence. For example, $\left[ \begin{smallmatrix} 1&1 \\ 1&-1 \end{smallmatrix} \right]$ has full rank, but its determinant is even, so we can't make it into the identity by right multiplying it by an integer matrix. (An integer matrix has an inverse with integer entries if and only if its determinant is $\pm 1$.)
Secondly, the asker asks for the identity matrix to end up in the first $n$ rows.
But consider the full-rank matrix $\left[ \begin{smallmatrix} 0 \\ 1 \end{smallmatrix} \right]$. Its first row is $0$, so we can't make it into $1$.
Here are two corrected versions which both have simple answers:
Question Let $Q = \left[ \begin{smallmatrix} A \\ B \end{smallmatrix} \right]$ be a matrix so that we know there is an $S$ with $QS = \left[ \begin{smallmatrix} \mathrm{Id} \\ E \end{smallmatrix} \right]$. How do we find $S$ and $E$?
Answer Put $S=A^{-1}$ and $E = B A^{-1}$.
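For concreteness, here is a small sketch of that answer on a hypothetical $3\times 2$ example over the rationals (the helper names `inv2` and `matmul` are my own; exact arithmetic via Python's standard `fractions` module):

```python
from fractions import Fraction

# Hypothetical Q = [A; B] with A an invertible 2x2 block.
A = [[Fraction(2), Fraction(1)],
     [Fraction(1), Fraction(1)]]
B = [[Fraction(3), Fraction(4)]]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det],
            [-c / det, a / det]]

def matmul(X, Y):
    """Naive matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

S = inv2(A)        # S = A^{-1}
E = matmul(B, S)   # E = B A^{-1}

# The top block of QS is A S = Id; the bottom block is B S = E.
assert matmul(A, S) == [[1, 0], [0, 1]]
```

Here $QS$ stacks $AS=\mathrm{Id}$ on top of $BS=E$, which is exactly the shape asked for.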
Or, alternatively:
Question Let $Q$ be a matrix over some field of full rank. How can we find a matrix $S$ such that $QS$ looks as simple as possible?
Answer Use column reduction (the transpose of row reduction). The reduced
column echelon form of $Q$ will be of the form $QS$ and will have an identity matrix in some set of rows, but not necessarily the top rows.
Computer algebra systems usually produce row reductions, not column reductions, but that's easy to deal with. For example, in Mathematica, define
columnReduce[M_]:=Transpose[RowReduce[Transpose[M]]]
I believe the analogous MATLAB code is

function C = cref(M)
    C = transpose(rref(transpose(M)));
end
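If neither system is at hand, the same idea fits in a few lines of Python; this is a minimal sketch with exact rational arithmetic (the names `rref`, `transpose`, and `column_reduce` are my own):

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form over the rationals (Gauss-Jordan)."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # Find a pivot in column c at or below row r.
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]   # scale pivot row to leading 1
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

def transpose(M):
    return [list(col) for col in zip(*M)]

def column_reduce(M):
    """cref(M) = transpose of rref of the transpose."""
    return transpose(rref(transpose(M)))
```

For the matrix $\left[ \begin{smallmatrix} 0 \\ 1 \end{smallmatrix} \right]$ from above, `column_reduce` returns the matrix unchanged: the identity sits in the second row, not the first.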
Yes, this always works, and the reason is the following lemma:
Lemma. Let $K$ be a field and $V$ a vector space over $K$. Given linearly independent vectors $v_1,\dots,v_k\in V$ and any $w\in V$, we have
$$
\text{$v_1,\dots,v_k,w$ are linearly independent} \quad\Longleftrightarrow\quad w\notin\operatorname{span}(v_1,\dots,v_k).
$$
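For completeness, a short proof sketch of the nontrivial direction (my own wording):

```latex
\textbf{Proof sketch ($\Leftarrow$).}
Suppose $w \notin \operatorname{span}(v_1,\dots,v_k)$ and
\[
a_1 v_1 + \dots + a_k v_k + b w = 0 .
\]
If $b \neq 0$, then $w = -b^{-1}(a_1 v_1 + \dots + a_k v_k)$ lies in
$\operatorname{span}(v_1,\dots,v_k)$, a contradiction. Hence $b = 0$, and the
linear independence of $v_1,\dots,v_k$ forces $a_1 = \dots = a_k = 0$. $\square$
```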
In your situation $v_1,\dots,v_k$ are the rows of your $m\times n$ matrix. Since $m<n$, the rows don't span all of $\mathbb R^n$, so there must be at least one standard basis vector not in the row space. By the above lemma you can add that vector as a row, obtaining a new matrix with one more row and still linearly independent rows.
This can be repeated until you end up with a square matrix.
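A rough sketch of that procedure in Python (the function names `rank` and `complete_to_square` are my own; rank is computed by Gaussian elimination over the rationals):

```python
from fractions import Fraction

def rank(M):
    """Rank over the rationals via Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def complete_to_square(M):
    """Append standard basis rows until M is square, keeping rows independent."""
    n = len(M[0])
    M = [row[:] for row in M]
    for j in range(n):
        if len(M) == n:
            break
        e = [1 if k == j else 0 for k in range(n)]
        if rank(M + [e]) > rank(M):  # e_j is outside the row space (lemma)
            M.append(e)
    return M
```

Each appended row strictly increases the rank, so the resulting square matrix is invertible.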
Full rank means that the columns of the matrix are linearly independent; i.e., no column can be written as a combination of the others. When you multiply a matrix by a vector on the right, you are taking a linear combination of the columns; so if you can find a nonzero vector for which the product is the zero vector, then the columns are dependent and the matrix is not full rank.
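A tiny illustration of that criterion (a hypothetical $2\times 2$ matrix whose second column is twice the first, so its columns are dependent):

```python
M = [[1, 2],
     [3, 6]]          # second column = 2 * first column
v = [2, -1]           # nonzero vector encoding that dependency

# Right multiplication by v takes the combination 2*(col 1) - 1*(col 2).
Mv = [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]
assert Mv == [0, 0]   # a nonzero combination of the columns vanishes
```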