Your particular example has a simple answer. The subspace whose basis
is the rows of your matrix is called a first-order Reed-Muller code
of length 16, and its dual (null space) is the second-order Reed-Muller
code of length 16. Denoting the rows by $1, x_1, x_2, x_3, x_4$ respectively,
the dual code has basis vectors that are
$$1, x_1, x_2, x_3, x_4, x_1x_2, x_1x_3,
x_1x_4, x_2x_3, x_2x_4, x_3x_4$$
where $x_ix_j$ is the element-by-element product of the row vectors,
e.g. $x_1x_2=0000000000001111$ and $x_2x_3=0000001100000011$.
More generally, the dual of the first-order Reed-Muller code
of length $2^m$ is the $(m-2)^{\text{th}}$-order Reed-Muller
code of length $2^m$, also known as the extended Hamming
code of length $2^m$, and the basis vectors can be taken as all
the monomials of degree $m-2$ or less.
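As a sanity check, the construction above can be sketched in a few lines of Python, assuming the column-ordering convention that reproduces the example vectors (row $x_i$ reads off bit $m-i$ of the column index):

```python
from itertools import combinations

m = 4
n = 2 ** m  # code length 16

# Generator rows of the first-order Reed-Muller code: the all-ones
# vector and x_1..x_4, where x_i reads bit m-i of the column index
# (an assumed convention chosen to match the example vectors).
one = [1] * n
x = [[(j >> (m - 1 - i)) & 1 for j in range(n)] for i in range(m)]

def product(u, v):
    # element-by-element product of two row vectors
    return [a * b for a, b in zip(u, v)]

# The two example products from the text
assert product(x[0], x[1]) == [0] * 12 + [1] * 4       # x1*x2
assert product(x[1], x[2]) == ([0] * 6 + [1] * 2) * 2  # x2*x3

# Dual basis: all monomials of degree m - 2 = 2 or less
dual_basis = [one] + x + [product(x[i], x[j])
                          for i, j in combinations(range(m), 2)]
assert len(dual_basis) == 11

# Every dual basis vector is orthogonal (mod 2) to every generator row
for u in [one] + x:
    for v in dual_basis:
        assert sum(a * b for a, b in zip(u, v)) % 2 == 0
```

The orthogonality check works because the inner product of two monomials of combined degree $d < m$ counts the $2^{m-d}$ points where their product is $1$, which is even.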
More broadly, in $\mathbb F_2^n$, given a $k\times n$ matrix of
row rank $k$, put it in reduced row-echelon form and permute
columns as needed to express the matrix in the form
$[I_{k\times k}\quad P]$
where $P$ is a ${k\times(n-k)}$ matrix.
The null space is spanned by the rows of $[P^T\quad I_{(n-k)\times(n-k)}]$.
Now undo the column permutations to get the basis vectors
for the original problem. For vector spaces $\mathbb F_q^n$
where $q$ is not a power of $2$, use $-P^T$ instead of $P^T$.
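A minimal sketch of this recipe, assuming the matrix is already in the form $[I\ P]$ (so no column permutations are needed), with a hypothetical small example over $\mathbb F_5$; over $\mathbb F_2$ the minus sign is a no-op, so the same code covers both cases:

```python
p = 5  # working over F_5 (any prime modulus)

# A k x n matrix already in the form [I_k | P] (hypothetical example)
A = [
    [1, 0, 2, 3],
    [0, 1, 4, 1],
]
k, n = len(A), len(A[0])
P = [row[k:] for row in A]  # the k x (n-k) block

# Null-space basis: the rows of [-P^T | I_{n-k}], reduced mod p
basis = []
for j in range(n - k):
    row = ([(-P[i][j]) % p for i in range(k)]
           + [1 if t == j else 0 for t in range(n - k)])
    basis.append(row)

# Verify: A v = 0 (mod p) for every basis vector v
for v in basis:
    for a_row in A:
        assert sum(x * y for x, y in zip(a_row, v)) % p == 0
```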
You can’t really get the left null space directly from just the rref, but if you first augment the matrix with the appropriately-sized identity and then row-reduce it, the row vectors to the right of the zero rows of the rref constitute a basis for the left null space.
Using your example, row-reduce $$\left[\begin{array}{ccc|cc}1&2&4 & 1&0 \\ 2&4&8 & 0&1 \end{array}\right] \to \left[\begin{array}{ccc|rc} 1&2&4 & 1 &0 \\ 0&0&0 & -2&1 \end{array}\right].$$ The left null space is thus $\operatorname{span}\{(-2,1)\}$.
As for why this works, see this question. I’ll repeat a caveat from there: this method often doesn’t give you a “nice” basis, in that the vectors are often rather large multiples of what you would’ve computed by the more usual method of applying Gaussian elimination to the transpose.
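Here is one way to sketch the augment-and-reduce trick in plain Python with exact rational arithmetic, using the example matrix above; note that the basis vector comes out as a scalar multiple of $(-2,1)$, which illustrates the caveat about not getting a “nice” basis:

```python
from fractions import Fraction

def rref(M):
    """Row-reduce a matrix of Fractions in place; return it."""
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

A = [[1, 2, 4], [2, 4, 8]]
k, n = len(A), len(A[0])

# Augment [A | I] and row-reduce the whole thing
aug = [[Fraction(x) for x in row]
       + [Fraction(int(i == j)) for j in range(k)]
       for i, row in enumerate(A)]
R = rref(aug)

# Rows whose left block is zero carry left-null-space vectors on the right
left_null = [row[n:] for row in R if all(x == 0 for x in row[:n])]

# Verify: v A = 0 for every vector found
for v in left_null:
    assert all(sum(v[i] * A[i][j] for i in range(k)) == 0 for j in range(n))
```

Here the full rref yields $(1, -\tfrac12)$ rather than $(-2, 1)$, since complete reduction rescales and combines the augmented rows.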
If you can code up row-reduction by “hand”, then you should be able to use either of the methods described here to find a kernel basis.