Here is a slightly more long-range answer: a matrix corresponds to a linear operator $T:V\to W$ where $V$ and $W$ are vector spaces with some chosen bases. The elementary row operations (or elementary column operations) then correspond to changing the basis of $W$ or of $V$ to give an equivalent matrix: one which represents the same linear operator but with the bases changed around. Under this correspondence you can get all the possible matrices corresponding to the linear operator $T$ by doing elementary row and column operations.
Adding or removing a row of zeroes will not give you a matrix corresponding to the linear operator $T$.
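This correspondence can be checked numerically: an elementary row operation is the same as left multiplication by an invertible (elementary) matrix, so it changes the matrix without changing the underlying operator. A minimal sketch with numpy (the matrix is the one from the worked example below; the variable names are mine):

```python
import numpy as np

# The 3x3 matrix used in the determinant example below.
A = np.array([[1., 7., -3.],
              [1., 3., 1.],
              [4., 8., 1.]])

# The row operation R2 -> R2 - R1 is left multiplication by an
# elementary matrix E (identity with one extra off-diagonal entry).
E = np.eye(3)
E[1, 0] = -1.0                 # encodes "subtract row 1 from row 2"

row_reduced = E @ A            # same result as doing the row op by hand
manual = A.copy()
manual[1] -= manual[0]
assert np.allclose(row_reduced, manual)

# E is invertible, so E @ A represents the same operator T,
# just with the basis of the codomain changed.
assert not np.isclose(np.linalg.det(E), 0.0)
```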
The key idea in using row operations to evaluate the determinant of a matrix is the fact that a triangular matrix (one with all zeros below the main diagonal) has a determinant equal to the product of the numbers on the main diagonal. Therefore one would like to use row operations to 'reduce' the matrix to triangular form.
However, the effects of the three row operations on a determinant are a bit different from their effects when they are used to reduce a system of linear equations.
(1) Swapping two rows changes the sign of the determinant
(2) Dividing a row by a nonzero constant places that constant as a factor in front of the determinant (equivalently, multiplying a row by a constant multiplies the determinant by that constant).
(3) Adding a multiple of one row to another does not change the value of the determinant.
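These three rules can be sketched as quick numerical checks with numpy (a random matrix is enough, since the rules hold for any square matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
d = np.linalg.det(A)

# (1) Swapping two rows flips the sign of the determinant.
B = A.copy()
B[[0, 2]] = B[[2, 0]]
assert np.isclose(np.linalg.det(B), -d)

# (2) Dividing a row by a constant c: c becomes a factor in front,
#     i.e. d = c * det(matrix with that row divided by c).
c = 5.0
C = A.copy()
C[1] /= c
assert np.isclose(c * np.linalg.det(C), d)

# (3) Adding a multiple of one row to another changes nothing.
D = A.copy()
D[3] += 2.5 * D[0]
assert np.isclose(np.linalg.det(D), d)
```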
Let's apply these operations to your matrix to find its determinant.
First we want to produce two zeros in rows $2$ and $3$ of column $1$. (Remember our goal is to produce all zeros below the main diagonal, and we do this one column at a time beginning with column $1$.)
The two row operations $-R_1+R_2\to R_2$ and $-4R_1+R_3\to R_3$ will accomplish this goal, and will not change the value of the determinant.
\begin{eqnarray}
\begin{vmatrix}1 & 7 & -3\\ 1 & 3 & 1\\ 4 & 8 & 1 \end{vmatrix} &=&
\begin{vmatrix} 1 & 7 & -3\\0 & -4 & 4\\0 & -20 & 13\end{vmatrix}
\end{eqnarray}
Now all that remains is to obtain a $0$ in row $3$ column $2$. We see that adding $-5$ times row $2$ to row $3$ will accomplish this. That is, $-5R_2+R_3\to R_3$.
\begin{eqnarray}
\begin{vmatrix}1 & 7 & -3\\ 1 & 3 & 1\\ 4 & 8 & 1 \end{vmatrix} &=&
\begin{vmatrix} 1 & 7 & -3\\0 & -4 & 4\\0 & -20 & 13\end{vmatrix}\\
&=& \begin{vmatrix}
1 & 7 & -3\\0 & -4 & 4\\0 & 0 & -7
\end{vmatrix}
\end{eqnarray}
Since we only used the third row operation, the one which does not change the value of the determinant, and since we now have a triangular matrix, we find the determinant by multiplying the entries on the main diagonal.
\begin{eqnarray}
\begin{vmatrix}1 & 7 & -3\\ 1 & 3 & 1\\ 4 & 8 & 1 \end{vmatrix} &=&
\begin{vmatrix} 1 & 7 & -3\\0 & -4 & 4\\0 & -20 & 13\end{vmatrix}\\
&=& \begin{vmatrix}
1 & 7 & -3\\0 & -4 & 4\\0 & 0 & -7
\end{vmatrix}\\
&=&28
\end{eqnarray}
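The whole computation can be double-checked in numpy: reproduce the three row operations, multiply the diagonal, and compare against the library determinant.

```python
import numpy as np

A = np.array([[1., 7., -3.],
              [1., 3., 1.],
              [4., 8., 1.]])
det_direct = np.linalg.det(A)

# The two steps that clear column 1 (type-3 operations, determinant unchanged):
A[1] -= A[0]          # -R1 + R2 -> R2  gives row [0, -4, 4]
A[2] -= 4 * A[0]      # -4R1 + R3 -> R3 gives row [0, -20, 13]

# Clear row 3, column 2:
A[2] += -5 * A[1]     # -5R2 + R3 -> R3 gives row [0, 0, -7]

# Triangular matrix: determinant is the product of the diagonal.
det_triangular = np.prod(np.diag(A))
assert np.isclose(det_triangular, 28.0)
assert np.isclose(det_direct, 28.0)
```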
In your example, if you interchange the first two columns you'll get $$ \left[\matrix{ x_1 & x_2 & x_3 \\ 2 & 1 & 3 \\ 4 & 4 & 2 \\ 1 & 1 & 4}\right]\quad\sim\quad \left[\matrix{ x_2 & x_1 & x_3 \\ 1 & 2 & 3 \\ 4 & 4 & 2 \\ 1 & 1 & 4}\right]. $$ What happened in the topmost row is that the roles of $x_1$ and $x_2$ have been interchanged. When you get a solution to the modified system as $(2,-2,4)$, you must not forget to switch the variables back to the original ones.
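A sketch of this bookkeeping in numpy. The right-hand side $b$ is not stated above, so the vector used here is a hypothetical one chosen to be consistent with the quoted solution $(2,-2,4)$ of the column-swapped system:

```python
import numpy as np

# Coefficient matrix of the original system A x = b.
A = np.array([[2., 1., 3.],
              [4., 4., 2.],
              [1., 1., 4.]])
# Hypothetical right-hand side consistent with the quoted solution.
b = np.array([10., 8., 16.])

# Swap the first two columns; the roles of x1 and x2 swap with them.
A_swapped = A[:, [1, 0, 2]]
x_tilde = np.linalg.solve(A_swapped, b)
assert np.allclose(x_tilde, [2., -2., 4.])

# Switch the variables back: x1 = x_tilde[1], x2 = x_tilde[0].
x = x_tilde[[1, 0, 2]]
assert np.allclose(A @ x, b)   # x solves the original system
```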
A similar thing happens if you take, say, the first column minus the second column $$ \left[\matrix{ x_1 & x_2 & x_3 \\ 2 & 1 & 3 \\ 4 & 4 & 2 \\ 1 & 1 & 4}\right]\quad\sim\quad \left[\matrix{ \tilde x_1 & \tilde x_2 & \tilde x_3 \\ 1 & 1 & 3 \\ 0 & 4 & 2 \\ 0 & 1 & 4}\right]. $$ To see the relation between the old and the new variables, we write $$ A_1x_1+A_2x_2+A_3x_3=(A_1-A_2)\tilde x_1+A_2\tilde x_2+A_3\tilde x_3= A_1\tilde x_1+A_2(\tilde x_2-\tilde x_1)+A_3\tilde x_3. $$ Hence $x_1=\tilde x_1$, $x_2=\tilde x_2-\tilde x_1$ and $x_3=\tilde x_3$, i.e. your variables have changed again.
In general, an elementary row operation is equivalent to left multiplication by an elementary matrix $$ Ax=b\quad\implies\quad \underbrace{LA}_{\tilde A}x=\underbrace{Lb}_{\tilde b}, $$ while a column operation corresponds to right multiplication by an elementary matrix. Because this multiplication acts on $A$ from the right and matrix multiplication is not commutative, we have to compensate for it by a change of variables with the inverse matrix $$ Ax=b\quad\implies\quad \underbrace{AR}_{\tilde A}\underbrace{R^{-1}x}_{\tilde x}=b. $$
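The identity $AR\,(R^{-1}x)=Ax=b$ can be sketched numerically. Here $R$ encodes the column operation $C_1\to C_1-C_2$ from the example above (applied to a random system, since only the algebra is being checked):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
b = A @ x

# Column operation C1 -> C1 - C2 as right multiplication:
# R[:, 0] = [1, -1, 0], so (A @ R)[:, 0] = A[:, 0] - A[:, 1].
R = np.eye(3)
R[1, 0] = -1.0
A_tilde = A @ R
assert np.allclose(A_tilde[:, 0], A[:, 0] - A[:, 1])

# Compensate with the inverse: the new unknowns are x_tilde = R^{-1} x,
# and the transformed system has the same right-hand side b.
x_tilde = np.linalg.inv(R) @ x
assert np.allclose(A_tilde @ x_tilde, b)
```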