I would look at this not as a system of equations but rather as an unknown linear map with transformation matrix $X$ such that $X^Ta^T=b^T$.
First we construct a basis of the domain, $B_1=\{u_1,u_2,\ldots,u_{n-1},a^T\}$, and a basis of the image, $B_2=\{v_1,v_2,\ldots,v_{n-1},b^T\}$. It doesn't matter what these look like apart from $a^T$ and $b^T$ being basis vectors. (Obviously we need $a\neq0$ and $b\neq0$; see Remark 2 below.)
Then we can simply calculate the transformation matrix by the standard procedure: apply elementary column operations until the top block becomes the identity matrix.
$$
\begin{bmatrix}
u_1 & u_2 & \ldots & u_{n-1} & a^T \\
\hline
v_1 & v_2 & \ldots & v_{n-1} & b^T
\end{bmatrix}\to
\begin{bmatrix}
I_n\\\hline
X^T
\end{bmatrix}
$$
For your special example this would read:
$$
\begin{bmatrix}
0 & 0 & 1 \\
0 & 1 & 2 \\
1 & 0 & 0 \\
\hline
0 & 0 & 5 \\
0 & 1 & 0 \\
1 & 0 & 0 \\
\end{bmatrix}\to
\begin{bmatrix}
0 & 0 & 1 \\
0 & 1 & 0 \\
1 & 0 & 0 \\
\hline
0 & 0 & 5 \\
0 & 1 & -2 \\
1 & 0 & 0 \\
\end{bmatrix}\to
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\hline
5 & 0 & 0 \\
-2 & 1 & 0 \\
0 & 0 & 1 \\
\end{bmatrix}
$$
which yields $$X=\begin{bmatrix}
5 & -2 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \end{bmatrix}$$
Remark 1: Obviously, you will get a different result for a different choice of basis.
Remark 2: If exactly one of $a=0$ or $b=0$ holds, there exists no invertible $X$. If $a=b=0$, any invertible matrix will do the trick.
Remark 3: You can omit the transpositions and calculate with row vectors and elementary row operations; that was just my personal preference.
Remark 4: Probably the easiest way is to take the canonical basis $e_1,\ldots,e_n$ and replace one of the vectors $e_k$ by $a^T$, as I did in the example. This always works as long as $e_k\cdot a^T\neq0$.
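The construction in Remark 4 can be packaged into a small routine. A sketch, assuming NumPy (the function name and the kernel-of-indices details are mine): pick an index $k$ with $a_k\neq0$ and an index $j$ with $b_j\neq0$, build the two bases with $a^T$ and $b^T$ as the last vectors, and read off $X$.

```python
import numpy as np

def transfer_matrix(a, b):
    """Sketch (name mine): return an invertible X with X^T a^T = b^T,
    i.e. a X = b for row vectors, for any nonzero a and b.
    Construction: canonical basis with one vector replaced (Remark 4)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n = a.size
    k = int(np.argmax(np.abs(a)))  # a_k != 0, so the domain list is a basis
    j = int(np.argmax(np.abs(b)))  # b_j != 0, same for the image list
    # Domain basis: canonical vectors without e_k, then a as last column;
    # image basis: canonical vectors without e_j, then b as last column.
    U = np.column_stack([np.delete(np.eye(n), k, axis=1), a])
    V = np.column_stack([np.delete(np.eye(n), j, axis=1), b])
    XT = V @ np.linalg.inv(U)      # X^T maps the i-th domain vector to the i-th image vector
    return XT.T

X = transfer_matrix([1, 2, 0], [5, 0, 0])
print(np.allclose(np.array([1, 2, 0]) @ X, [5, 0, 0]))  # True
```

Allowing the two replacement indices to differ means the routine works even when $a$ and $b$ have disjoint supports, where a single shared index $k$ would fail.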
To simplify, suppose that the system is$$\left\{\begin{array}{l}a_{11}x+a_{12}y+a_{13}z=0\\a_{21}x+a_{22}y+a_{23}z=0.\end{array}\right.\tag1$$Then either the matrix $A=\left[\begin{smallmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{smallmatrix}\right]$ is invertible or it isn't. If it is, take any number $z\in K$, and take $x,y\in K$ such that$$(x,y)=A^{-1}.(-a_{13}z,-a_{23}z).$$Then $A.(x,y)=(-a_{13}z,-a_{23}z)$; in other words, $(1)$ holds. Of course, if $a_{13}=a_{23}=0$, this gives $x=y=0$ whatever $z$ is, but in that case $(0,0,z)$ is a solution for every $z\in K$.
And if $A$ is not invertible, let $v=(\alpha,\beta)\in\ker A\setminus\{(0,0)\}$. Then, for each $\lambda\in K$, $(\lambda\alpha,\lambda\beta,0)$ is a solution of $(1)$.
If you are still looking for a reference, then that's the LEMMA ON LINEAR EQUATIONS from Nathan Jacobson's Basic Algebra I, 2nd edition, chap. 4, § 5, p. 237.
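The two cases in this argument can be sketched in code (NumPy assumed over $K=\mathbb{R}$; the function name and the sample coefficients are mine):

```python
import numpy as np

def nontrivial_solution(A3):
    """Sketch: find a nonzero (x, y, z) solving the 2x3 homogeneous
    system with coefficient matrix A3, following the case split above."""
    A = A3[:, :2]               # the 2x2 block [a11 a12; a21 a22]
    c = A3[:, 2]                # the column (a13, a23)
    if abs(np.linalg.det(A)) > 1e-12:
        # A invertible: pick z = 1 and set (x, y) = A^{-1} (-a13, -a23)
        x, y = np.linalg.solve(A, -c)
        return np.array([x, y, 1.0])
    # A singular: take (alpha, beta) in ker(A) \ {0} and use (alpha, beta, 0)
    _, _, Vt = np.linalg.svd(A)
    alpha, beta = Vt[-1]        # right singular vector for the smallest singular value
    return np.array([alpha, beta, 0.0])

A3 = np.array([[1.0, 2.0, 3.0],
               [4.0, 5.0, 6.0]])
s = nontrivial_solution(A3)
print(np.allclose(A3 @ s, 0))   # True
```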
I'm assuming you're referring to linear equations.
Although the linear system of equations $Ax = b$ might not have a solution when the system is overdetermined, you can always find a least-squares solution
$$\min_x \|Ax-b\|^2.$$
If the minimum value is $0$, a minimizer $x$ is a solution to the system. Otherwise, $x$ is the "closest possible" solution, in the sense of minimizing the residual error, to a system that has no solution.
To find a minimizer $x$, take the gradient with respect to $x$ and set it equal to zero, which yields the normal equations:
$$A^TAx - A^Tb = 0.$$
The matrix $A^TA$ might be singular, but $A^Tb$ always lies in its column space so this system always has a solution. You can find one such solution by calculating $x = A^{+}b$, where $A^{+}$ is the Moore-Penrose pseudoinverse of $A$.
So to find a solution to your overdetermined system, one approach is to compute(*) the pseudoinverse $A^{+}$ and then calculate $x = A^{+}b$. You can then check whether $Ax=b$ holds to see if your overdetermined system did in fact have an exact solution.
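As a sketch of that recipe in NumPy (the overdetermined system here is made up, and happens to be inconsistent):

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns (made-up numbers)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])

x = np.linalg.pinv(A) @ b       # least-squares solution via Moore-Penrose
residual = A @ x - b            # nonzero residual: no exact solution exists
exact = np.allclose(A @ x, b)   # the consistency check from the text
print(x, exact)                 # x is approximately [4/3, 4/3]; exact is False
```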
(*): My favorite method, in terms of robustness and efficiency, for computing the pseudoinverse is to use the QR decomposition of $A$. The details are beyond the scope of this answer, but worth looking up if you're interested.
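For completeness, here is one way that QR idea can look in code. This is my own illustration for the full-column-rank case, not necessarily the exact method the footnote has in mind: with $A=QR$ (reduced QR, $R$ invertible), the normal equations reduce to $Rx = Q^Tb$.

```python
import numpy as np

def lstsq_via_qr(A, b):
    """Sketch: least-squares solution of Ax = b via the reduced QR
    decomposition, assuming A has full column rank. Then A = QR with
    R invertible, and x = R^{-1} Q^T b solves the normal equations."""
    Q, R = np.linalg.qr(A)            # reduced QR: Q is m x n, R is n x n
    return np.linalg.solve(R, Q.T @ b)

# Made-up example: fit a line c + m*t through (1,1), (2,2), (3,2)
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])
print(lstsq_via_qr(A, b))  # matches np.linalg.lstsq on the same system
```

The advantage over forming $A^TA$ explicitly is numerical: $A^TA$ squares the condition number of $A$, while the QR route works with $A$ directly.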