[Math] What makes a linear system of equations “unsolvable”

linear algebra, systems of equations

I've been studying simple systems of equations, so I came up with this example off the top of my head:
$$
\begin{cases}
x + y + z = 1 \\[4px]
x + y + 2z = 3 \\[4px]
x + y + 3z = -1
\end{cases}
$$

Combining the first two equations (subtracting the first from the second, then substituting back) yields
\begin{gather}
z = 2 \\ x + y = -1.
\end{gather}

But substituting $z = 2$ in the third equation implies $$x + y = -7,$$ while substituting $x + y = -1$ in the third equation implies $$z = 0.$$

I notice that $x + y$ appears in all three equations, so if we define $w = x + y$ this essentially becomes three equations in two variables (written out below), which explains why I could solve for a variable using only the first two equations, and why the third equation doesn't agree.
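Writing that substitution out, the system in $w$ and $z$ is
$$
\begin{cases}
w + z = 1 \\[4px]
w + 2z = 3 \\[4px]
w + 3z = -1
\end{cases}
$$
so any two of the three equations already pin down $(w, z)$, and the remaining one has no reason to agree: here the first two give $w = -1$, $z = 2$, while the third demands $w + 3z = -1$, i.e. $5 = -1$.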

So here is my question:
What is the distinguishing feature of systems of equations that determines whether or not they have a solution?
Perhaps put another way: in general, is there a "check" one can do on a system (aside from actually trying to solve it) to determine if there will be a solution?

Edit:
Thanks for all the input everyone. I'm satisfied knowing that in general, there is no way to simply inspect such a linear system to determine if it has a solution – rather, some work is required.
The geometric interpretations of linear systems of equations given in several answers were very helpful. Specifically: interpreting $Ax = b$ as looking for a vector $x$ that the linear transformation represented by the matrix $A$ maps to the vector $b$ (which may not be possible if $A$ maps $n$ dimensions down to $n-1$ dimensions).

Best Answer

That's one of the main reasons why linear algebra was invented!

First we translate the problem into matrices: if $$ \mathbf{A}=\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 2 \\ 1 & 1 & 3 \end{bmatrix} \qquad \mathbf{x}=\begin{bmatrix} x \\ y \\ z \end{bmatrix} \qquad \mathbf{b}=\begin{bmatrix} 1 \\ 3 \\ -1 \end{bmatrix} $$ then the system can be rewritten as $\mathbf{A}\mathbf{x}=\mathbf{b}$. This is not in itself a great simplification, but it lets us treat the unknowns as a single object.
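For readers who like to experiment, here is a minimal NumPy sketch of the same setup. It only shows that $\mathbf{A}$ is not invertible; by itself that does not yet decide whether this particular $\mathbf{b}$ is reachable, which is the question addressed next.

```python
import numpy as np

# Coefficient matrix A and right-hand side b from the system above
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [1.0, 1.0, 3.0]])
b = np.array([1.0, 3.0, -1.0])

# A direct solve fails: the first two columns of A are identical, so A is singular
try:
    print(np.linalg.solve(A, b))
except np.linalg.LinAlgError as err:
    print("np.linalg.solve failed:", err)  # "Singular matrix"
```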

A big advance is obtained by interpreting this in terms of linear maps. The matrix $\mathbf{A}$ induces a linear map $f_{\mathbf{A}}\colon\mathbb{R}^3\to\mathbb{R}^3$ defined by $$ f_{\mathbf{A}}(\mathbf{v})=\mathbf{A}\mathbf{v} $$ and now solvability of the linear system becomes the question

does the vector $\mathbf{b}$ belong to the image of $f_{\mathbf{A}}$?

The image $\operatorname{Im}(f_{\mathbf{A}})$ is a vector subspace of $\mathbb{R}^3$; if it has dimension $3$, then clearly the system is solvable. But what if the dimension is less than $3$?

This is the “obstruction” for the solvability: when the dimension of the image (the rank of the linear map and of the matrix $\mathbf{A}$) is less than the dimension of the codomain (in your case $3$) the system can be solvable or not, depending on whether $\mathbf{b}$ belongs to the image or not.
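A convenient way to state this check (the Rouché–Capelli theorem) is: the system is solvable exactly when appending $\mathbf{b}$ to $\mathbf{A}$ as an extra column does not increase the rank. Here is a minimal NumPy sketch of that test; the helper name `is_solvable` is my own choice, and for a small integer example like this the numerical rank computation is reliable.

```python
import numpy as np

def is_solvable(A, b):
    """Rouché–Capelli: b lies in the image of A iff rank(A) == rank([A | b])."""
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))

A = np.array([[1, 1, 1],
              [1, 1, 2],
              [1, 1, 3]], dtype=float)

print(is_solvable(A, np.array([1.0, 3.0, -1.0])))  # False: the rank jumps from 2 to 3
print(is_solvable(A, np.array([1.0, 3.0, 5.0])))   # True: the rank stays at 2
```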

There is no “general answer” that allows one to just look at $\mathbf{A}$ and $\mathbf{b}$ and tell whether the system is solvable. Rather, there are efficient techniques that show whether the system has a solution without actually solving it. A very good one is doing elementary row operations, because these correspond to multiplying both sides of the system by an invertible matrix. In the present case, we do \begin{align} \left[\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 1 & 1 & 2 & 3\\ 1 & 1 & 3 & -1 \end{array}\right] &\to \left[\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 2\\ 0 & 0 & 2 & -2 \end{array}\right] &&\begin{aligned} R_2&\gets R_2-R_1 \\ R_3&\gets R_3-R_1 \end{aligned} \\&\to \left[\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 2\\ 0 & 0 & 0 & -6 \end{array}\right] &&R_3\gets R_3-2R_2 \end{align} At this stage we know that the system is not solvable. We also know that the rank of $\mathbf{A}$ is $2$ and even that the image is spanned by the vectors $$ \begin{bmatrix}1\\1\\1\end{bmatrix} \qquad \begin{bmatrix}1\\2\\3\end{bmatrix} $$ This is easy for the present situation, but the method can be applied to systems of any size, not necessarily with as many equations as unknowns.
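If you want a machine to carry out the same elimination exactly (over the rationals, with no rounding), SymPy's `rref` will do it; here is a minimal sketch, assuming SymPy is available:

```python
from sympy import Matrix

# Augmented matrix [A | b] for the original system
M = Matrix([[1, 1, 1,  1],
            [1, 1, 2,  3],
            [1, 1, 3, -1]])

reduced, pivots = M.rref()
print(reduced)  # last row is (0, 0, 0, 1), i.e. the impossible equation 0 = 1
print(pivots)   # (0, 2, 3): a pivot lands in the b-column, so the system is inconsistent
```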

The same row elimination shows that if the vector $\mathbf{b}$ had been $\begin{bmatrix} 1 \\ 3 \\ 5 \end{bmatrix}$ then the system would be solvable.

Seen in a different way, the system is solvable if and only if $$ \mathbf{b}=\alpha\begin{bmatrix}1\\1\\1\end{bmatrix} +\beta\begin{bmatrix}1\\2\\3\end{bmatrix} $$ for some $\alpha$ and $\beta$.
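For instance, with the alternative right-hand side $\begin{bmatrix}1\\3\\5\end{bmatrix}$ from above, the coefficients can be read off directly:
$$
\alpha + \beta = 1, \qquad \alpha + 2\beta = 3, \qquad \alpha + 3\beta = 5
\quad\Longrightarrow\quad \beta = 2,\ \alpha = -1,
$$
whereas for the original $\mathbf{b}$ the first two components force $\alpha = -1$, $\beta = 2$, and then $\alpha + 3\beta = 5 \neq -1$.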