I became aware of this question by way of an answer on Meta and feel I must push back against the comment of DonAntonio and the answer of amWhy.
Uniqueness of the solution of the system
$$
\begin{aligned}
x_1&=3\\
x_2&=2\\
x_3&=3
\end{aligned}
$$
is obvious and needs no proof. What is there to prove? Is it conceivable that if you plug in numbers other than $3,$ $2,$ and $3$ for $x_1,$ $x_2,$ and $x_3$ you might obtain three true statements?
The determinant is a complicated object, and bringing it in here makes something simple appear much more difficult than it actually is.
Here's what I think you were probably getting at: Let's start with a simpler analogue. Is the solution of the equation $x=2$ unique? Of course it is: $2$ is the only solution. Now $x=2$ may be the end result of simplifying a more complicated equation, such as $13x=26.$ The latter is a special case of the general equation $ax=b.$ It is certainly the case that the latter has a unique solution if and only if $a\ne0.$ If $a=0,$ then there is no solution unless $b=0,$ in which case there are infinitely many solutions.
Likewise, the matrix equation $Ax=b,$ where $A$ is a square matrix and $x$ and $b$ are column vectors, has a unique solution if and only if $\det A\ne0.$ If $\det A=0,$ then it has either no solution or infinitely many solutions.
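As a quick numerical illustration of the determinant criterion (the matrix and right-hand side below are arbitrary choices, not taken from the question):

```python
import numpy as np

# Illustrative 3x3 system A x = b; the entries are arbitrary.
A = np.array([[1.0, 1.0, -1.0],
              [2.0, -1.0, 3.0],
              [-1.0, 3.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])

d = np.linalg.det(A)
print(round(d))  # -20: nonzero, so A x = b has exactly one solution
x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))  # True
```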
So it is helpful to introduce the determinant to make statements about the nature of the solution set of the general equation $Ax=b.$ But for concrete $A$ and $b,$ it is usually more efficient to row reduce the system than to compute $\det A.$ (More precisely, computing $\det A$ is best done by actually performing row reduction, but there is no need to mention determinants if you are row reducing to solve a concrete problem.) The end result of the row-reduction process will tell you whether there is a unique solution or not.
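For instance, row reducing a concrete augmented matrix reveals uniqueness directly, with no determinant in sight (a sketch using sympy's `rref`; the entries are illustrative):

```python
import sympy as sp

# Row reduce a concrete augmented matrix [A | b]; entries are illustrative.
M = sp.Matrix([[1, 1, -1, 1],
               [2, -1, 3, 2],
               [-1, 3, 1, 3]])
R, pivots = M.rref()
print(pivots)     # (0, 1, 2): a pivot in every coefficient column, so the solution is unique
print(R.col(-1))  # the unique solution, read off the last column
```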
The only thing that might need proof is that the three row operations (swapping rows, multiplying a row by a non-zero number, adding a multiple of one row to another row) preserve the solution set. That is generally proved in a linear algebra course, and you can probably assume it from that point on. If not, let $S$ be a system and let $S'$ be the system that results from applying a row operation. You just need to prove that any solution to $S$ is a solution to $S',$ and that any solution to $S'$ is a solution to $S.$ This is straightforward, but it seems like overkill to do it in every row reduction problem you perform.
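As a quick sanity check (a sketch with an arbitrary $2\times2$ system), one can verify numerically that a row operation leaves the solution unchanged:

```python
import numpy as np

# Sanity check on an arbitrary 2x2 system: the row operation
# R2 -> R2 - (1/2) R1, applied to the augmented system, leaves
# the solution unchanged.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
x_before = np.linalg.solve(A, b)

A2 = A.copy(); b2 = b.copy()
A2[1] -= 0.5 * A[0]
b2[1] -= 0.5 * b[0]
x_after = np.linalg.solve(A2, b2)
print(np.allclose(x_before, x_after))  # True
```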
For example, here is the first elimination step for a system of four equations in three unknowns, with a general right-hand side:
$$
\begin{bmatrix}
1 & 1 & -1 & b_1 \\
2 & -1 & 3 & b_2 \\
-1 & 3 & 1 & b_3 \\
0 & 2 & -1 & b_4
\end{bmatrix}
\to
\begin{bmatrix}
1 & 1 & -1 & b_1 \\
0 & -3 & 5 & b_2 - 2 b_1 \\
0 & 4 & 0 & b_1+b_3 \\
0 & 2 & -1 & b_4
\end{bmatrix}
$$
Now continue the row reduction until you end up with a last row consisting of three zeros and some expression in $b_1, \ldots, b_4$ on the right-hand side. Call this expression $f$.
If $f \neq 0,$ the system has no solution. If $f = 0,$ then you can read off the values of $x_1,x_2,x_3$ giving the unique solution from the reduced form of the matrix, provided no pivot you divide by along the way is zero.
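One way to obtain $f$ without carrying out every elimination step by hand (a sketch using sympy): up to a scalar factor, $f$ is the dot product of $b$ with a vector spanning the left null space of the coefficient matrix.

```python
import sympy as sp

# Coefficient matrix of the 4x3 system above, and a symbolic right-hand side.
A = sp.Matrix([[1, 1, -1],
               [2, -1, 3],
               [-1, 3, 1],
               [0, 2, -1]])
b = sp.Matrix(sp.symbols('b1 b2 b3 b4'))

v = A.T.nullspace()[0]        # v satisfies v^T A = 0
f = sp.expand((v.T * b)[0])   # so v^T b must vanish for consistency
print(A.rank())               # 3: when f = 0, the solution is unique
print(f)
```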
Best Answer
The system is over-determined for $n>2,$ but here is a general method. We can write the augmented matrix for the system $A\begin{bmatrix}x\\y\end{bmatrix}=B$ as follows:
$\begin{bmatrix}a_1&1&\Big|&b_1\\a_2&1&\Big|&b_2\\\vdots&\vdots&\Big|&\vdots\\a_n&1&\Big|&b_n\end{bmatrix}$
For a unique solution to exist, we need $2$ linearly independent equations for the $2$ unknowns. In other words, the ranks of the coefficient matrix and of the augmented matrix should both be $2$. Since no more than $2$ vectors in $\Bbb R^2$ can be linearly independent, $\text{rank}(A)\le2$. For $\text{rank}(A)=2$, at least two of the $a_i$ must be distinct. Relabeling if necessary, say $a_2\ne a_1,$ and assume $a_1\ne0$ so that we may divide by it in the row operations below.
The point of intersection of $a_1x+y=b_1,a_2x+y=b_2$ is given by $(X,Y)=\displaystyle\Big(\frac{b_1-b_2}{a_1-a_2},\frac{a_1b_2-a_2b_1}{a_1-a_2}\Big)$.
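The claimed intersection point can be checked symbolically (a quick sympy sketch):

```python
import sympy as sp

# Solve a1*x + y = b1, a2*x + y = b2 and compare with the stated formula.
a1, a2, b1, b2, x, y = sp.symbols('a1 a2 b1 b2 x y')
sol = sp.solve([a1*x + y - b1, a2*x + y - b2], [x, y], dict=True)[0]

X = (b1 - b2) / (a1 - a2)
Y = (a1*b2 - a2*b1) / (a1 - a2)
print(sp.simplify(sol[x] - X) == 0)  # True
print(sp.simplify(sol[y] - Y) == 0)  # True
```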
$\displaystyle R_i\to R_i-\frac{a_i}{a_1}\cdot R_1,\ i>1$
$\displaystyle R_j\to R_j-\frac{1-\frac{a_j}{a_1}}{1-\frac{a_2}{a_1}}\cdot R_2,\ j>2$
$\sim\begin{bmatrix}a_1&1&\Big|&b_1\\0&1-\frac{a_2}{a_1}&\Big|&b_2-\frac{a_2}{a_1}\cdot b_1\\0&0&\Big|&b'_3\\\vdots&\vdots&\Big|&\vdots\\0&0&\Big|&b'_n\end{bmatrix}$
$\displaystyle b'_i=b_i-\frac{a_i}{a_1}\cdot b_1-\frac{1-\frac{a_i}{a_1}}{1-\frac{a_2}{a_1}}\cdot\Big[b_2-\frac{a_2}{a_1}\cdot b_1\Big],\ \forall i>2$
For the rank of the augmented matrix to also be $2$, we require $b'_i=0$ for all $i>2$.
$\displaystyle\therefore b_i=\frac{(a_i-a_2)b_1+(a_1-a_i)b_2}{a_1-a_2}=\Big[\frac{b_1-b_2}{a_1-a_2}\Big]\cdot a_i+\Big(\frac{a_1b_2-a_2b_1}{a_1-a_2}\Big),\ \forall i>2$
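The algebra in this last step can be verified symbolically (a sympy sketch; the symbols `ai` and `bi` stand for a generic $a_i,$ $b_i$ with $i>2$):

```python
import sympy as sp

# b'_i = 0 should be equivalent to the closed form for b_i derived above.
a1, a2, ai, b1, b2, bi = sp.symbols('a1 a2 a_i b1 b2 b_i')

bprime = bi - (ai/a1)*b1 - ((1 - ai/a1)/(1 - a2/a1))*(b2 - (a2/a1)*b1)
closed = ((ai - a2)*b1 + (a1 - ai)*b2) / (a1 - a2)

# b'_i minus (b_i - closed form) simplifies to zero, confirming the equivalence.
print(sp.simplify(bprime - (bi - closed)) == 0)  # True
```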