How determinant = 0 proves that the equation system has either none or infinite solutions.


I'm trying to grasp why $\det(A)=0$ means the equation system $A\vec{x}=\vec{y}$ has either infinitely many solutions or none, depending on $\vec{y}$.
In two or three dimensions the graphical visualisation of determinants, where each row is a transposed normal vector of a line or plane, makes sense. Now I am trying to get a more algebraic understanding of determinants so I can grasp 4×4 matrices and higher.
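To make the claim concrete, here is a quick numerical sketch (using NumPy; the singular matrix and the right-hand sides are my own illustrative choices, not from the question):

```python
import numpy as np

# A singular 3x3 matrix: the third row is the sum of the first two,
# so det(A) = 0.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])
assert abs(np.linalg.det(A)) < 1e-9

# Consistent right-hand side: y lies in the image of A -> solvable,
# with infinitely many solutions (rank deficiency).
y_consistent = A @ np.array([1.0, 1.0, 1.0])
x, _res, rank, _sv = np.linalg.lstsq(A, y_consistent, rcond=None)
assert rank < 3                      # rank < n signals non-uniqueness
assert np.allclose(A @ x, y_consistent)

# Inconsistent right-hand side: rows satisfy e3 = e1 + e2, but
# 1 + 1 != 1 on the y side -> no solution exists.
y_bad = np.array([1.0, 1.0, 1.0])
x_bad, *_ = np.linalg.lstsq(A, y_bad, rcond=None)
assert not np.allclose(A @ x_bad, y_bad)
```

`lstsq` is used instead of `solve` because `solve` rejects singular matrices; here it returns the best approximate solution, which only matches $y$ exactly when the system is consistent.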

My thoughts so far:

Consider a 3×3 matrix $A=\begin{pmatrix}a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3\end{pmatrix}$ whose rows correspond to the equations $e_1=a_1x+b_1y+c_1z$, and similarly for $e_2,e_3$.

If $$a_1(b_2c_3-b_3c_2)=0$$ and $a_1\ne 0$ then you know that $$b_3y+c_3z=k(b_2y+c_2z)$$
You can do this for all permutations of $a_1,a_2,a_3$ to see whether the $y,z$-factors of the other two equations are the same up to a scaling factor. But let's just stick with $a_1$ and $e_2,e_3$ for simplicity.

If two equations are equal up to a scaling factor, then the equation system has either infinitely many or no solutions. So all that is left is to check whether $$a_3x+c_3z=k(a_2x+c_2z)\quad\text{or}\quad a_3x+b_3y=k(a_2x+b_2y)$$
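The "equal up to a scaling factor" test above amounts to checking that all three 2×2 minors formed from the two rows vanish. A minimal sketch (the helper name `rows_proportional` is my own):

```python
# Two rows (a2,b2,c2) and (a3,b3,c3) are proportional iff all three
# 2x2 minors built from them are zero.
def rows_proportional(r2, r3, tol=1e-12):
    (a2, b2, c2), (a3, b3, c3) = r2, r3
    return (abs(b2*c3 - b3*c2) < tol and
            abs(a2*c3 - a3*c2) < tol and
            abs(a2*b3 - a3*b2) < tol)

assert rows_proportional((1, 2, 3), (2, 4, 6))      # e3 = 2*e2
assert not rows_proportional((1, 2, 3), (2, 4, 7))  # not proportional
```

Note that checking only one minor (as in the $a_1(b_2c_3-b_3c_2)=0$ step) pins down the $y,z$-factors; the other two minors cover the remaining variable pairs.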

If $$b_1(a_2c_3-a_3c_2)=0\quad\text{and}\quad c_1(a_2b_3-a_3b_2)=0,\qquad b_1\ne 0\lor c_1\ne 0,$$
then $$e_2=ke_3$$

So if $e_2=ke_3$, then
$$a_1(b_2c_3-b_3c_2)+ c_1(a_2b_3-a_3b_2) - b_1(a_2c_3-a_3c_2)=0$$
$$\Leftrightarrow a_1b_2c_3+a_2b_3c_1+a_3b_1c_2-(a_1b_3c_2+a_2b_1c_3+a_3b_2c_1)=0$$
$$\Leftrightarrow \det(A)=0$$

You can do this for $e_1=ke_2$, $e_2=ke_3$, $e_1=ke_3$ and always get the same expression as $\det(A)$.

$$a_1(b_2c_3-b_3c_2)+ c_1(a_2b_3-a_3b_2) - b_1(a_2c_3-a_3c_2)=0\Leftrightarrow$$

$$-a_2(b_1c_3-b_3c_1)-c_2(a_1b_3-a_3b_1)+b_2(a_1c_3-a_3c_1)=0$$
which would mean that $e_1=ke_3$ if $a_2\ne 0 \land (c_2\ne 0\lor b_2\ne 0)$.
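That the expansions along different rows agree can be checked symbolically. A sketch with SymPy, using the same term orderings as the two displayed expansions above:

```python
import sympy as sp

a1, b1, c1, a2, b2, c2, a3, b3, c3 = sp.symbols('a1 b1 c1 a2 b2 c2 a3 b3 c3')
A = sp.Matrix([[a1, b1, c1], [a2, b2, c2], [a3, b3, c3]])

# Cofactor expansion along row 1 and along row 2; the alternating
# signs (-1)**(i+j) are exactly what makes both rows give the same
# polynomial.
row1 = a1*(b2*c3 - b3*c2) + c1*(a2*b3 - a3*b2) - b1*(a2*c3 - a3*c2)
row2 = -a2*(b1*c3 - b3*c1) - c2*(a1*b3 - a3*b1) + b2*(a1*c3 - a3*c1)

assert sp.expand(row1 - row2) == 0      # the two expansions agree
assert sp.expand(row1 - A.det()) == 0   # and both equal det(A)
```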

Questions:

  1. Is this how you think of determinants of higher degrees?

  2. I can't quite get my mind around why the determinant works for any combination of two equations, and I am trying to grasp why the algebraic complement (cofactor) changes sign when it does so that $\det(A)$ stays valid for all combinations. The answer feels close, yet ephemeral to me right now.

  3. While my above explanation gets at why $\det(A)=0$ if two equations are equal up to a scaling factor, it does not, in my mind, explain algebraically why $\det(A)=0$ if one equation is a linear combination of more than one of the others.
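On question 3, the general linear-combination case can at least be verified symbolically (a SymPy sketch; the combination parameters $s,t$ are my own, not from the question):

```python
import sympy as sp

a1, b1, c1, a2, b2, c2, s, t = sp.symbols('a1 b1 c1 a2 b2 c2 s t')

# Third row is a general linear combination s*e1 + t*e2 of the other
# two rows -- not merely a scalar multiple of one of them.
A = sp.Matrix([[a1, b1, c1],
               [a2, b2, c2],
               [s*a1 + t*a2, s*b1 + t*b2, s*c1 + t*c2]])

# The determinant vanishes identically in all eight symbols.
assert sp.expand(A.det()) == 0
```

By multilinearity in the rows, $\det$ of this matrix splits into $s$ times a determinant with two equal rows plus $t$ times another, and each of those is zero by the two-equal-rows case.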

Best Answer

As you probably know, you can think of a matrix $A \in \mathbb R^{n \times n}$ as a linear mapping $f : \mathbb R^n \to \mathbb R^n$ with $f(e_j) = A_j$, where $e_j = (0, \dots, 0,1,0, \dots, 0)$ has the $1$ at index $j$ and $A_j$ denotes the $j$-th column of $A$. Now the determinant of $A$ can be seen as the (oriented) volume of the parallelotope spanned by the columns of $A$, i.e. the set

$$P = \{ \lambda_1 A_1 + \dots + \lambda_n A_n \mid 0 \le \lambda_i \le 1\}.$$
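A small numerical sketch of this volume interpretation (NumPy; the example matrices are my own):

```python
import numpy as np

# For the standard basis the parallelotope is the unit cube: volume 1.
assert abs(np.linalg.det(np.eye(3)) - 1.0) < 1e-12

# Scaling one column by 2 doubles the volume.
A = np.eye(3)
A[:, 0] *= 2.0
assert abs(np.linalg.det(A) - 2.0) < 1e-12

# Swapping two columns flips the orientation (sign of det),
# but |det| -- the volume -- is unchanged.
assert abs(np.linalg.det(A[:, [1, 0, 2]]) + 2.0) < 1e-12
```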

If the volume of $P$, i.e. the determinant of $A$, is zero, then $P$ lies in a lower-dimensional subspace of $\mathbb R^n$. The smallest subspace of $\mathbb R^n$ containing $P$ is the image of $A$, and it consists of exactly those vectors $y$ for which the linear system can be solved. Since $\det(A) = 0$ also means there are infinitely many vectors $u$ with $Au = 0$, any solvable system $Ax = y$ has infinitely many solutions (you can simply add such a $u$ to $x$ to get another solution).
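The "add a kernel vector" remark can be sketched numerically (NumPy; the singular matrix and vectors are my own illustrative choices):

```python
import numpy as np

# Singular matrix: row3 = row1 + row2, so det(A) = 0.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

u = np.array([1.0, -2.0, 1.0])    # a kernel vector: A @ u = 0
assert np.allclose(A @ u, 0)

x = np.array([1.0, 1.0, 1.0])     # one particular solution
y = A @ x

# x + t*u solves A x = y for every scalar t: a one-parameter family.
for t in (0.5, 2.0, -3.0):
    assert np.allclose(A @ (x + t*u), y)
```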

If $y$ does not lie in the smallest subspace containing $P$, then the linear system cannot be solved. Since, in the case where the determinant of $A$ is zero, this subspace has dimension at most $n - 1$ (see above), such vectors $y$ exist.
