I know that according to the Fundamental theorem of linear algebra the row space and the null space are orthogonal, but I don't really understand why. Could someone give an intuitive explanation of why this is with maybe some examples from $\mathbb{R}^2$ or $\mathbb{R}^3$ with the standard Euclidean inner product?
Linear Algebra – Orthogonality of Row Space and Null Space
linear algebra
Related Solutions
The null space of $A$ is the set of solutions to $A{\bf x}={\bf 0}$. To find this, you may take the augmented matrix $[A|0]$ and row reduce to an echelon form. Note that every entry in the rightmost column of this matrix will always be 0 in the row reduction steps. So, we may as well just row reduce $A$, and when finding solutions to $A{\bf x}={\bf 0}$, just keep in mind that the missing column is all 0's.
Suppose after doing this, you obtain $$ \left[\matrix{1&0&0&0&-1 \cr 0&0&1&1&0 \cr 0&0&0&0&0 \cr 0&0&0&0&0 \cr }\right] $$
Now, look at the columns that do not contain any of the leading row entries. These columns correspond to the free variables of the solution set to $A{\bf x}={\bf 0}$. Note that at this point, we know the dimension of the null space is 3, since there are three free variables. That the null space has dimension 3 (and thus that the solution set to $A{\bf x}={\bf 0}$ has three free variables) also follows from the rank-nullity theorem, since the dimension of the column space is 2 and $A$ has 5 columns.
The "free columns" in question are 2,4, and 5. We may assign any value to their corresponding variable.
So, we set $x_2=a$, $x_4=b$, and $x_5=c$, where $a$, $b$, and $c$ are arbitrary.
Now solve for $x_1$ and $x_3$:
The second row tells us $x_3=-x_4=-b$ and the first row tells us $x_1=x_5=c$.
So, the general solution to $A{\bf x}={\bf 0}$ is $$ {\bf x}=\left[\matrix{c\cr a\cr -b\cr b\cr c}\right] $$
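As a quick sanity check (a minimal sketch, assuming SymPy is available; the matrix below is just the echelon form from above), we can recover the same parametric solution with `linsolve`, since row reduction does not change the solution set:

```python
from sympy import Matrix, symbols, linsolve

# The echelon form from above; row operations preserve the null space,
# so solving Rx = 0 gives the same solution set as Ax = 0.
R = Matrix([
    [1, 0, 0, 0, -1],
    [0, 0, 1, 1,  0],
    [0, 0, 0, 0,  0],
    [0, 0, 0, 0,  0],
])

x1, x2, x3, x4, x5 = symbols('x1 x2 x3 x4 x5')
sol = linsolve((R, Matrix([0, 0, 0, 0])), [x1, x2, x3, x4, x5])
print(sol)  # {(x5, x2, -x4, x4, x5)}: x2, x4, x5 are free, playing the roles of a, b, c
```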
Let's pause for a second. We know:
1) The null space of $A$ consists of all vectors of the form $\bf x $ above.
2) The dimension of the null space is 3.
3) We need three independent vectors for our basis for the null space.
So what we can do is take $\bf x$ and split it up as follows:
$$\eqalign{ {\bf x}=\left[\matrix{c\cr a\cr -b\cr b\cr c}\right] &=\left[ \matrix{0\cr a\cr 0\cr 0\cr 0}\right]+ \left[\matrix{c\cr 0\cr 0\cr 0\cr c}\right]+ \left[\matrix{0\cr 0\cr -b\cr b\cr 0}\right]\cr &= a\left[ \matrix{0\cr1\cr0\cr 0\cr 0}\right]+ c\left[ \matrix{1\cr 0\cr 0\cr 0\cr 1}\right]+ b\left[ \matrix{0\cr 0\cr -1\cr 1\cr 0}\right]\cr } $$ Each of the column vectors above are in the null space of $A$. Moreover, they are independent. Thus, they form a basis.
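Still assuming SymPy, `Matrix.nullspace()` produces such a basis directly, one vector per free variable; for the echelon form above it returns the same three vectors, up to ordering:

```python
from sympy import Matrix

R = Matrix([
    [1, 0, 0, 0, -1],
    [0, 0, 1, 1,  0],
    [0, 0, 0, 0,  0],
    [0, 0, 0, 0,  0],
])

# One basis vector per free variable: (0,1,0,0,0), (0,0,-1,1,0), (1,0,0,0,1).
for v in R.nullspace():
    print(v.T)
```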
I'm not sure that this answers your question. I did a bit of "hand waving" here. What I glossed over were the facts:
1) The columns of the echelon form of $A$ that do not contain leading row entries correspond to the "free variables" of $A{\bf x}={\bf 0}$. If the number of these columns is $r$, then the dimension of the null space is $r$ (again, if you know the dimension of the column space, the rank-nullity theorem shows that the dimension of the null space must be the number of these columns).
2) If you split up the general solution to $A{\bf x}={\bf 0}$ as done above, then these vectors will be independent (and they span, of course, since by construction the general solution is a linear combination of them).
The core of studying matrices is to study linear transformations between vector spaces. These can be realized as matrix multiplication on the left (or right) of column (or row) vectors.
If we are in this setup: $x\mapsto Ax$ for a column vector $x$ and appropriate matrix $A$, then the image of the linear transformation will be spanned by the columns of $A$.
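For instance (a small NumPy sketch with an illustrative matrix), $Ax$ is literally the linear combination of the columns of $A$ with coefficients taken from $x$:

```python
import numpy as np

A = np.array([[1., 0.],
              [2., 1.],
              [0., 3.]])
x = np.array([2., -1.])

# A @ x equals 2*(column 1) + (-1)*(column 2), so the image of
# x -> A x is spanned by the columns of A.
print(np.allclose(A @ x, 2*A[:, 0] - 1*A[:, 1]))  # True
```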
The kernel of the transformation (the nullspace), that is, the set of all $x$ such that $Ax=0$, is important for understanding the solutions to some matrix equations. You probably have already learned that if $x_0$ is a solution to $Ax=b$, then every other solution is given by $x_0+k$ where $k$ is in the nullspace.
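A small numerical illustration of that last fact (a sketch with a made-up rank-deficient matrix, assuming NumPy): adding a null-space vector to a particular solution gives another solution.

```python
import numpy as np

A  = np.array([[1., 2.],
               [2., 4.]])   # rank 1, so the null space is nontrivial
b  = np.array([3., 6.])
x0 = np.array([3., 0.])     # a particular solution: A @ x0 == b
k  = np.array([-2., 1.])    # a null-space vector:   A @ k  == 0

print(A @ (x0 + k))         # [3. 6.], so x0 + k solves Ax = b as well
```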
This all has an analogous explanation on the other side. If we are in this setup: $x\mapsto xA$ for a row vector $x$, then the image of the linear transformation is now spanned by the rows of $A$.
Talking about the nullspace of $A^T$ is just a fancy way of dressing up the "left nullspace" of $A$, since $xA=0$ iff $A^T x^T=0$. The nullspace is now the set of all $x$ such that $xA=0$, and you can draw the same conclusions about solutions to $xA=b$.
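To make the left/right symmetry concrete (again a sketch with an arbitrary illustrative matrix): since $xA=0$ iff $A^Tx^T=0$, a basis for the left nullspace of $A$ is just a basis for the ordinary nullspace of $A^T$.

```python
from sympy import Matrix

A = Matrix([[1, 2],
            [2, 4],
            [0, 0]])

# xA = 0 iff A^T x^T = 0, so the left nullspace of A is the
# ordinary nullspace of A^T (vectors printed here as rows).
for v in A.T.nullspace():
    print(v.T)
```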
In short, these four spaces (really just two spaces, each with a left and a right version) carry all the information about the image and kernel of the linear transformation that $A$ effects, whether you apply it on the right or on the left.
Best Answer
The row space is the set of vectors $A^Tx$, where $x$ ranges over all vectors; the null space is the set of vectors $y$ such that $Ay=0$. The scalar product of a vector in the row space with a vector in the null space is $\langle y,A^Tx\rangle=y^T(A^Tx)=x^T(Ay)=x^T0=0$. The second equality follows from the fact that $y^TA^Tx$ has size $1\times 1$, hence equals its transpose $x^TAy$.
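A numerical spot-check of this identity (a sketch assuming NumPy and SciPy; the random matrix is purely illustrative): every vector of the form $A^Tx$ is orthogonal to every null-space basis vector.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # random 3x5 matrix; its null space has dimension >= 2

N = null_space(A)                 # columns form an orthonormal basis of the null space
x = rng.standard_normal(3)
row_vec = A.T @ x                 # an arbitrary row-space vector

print(np.allclose(N.T @ row_vec, 0.0))  # True: <y, A^T x> = 0 for every null-space y
```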