I'm learning linear algebra from Gilbert Strang's lectures, and in lecture 14 he says the following: "Imagine two perpendicular lines in $\Bbb R^3$. Can they be the row space and the nullspace? No." So the answer is no, but I don't understand why. Say we put 2 such vectors in a 3×2 matrix. The row space is a line, so the dimension of the row space is 1, and hence the rank is 1. The nullspace is also a line, so $n - \text{rank} = 1$. Therefore $n$ is 2, which is right given that the matrix is 3×2. I think my logic is flawed, since my answer is wrong. Can someone explain? Maybe I have to use the fact that the row space and nullspace are orthogonal?
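I also tried a quick check in sympy (my own small example, not from the lecture), taking a matrix whose row space is a line in $\Bbb R^3$, and the nullspace comes out two-dimensional:

```python
# My own quick check in sympy: a matrix whose row space is the line
# spanned by (1, 0, 0) in R^3.
from sympy import Matrix

A = Matrix([[1, 0, 0],
            [2, 0, 0]])    # both rows lie on the same line

print(A.rank())            # 1 -> the row space is a line
print(A.nullspace())       # prints two basis vectors -> a plane, not a line
```

So the computation agrees that the answer is no, but I still don't see the reason.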
Why can't the row space and nullspace be two lines in $\Bbb R^3$?
linear-algebra, orthogonality, vectors
Related Solutions
There is no point in trying to visualise this, since the row space of $A$ and its null space are not naturally subspaces of the same space. If $A$ corresponds to a linear map $f:V\to W$, then the null space of $A$ corresponds to the kernel $\ker(f)$, a subspace of$~V$. Each row of $A$ computes one coordinate of the image $f(v)$ of a vector$~v$, so it defines a linear form on $V$ (a linear function $V\to\Bbb R$). The row space consists of the linear combinations of these rows, so it is naturally a subspace of the space $V^*$ of linear forms on$~V$. This space $V^*$ is not the same space as$~V$, even though it has the same dimension; it is called its dual vector space. The row space is then a subspace $R\subseteq V^*$, and the natural statement of what the cited passage is saying is that $\ker(f)$ is the set of vectors where all linear forms of$~R$ are zero simultaneously. This is quite unsurprising: $\ker(f)$ is by definition the set where the linear forms corresponding to the rows of $A$ are all zero (the intersection of their zero sets), but once this happens for some $v\in V$, any linear combination of those linear forms is also zero at$~v$.
Now imagining combinations of linear forms on $V$ might be a bit hard, so one may just imagine that $V$ has an inner product for which the basis one was using (to define the matrix $A$ of $f$) is orthonormal. This inner product is artificial and has no intrinsic meaning, but at least it allows one to represent each linear form as the inner product with one specific vector$~v_i$. This representation of linear forms by vectors is what goes on in the operation of transposition. Then row$~i$ of $A$ corresponds to (the inner product with)$~v_i$, and the row space corresponds to the span of $v_1,\ldots,v_m$. Now the subspace where the linear form for row$~i$ of$~A$ is zero is the set of vectors orthogonal to$~v_i$. Then $\ker(f)$ is the set of vectors where this happens for all rows at once, i.e., the subspace of vectors orthogonal to all of $v_1,\ldots,v_m$, which is the orthogonal complement of the span of $v_1,\ldots,v_m$. This is a way to visualise the statement, but remember that we are really just saying that all linear forms in $R$ vanish simultaneously on $\ker(f)$.
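A concrete sketch of this picture, using sympy and the standard inner product on $\Bbb R^3$ (the matrix $A$ below is an arbitrary example of mine): every basis vector of $\ker(A)$ has zero inner product with every row of $A$.

```python
# Under the standard inner product on R^3, every basis vector of ker(A)
# is orthogonal to every row of A. (A is an arbitrary example matrix.)
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 5, 6]])

for n in A.nullspace():           # basis vectors of the kernel
    for i in range(A.rows):       # rows v_i of A, i.e. the linear forms
        print(A.row(i).dot(n))    # each inner product <v_i, n> prints 0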
Let $$U= \operatorname{span}\{(1,0,0)\} \\ V = \operatorname{span}\{(0,1,1)\} \\ W=\operatorname{span}\{(0,1,0),(0,0,1)\}$$ Then clearly every vector in $U$ is orthogonal to every vector in $V$ and to every vector in $W$. Hence $U$ and $V$ are orthogonal subspaces and $U$ and $W$ are orthogonal subspaces.
But $U$ and $V$ together don't make up all of $\Bbb R^3$ (the ambient space). For instance, $(0,0,1)$ is not in either one, nor even in their sum. However, $U$ and $W$ together do in fact make up all of $\Bbb R^3$. In the lingo of linear algebra, we say that the direct sum of $U$ and $W$ is $\Bbb R^3$, or symbolically $U\oplus W = \Bbb R^3$.$^\dagger$ It's this extra property that makes $U$ and $W$ not just orthogonal subspaces, but each other's orthogonal complements.
$\dagger$: I'm being a little loose with my language here. Technically the condition for $U\oplus W=\Bbb R^3$ (which is the required extra condition) is that every vector in $\Bbb R^3$ can be written in a unique way as a sum of an element of $U$ and an element of $W$. I.e. for every $v\in \Bbb R^3$, there exist a unique $u\in U$ and a unique $w\in W$ such that $u+w=v$. Nevertheless, the statements in the paragraph above are how we intuitively think of orthogonal complements.
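For what it's worth, both claims can be checked mechanically in sympy by stacking the basis vectors as rows and computing ranks (a sketch; the `+` below is just Python list concatenation):

```python
# Verifying the two claims above: stack basis vectors as rows, compare ranks.
from sympy import Matrix

U = [(1, 0, 0)]
V = [(0, 1, 1)]
W = [(0, 1, 0), (0, 0, 1)]

print(Matrix(U + V).rank())   # 2 -> the span of U and V is only a plane
print(Matrix(U + W).rank())   # 3 -> U and W together span all of R^3
```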
Best Answer
This follows from the rank-nullity theorem. But specific to the example you reasoned about in your question: when you form your 3×2 matrix (assuming you mean rows × columns), the row space is the subspace spanned by the rows of your matrix viewed as vectors, and the nullspace is the set of solutions of the equation $$Ax=0.$$

Thus, either your rows consist of 2 linearly independent vectors and 1 linearly dependent vector, meaning that your row space has dimension 2 and your nullspace has dimension 0; or your rows consist of 1 linearly independent vector (assuming you haven't chosen the zero matrix) and 2 vectors depending on it (multiples of that first row vector), in which case your row space has dimension 1 and your nullspace has dimension 1. Keep in mind that in this example your row vectors live in $\mathbb{R}^2$.

Something similar happens when you consider a 2×3 matrix, whose row vectors live in $\mathbb{R}^3$. Can you produce an example matrix of this form and try to work it out?
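To illustrate what such a worked example might look like (a sketch in sympy; this particular matrix is just one arbitrary choice), take a rank-1 2×3 matrix:

```python
# One possible 2x3 example: both rows lie on the same line in R^3,
# so the rank is 1 and rank-nullity forces a 2-dimensional nullspace.
from sympy import Matrix

A = Matrix([[1, 1, 0],
            [2, 2, 0]])

print(A.rank())             # 1 -> the row space is a line in R^3
print(len(A.nullspace()))   # 2 -> the nullspace has dimension 3 - 1 = 2
```

A line of a row space forces a plane of a nullspace here: two perpendicular lines would give dimensions $1+1=2\neq 3$, which rank-nullity rules out.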