I was looking at my professor's proof that, by the rank–nullity theorem, the dimension of the image equals the dimension of the domain minus the dimension of the nullspace. He proves it by extending a basis of the kernel with $n-k$ additional vectors and then claiming that the images of those $n-k$ vectors span the image. He shows that those $n-k$ vectors are linearly independent, and since the first $k$ vectors are sent to $0$ (they belong to the nullspace), he concludes that the remaining vectors span the image. I do not see clearly why those vectors should span the whole image rather than just a subspace of it: their being independent does not necessarily entail that they span the whole image, right? How can it be shown that they span the entire image?
Rank and nullity theorem proof question
linear algebra
Related Solutions
Perhaps modifying your notation just a bit? Let $T: V \rightarrow W$ where $dim(V)=n$ and $dim(W)=m$. Our goal is to prove that $dim(V) = dim(Null(T))+dim(range(T))$, where $dim(Null(T)) = r$ and $dim(range(T))= rank(T)=s$. To prove this dimension theorem we need to exhibit bases (yes, that's it) which serve as minimal spanning sets for the null-space and range of $T$.
One approach: pick a basis for $V$, study the matrix of $T$, and steal this theorem from the corresponding theorem for the rank and nullity of a matrix. That theorem comes from the nuts and bolts of Gaussian elimination. I don't think that is what your professor intends, so back to the linear algebraic argument.
Note $ker(T) \leq V$, hence $ker(T)$ is a vector space, and as a subspace of a finite-dimensional vector space it has finite dimension as well, say $r$. Moreover, following your notation, let $\beta_o=\{ x_1, x_2, \dots , x_r \}$ be a basis for $ker(T)$. I assume at this point you have already proved in your class that if a vector space has a basis with finitely many elements, then any such basis has the same number of vectors. We call this number the dimension of the vector space (or subspace).
It is also a simple exercise to show $T(V) \leq W$, hence there exists a basis $\beta_z=\{ z_1,z_2, \dots, z_s \}$ for the range. To prove the theorem, we must show $r+s=n$.
If $z_j \in T(V)$ then there exists $y_j \in V$ such that $T(y_j)=z_j$. We show $\{y_1,y_2, \dots, y_s\}$ is linearly independent by supposing otherwise towards a contradiction: suppose $c_1y_1+c_2y_2+ \cdots + c_sy_s=0$ with at least one $c_j \neq 0$. Then, since $T(0)=0$ and the image of a linear combination is the linear combination of the images, $$ c_1T(y_1)+c_2T(y_2)+ \cdots + c_sT(y_s)=0. $$ Hence $c_1z_1+c_2z_2 + \cdots + c_sz_s=0$ with at least one $c_j \neq 0$, so $\beta_z$ is linearly dependent. This contradicts our assumption that $\beta_z$ serves as a basis for the image of $T$. Therefore, $\{y_1,y_2, \dots , y_s \} = T^{-1}(\beta_z)$ is a linearly independent subset of $V$.
At this point, I would like to claim $\beta=\beta_o \cup T^{-1}(\beta_z)=\{x_1,x_2, \dots , x_r \} \cup \{y_1,y_2, \dots , y_s \}$ forms a LI subset of $V$. Notice $\beta_o \cap T^{-1}(\beta_z) = \emptyset$ (can you prove this?). Suppose $$c_1x_1+\cdots + c_rx_r+b_1y_1+ \cdots + b_sy_s =0. $$ Apply $T$ to both sides: since $T(x_j)=0$ for each $j$ and $T(y_j)=z_j$, we obtain $b_1z_1+\cdots+b_sz_s=0$, and LI of $\beta_z$ forces $b_1=\cdots=b_s=0$. The relation then reduces to $c_1x_1+\cdots+c_rx_r=0$, and LI of $\beta_o$ forces $c_1=\cdots=c_r=0$. Hence only the trivial solution exists and $\beta$ is linearly independent.
Next, ignoring the fact you may have other theorems to use, we must show $\beta$ spans $V$. Let $v \in V$ and suppose $T(v)=w$. There exist $\alpha_1,\alpha_2, \dots , \alpha_s$ such that $w = \alpha_1z_1+ \cdots + \alpha_sz_s$ since $\beta_z$ forms a basis of $T(V)$. Furthermore, $T(v-\alpha_1y_1- \cdots -\alpha_sy_s) = T(v)-w=0$, hence $v-\alpha_1y_1- \cdots -\alpha_sy_s \in ker(T)$. Thus, there exist $\beta_1, \dots , \beta_r$ such that $v-\alpha_1y_1- \cdots -\alpha_sy_s = \beta_1x_1+ \cdots +\beta_rx_r$. Consequently, $$ v=\alpha_1y_1+ \cdots + \alpha_sy_s +\beta_1x_1+ \cdots +\beta_rx_r $$ which shows $v \in span(\beta)$, and as $v$ was arbitrary we find $span(\beta)=V$.
Therefore, $\beta$ forms a basis for $V$ and so (by that other theorem I'm not proving here) it has $n$ elements. But $\beta$ also clearly has $r+s$ elements by its construction. Hence $r+s=n$ and the proof is complete.
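The conclusion $r+s=n$ is easy to sanity-check numerically. Here is a minimal sketch with NumPy, where the matrix $A$ is an arbitrary stand-in for $T$ (not anything from the proof): the rank plays the role of $s$ and the nullity the role of $r$.

```python
import numpy as np

# An arbitrary 4x5 matrix standing in for T : R^5 -> R^4 (illustration only).
A = np.array([
    [1., 2., 0., 1., 3.],
    [0., 1., 1., 0., 1.],
    [1., 3., 1., 1., 4.],   # row 3 = row 1 + row 2
    [2., 4., 0., 2., 6.],   # row 4 = 2 * row 1
])

n = A.shape[1]                     # n = dim V
s = np.linalg.matrix_rank(A)       # s = dim T(V), the rank
# Nullity from the SVD: n minus the number of nonzero singular values.
sing = np.linalg.svd(A, compute_uv=False)
r = n - int(np.sum(sing > 1e-10))  # r = dim ker(T), the nullity
print(n, s, r)                     # 5 2 3, and indeed r + s = n
```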
As you noticed, the basis elements $x$ and $x^2$ are mapped to the same subspace in the codomain: $$\text{Span}\left(\begin{bmatrix}-1 & 0 \\ 0 & 0\end{bmatrix}\right).$$ Another point of view is that the polynomial $x^2 - 3x$ is mapped to the zero matrix, and so it is a basis element of the null space of $T$.
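Since the exact map $T$ is not written out here, the facts above can at least be checked against a hypothetical $T$ consistent with them: because $T(x^2 - 3x) = 0$, we must have $T(x^2) = 3\,T(x)$, so both $x$ and $x^2$ land in the same one-dimensional subspace. A sketch in coordinates (the choice of $T(1)$ is arbitrary and purely illustrative):

```python
import numpy as np

# Hypothetical linear map T : P_2 -> M_2(R) consistent with the facts above.
# Polynomials a + b*x + c*x^2 are coordinate vectors (a, b, c);
# 2x2 matrices are flattened to length-4 vectors (row-major).
T_of_1  = np.array([0., 1., 0., 0.])    # arbitrary choice for T(1)
T_of_x  = np.array([-1., 0., 0., 0.])   # flatten of [[-1,0],[0,0]]
T_of_x2 = 3 * T_of_x                    # forced by T(x^2 - 3x) = 0
T = np.column_stack([T_of_1, T_of_x, T_of_x2])

p = np.array([0., -3., 1.])             # coordinates of x^2 - 3x
print(T @ p)                            # [0. 0. 0. 0.], the zero matrix
```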
Best Answer
In general you have that if $v_1,\dots,v_n$ spans $V$ and $T : V\to W$ is a linear map, then $Tv_1,\dots,Tv_n$ spans range $T$. To prove this, note that for $v\in V$ we have that $v=a_1v_1+\dots+a_nv_n$. Then applying $T$ to both sides gives $Tv=T(a_1v_1+\dots+a_nv_n)$. Using the linearity of $T$ we have that $Tv=a_1Tv_1+\dots+a_nTv_n$. This shows that every $Tv \in$ range $T$ can be written as a linear combination of $Tv_1,\dots,Tv_n$.
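This spanning lemma is easy to illustrate numerically: push a spanning set of the domain through $T$ and check that the images span range $T$. A small NumPy sketch (the matrix and vectors are arbitrary choices for illustration):

```python
import numpy as np

# T : R^3 -> R^3 given by a deliberately singular matrix (illustration only).
T = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])   # third column = col 1 + col 2, so rank T = 2

# v_1..v_4 span R^3; a redundant vector is fine (spanning set, not a basis).
spanning_set = np.array([[1., 0., 0.],
                         [0., 1., 0.],
                         [0., 0., 1.],
                         [1., 1., 1.]]).T   # columns are the v_i

images = T @ spanning_set                   # columns are the T v_i

# The images span range T: same column-space dimension as T itself.
print(np.linalg.matrix_rank(images), np.linalg.matrix_rank(T))  # 2 2
```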
Now for rank nullity, let $v_1,\dots,v_m$ be a basis of null $T$. Extend this to a basis of $V$. Let $v_1,\dots,v_m,\dots,v_n$ be that extended basis of $V$. Then for $v\in V$ we have $v=a_1v_1+\dots+a_mv_m+\dots+a_nv_n$. Applying $T$ to both sides gives us $Tv=a_1Tv_1+\dots+a_mTv_m+\dots+a_nTv_n$. Because the vectors $v_1,\dots,v_m$ lie in null $T$, this reduces to $Tv=a_{m+1}Tv_{m+1}+\dots+a_nTv_n$, so $Tv_{m+1},\dots,Tv_n$ spans range $T$. To show that this list is linearly independent, let $c_{m+1},\dots,c_n$ be scalars such that $c_{m+1}Tv_{m+1}+\dots+c_nTv_n=0$. Then we have that $T(c_{m+1}v_{m+1}+\dots+c_nv_n)=0$. This implies that $c_{m+1}v_{m+1}+\dots+c_nv_n\in$ null $T$. Write this vector as a linear combination of the basis of null $T$: $c_{m+1}v_{m+1}+\dots+c_nv_n=b_1v_1+\dots+b_mv_m$. Subtracting from both sides we get $0=b_1v_1+\dots+b_mv_m-c_{m+1}v_{m+1}-\dots-c_nv_n$. Because the list $v_1,\dots, v_m,\dots,v_n$ is a basis of $V$, it is linearly independent. This implies that $b_1=\dots=b_m=c_{m+1}=\dots=c_n=0$. In particular our original scalars $c_{m+1}= \dots =c_n=0$, which shows that $Tv_{m+1},\dots,Tv_n$ is linearly independent; combined with the spanning property above, it is a basis of range $T$. Counting vectors then gives $\dim V=\dim$ range $T$ + dim null $T$.
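The whole construction can be replayed numerically: take a null-space basis, extend it to a basis of the domain, and check that the images of the extension vectors form a basis of the range. A rough NumPy sketch, with an arbitrary matrix standing in for $T$:

```python
import numpy as np

# T : R^5 -> R^4 as a matrix (arbitrary illustrative example).
T = np.array([
    [1., 0., 2., 0., 1.],
    [0., 1., 1., 0., 0.],
    [0., 0., 0., 1., 1.],
    [1., 1., 3., 1., 2.],   # row 4 = row 1 + row 2 + row 3
])
n = T.shape[1]

# Basis v_1..v_m of null T from the SVD (right-singular vectors for the
# zero singular values).
_, sing, Vt = np.linalg.svd(T)
rank = int(np.sum(sing > 1e-10))
null_basis = Vt[rank:].T            # columns: v_1, ..., v_m
m = null_basis.shape[1]

# Extend to a basis v_1..v_n of R^n by appending standard basis vectors
# that keep the columns linearly independent.
basis = null_basis
for e in np.eye(n):
    candidate = np.column_stack([basis, e])
    if np.linalg.matrix_rank(candidate) > basis.shape[1]:
        basis = candidate
assert basis.shape[1] == n          # we now have a basis of V

# Images of the extension vectors v_{m+1}, ..., v_n form a basis of range T:
# there are n - m of them and they are linearly independent.
images = T @ basis[:, m:]
print(n, rank, m)                   # dim V = dim range T + dim null T
```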