Proof that the rank of a linear map is equal to the rank of its transformation matrix

linear-algebra, linear-transformations, matrices, matrix-rank, solution-verification

$\DeclareMathOperator{\rk}{rk} \DeclareMathOperator{\im}{im}$
I want to prove that the rank of a linear map $f \colon V \to W$ is equal to the rank of the transformation matrix $A$ of this linear map.

Let $v$ be a basis of $V$ of length $n$ and $w$ a basis of $W$ of length $m$, and let $A$ be the $(m \times n)$ transformation matrix of the linear map $f$ with respect to $v$ and $w$, with $\rk(A)=r$.

Now let $P$ be an invertible $(n \times n)$ matrix and $Q$ an invertible $(m \times m)$ matrix. It follows that $v\cdot P$ is a new basis of $V$ and $w \cdot Q$ is a new basis of $W$. The transformation matrix $B$ of the linear map $f \colon V \to W$ with respect to the new bases $v\cdot P$ and $w \cdot Q$ is then given by $B=Q^{-1} \cdot A \cdot P$.

Since
\begin{align*}
\im(A \cdot P) &= \{y \in \mathbb{R}^m \mid (A \cdot P)x = y \text{ has a solution}\} \\ &= \{y \in \mathbb{R}^m \mid A \cdot x = y \text{ has a solution}\}\\ &= \im(A)
\end{align*}

and $\im(Q^{-1})=\mathbb{R}^m$,
\begin{align*}
\im(Q^{-1} \cdot A \cdot P) &= \im(Q^{-1}\cdot A) \\ &= \{ y \in \mathbb{R}^m \mid (Q^{-1}\cdot A)x=y \text{ has a solution} \}\\&=\{y \in \mathbb{R}^m \mid Q^{-1}\cdot x = y \text{ has a solution}, x \in \im(A)\}\\ &=\im(A)
\end{align*}

follows immediately. Therefore $$\dim (\im(f))=\rk(f):=r=\rk(A)=\dim(\im(A))=\dim(\im(B))=\rk(B),$$ so the rank of $f$ is not determined by the choice of the bases of $V$ and $W$.

Best Answer

$\DeclareMathOperator{\im}{im}$ $\DeclareMathOperator{\rk}{rk}$

Your proof is not correct. First observe that $\im(A)$ can be written simply as $$\im(A) = \{ A \cdot x \mid x \in \mathbb R^n\} \subset \mathbb R^m .$$ You correctly show that $\im(A \cdot P) = \im(A)$. However, you have $$\im(Q^{-1} \cdot A \cdot P) = \{ Q^{-1} \cdot ((A \cdot P)\cdot x) = (Q^{-1} \cdot A \cdot P)\cdot x \mid x \in \mathbb R^n \} = \{ Q^{-1} \cdot z \mid z \in \im(A \cdot P) = \im(A) \} ,$$ but in general the latter differs from $\im(A)$. Anyway, this is irrelevant; you only have to compare dimensions.
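To see that the images can indeed differ, one may take a concrete case, say $m = n = 2$, $P = I$, $A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ and $Q^{-1} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. Then $$\im(A) = \operatorname{span}(e_1), \qquad \im(Q^{-1} \cdot A \cdot P) = \im\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \operatorname{span}(e_2),$$ so the two images are different subspaces of $\mathbb R^2$, although both have dimension $1$.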

For an $m\times n$-matrix $A$ let $l_A : \mathbb R^n \to \mathbb R^m$ be the linear map given by $l_A(x) = A \cdot x$. You know that $\rk(A) = \dim(\im(A)) = \dim (\im(l_A))$.
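Explicitly, $\im(l_A)$ is the column space of $A$: $l_A(e_j) = A \cdot e_j$ is the $j$-th column of $A$, so $\im(l_A) = \operatorname{span}(A \cdot e_1, \dots, A \cdot e_n)$, whose dimension is the maximal number of linearly independent columns of $A$, i.e. $\rk(A)$.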

Given a linear map $f : V \to W$ and bases $v =\{v_1,\dots,v_n\}$ of $V$ and $w =\{w_1,\dots,w_m\}$ of $W$, you can form the transformation matrix $A$ of $f$ with respect to $v, w$. Let $\phi_v : V \to \mathbb R^n$ be the linear isomorphism determined by $\phi_v(v_j) = e_j$, where the $e_j$ are the standard basis vectors of $\mathbb R^n$, and similarly $\phi_w : W \to \mathbb R^m$. Then by definition of $A$ we get $\phi_w \circ f \circ (\phi_v)^{-1} = l_A$. This immediately implies $\dim(\im(f)) = \dim(\im(l_A)) = \rk(A)$ because the dimension of linear subspaces is preserved under linear isomorphisms.
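Spelled out, since $(\phi_v)^{-1}$ is surjective, $$\im(l_A) = \{\phi_w(f((\phi_v)^{-1}(x))) \mid x \in \mathbb R^n\} = \{\phi_w(f(y)) \mid y \in V\} = \phi_w(\im(f)),$$ and $\phi_w$ restricts to a linear isomorphism $\im(f) \to \phi_w(\im(f))$, so both subspaces have the same dimension.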

Edited on request:

You know that the matrix $A$ is constructed as follows. Since $w$ is a basis of $W$, for each $v_j \in v$ there exists a unique representation $$f(v_j) = \sum_{i=1}^m a_{ij}w_i $$ with $a_{ij} \in \mathbb R$. Then we have $A = (a_{ij})$. What is the purpose of this matrix? Using the above isomorphisms $\phi_v, \phi_w$, we get $$(*) \quad A \cdot \phi_v(x) = l_A(\phi_v(x)) = \phi_w(f(x)) ,$$ i.e. we can reduce $f$ to matrix multiplication.

To verify $(*)$ it suffices to consider $x = v_j$, since both sides are linear in $x$. We get $$A \cdot \phi_v(v_j) = A \cdot e_j = (a_{1j},\dots,a_{mj})^{T} = \sum_{i=1}^m a_{ij}e_i$$ and $$\phi_w(f(v_j)) = \phi_w(\sum_{i=1}^m a_{ij}w_i) = \sum_{i=1}^m a_{ij}\phi_w(w_i) = \sum_{i=1}^m a_{ij}e_i .$$ Here it is essential that $\phi_v(v_j) = e_j$ and $\phi_w(w_i) = e_i$.
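For a concrete example, take $f = D \colon P_2 \to P_1$, differentiation on the spaces of real polynomials of degree at most $2$ and $1$, with bases $v = \{1, x, x^2\}$ and $w = \{1, x\}$. From $D(1) = 0$, $D(x) = 1$ and $D(x^2) = 2x$ we read off $$A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}.$$ For $p = a + bx + cx^2$ we have $\phi_v(p) = (a,b,c)^T$ and $$A \cdot \phi_v(p) = (b, 2c)^T = \phi_w(b + 2cx) = \phi_w(D(p)),$$ which is exactly $(*)$; moreover $\rk(A) = 2 = \dim(\im(D))$, since $D$ maps onto $P_1$.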
