Show that $C$ invertible $\iff C^T$ invertible $\iff \operatorname{rank}_{col}(C) = n \iff \operatorname{rank}_{row}(C) = n$

linear-algebra, matrices, proof-verification

EDIT: I indeed forgot to add that $C\in M(n\times n;\mathbb{K})$.

Show that $i)\;C$ invertible $\iff ii)\;C^T$ invertible $\iff iii)\;\operatorname{rank}_{col}(C) = n \iff iv)\;\operatorname{rank}_{row}(C) = n$

$i)\implies ii)$

If $C$ is invertible, then $C^{-1}$ exists. If an inverse matrix exists, then no zero row can occur after Gaussian elimination, i.e. all row vectors are linearly independent. $C^T$ being invertible means that $(C^T)^{-1}=(C^{-1})^T$ holds. This again would be statement i), so there exists an inverse matrix $C^{-1}$.

$ii)\implies iii)$

If there is an inverse matrix, then no zero row can occur after Gaussian elimination, i.e. all row vectors are linearly independent. It follows immediately that $\operatorname{rank}(C) = n$, hence both the column rank and the row rank are $n$, because column rank and row rank are always equal.

$iii)\implies iv)$

Because column rank = row rank for any matrix, the row rank must be $n$ too.

$iv)\implies i)$

Because the row rank of $C$ is $n$, all $n$ rows are linearly independent, so no zero row would appear after Gaussian elimination. This is the prerequisite for $C$ being invertible, so i) follows.

Can you help me? I'm really unsure whether this proof is correct. I might have mixed up column and row rank, I guess.

Best Answer

It's tricky to speak definitively here, since we are missing some context, but I think there are some issues. Specifically, you seem too reliant on citing results that say, more or less directly, what it is you're supposed to be proving. For example,

$iii) \implies iv)$

Because column rank = row rank for any matrix, the row rank must be $n$ too.

is exactly what you're being asked to prove when showing iii) $\iff$ iv).

Normally, using a result given previously in a course is absolutely fine; results are there to be used! However, it's a bit trickier when the result is directly one of the questions being asked. Arguably, it might just be that the step from iii) to iv) is supposed to be the really simple step, where you just quote a single result from the course, but I don't think so, and the reason for this is (unfortunately) purely based on experience.

Here's some low-hanging fruit. Definitely, i) and ii) are equivalent. It's not hard to use Arthur's method to prove that $C^T$ is invertible if and only if $C$ is invertible. If $C$ is invertible, then $CC^{-1} = C^{-1} C = I$. Taking transposes yields $$(C^{-1})^T C^T = C^T (C^{-1})^T = I^T = I.$$ Note that $(C^{-1})^T$ fits the definition of the inverse of $C^T$. Specifically, the inverse of $C^T$ is a matrix $B$ such that $$B C^T = C^T B = I.$$ Such a matrix is necessarily unique if one exists at all, and when one does, we call it the inverse of $C^T$ and denote it $(C^T)^{-1}$. As we have proven, we can substitute $B = (C^{-1})^T$ and maintain equality. So, $(C^T)^{-1}$ is, by definition, equal to $(C^{-1})^T$.

In particular, this means $C^T$ is invertible, only from the assumption that $C$ is invertible. By switching the roles of $C$ and $C^T$, we also get the reverse implication: if $C^T$ is invertible, then $(C^T)^T = C$ is invertible too. Thus, i) $\iff$ ii).
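Not part of the proof, but if you want to sanity-check the identity $(C^T)^{-1} = (C^{-1})^T$ numerically, here is a minimal sketch with NumPy (the matrix below is just an arbitrary invertible example I made up):

```python
import numpy as np

# Arbitrary invertible example matrix (any C with nonzero determinant would do).
C = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

C_inv = np.linalg.inv(C)

# (C^{-1})^T should act as a two-sided inverse of C^T.
print(np.allclose(C_inv.T @ C.T, np.eye(3)))  # True
print(np.allclose(C.T @ C_inv.T, np.eye(3)))  # True
```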

More low-hanging fruit: the column space of $C$ is the row space of $C^T$, and vice versa. So, the column rank of $C$ is the row rank of $C^T$, and vice versa. So, if we can show that $C$ being invertible is logically equivalent to $C$ having full column rank, then the following are now equivalent:

  • $C$ is invertible
  • $C^T$ is invertible
  • $C$ has full column rank
  • $C^T$ has full row rank.

But further, exchanging the roles of $C$ and $C^T$ again, we get further equivalences!

  • $C^T$ is invertible
  • $C$ is invertible
  • $C^T$ has full column rank
  • $C$ has full row rank.

This would complete the full set of equivalences: i), ii), iii), iv). So, we now just need to establish i) $\iff$ iv).

To me, this is where you might want to cite a theorem. We can proceed directly too. Let's say that $x_1, \ldots, x_n$ are the column vectors of $C$. Then, it's a simple consequence of matrix multiplication that $$C \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} = a_1 x_1 + a_2 x_2 + \ldots + a_n x_n,$$ which is to say, an arbitrary linear combination of the columns of $C$. Let's assume this combination happens to equal $0$, for the purposes of testing linear independence. We must show that $a_1 = a_2 = \ldots = a_n = 0$. But then, if $C^{-1}$ exists, we have $$C \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} = 0 \implies C^{-1} C \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} = C^{-1} 0 \implies \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} = 0,$$ which indeed implies $a_1 = a_2 = \ldots = a_n = 0$. Thus, if $C$ is invertible, the columns are linearly independent, and hence the column rank is $n$.
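Purely as a numerical illustration of that matrix-vector identity (again on an arbitrary example matrix, not as part of the proof):

```python
import numpy as np

C = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
a = np.array([1.0, -2.0, 5.0])

# C @ a equals the linear combination a_1*x_1 + a_2*x_2 + a_3*x_3 of the columns x_i of C.
combo = a[0] * C[:, 0] + a[1] * C[:, 1] + a[2] * C[:, 2]
print(np.allclose(C @ a, combo))  # True

# Since this particular C is invertible, the only solution of C a = 0 is a = 0:
print(np.linalg.solve(C, np.zeros(3)))  # [0. 0. 0.]
```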

On the other hand, if the column rank of $C$ is $n$, then the column vectors are linearly independent. Constructing an inverse matrix from this information is not exactly trivial, so you may want to use a result here. I've babbled on enough for now, so I'll leave you with a sketch:

Because the columns are linearly independent, and there are $n$ of them, they must form a basis, and hence are spanning. In particular, there must be linear combinations of them that produce the standard basis vectors. Each linear combination is expressed in the form $Cv$, where $v$ is a column vector. Let $v_1, \ldots, v_n$ be such that $Cv_i = e_i$, where $e_i$ is the $i$th standard basis vector (as a column). Then, the matrix with columns $v_i$ is the inverse $C^{-1}$, proving $C$ is invertible (again, I would probably use a theorem here).
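If it helps to see the construction concretely, here is a small sketch that solves $Cv_i = e_i$ column by column on the same arbitrary example matrix (in a proof you would argue existence of the $v_i$ from the spanning property, not from a numerical solver):

```python
import numpy as np

C = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
n = C.shape[0]

# Solve C v_i = e_i for each standard basis vector e_i; the solutions v_i
# become the columns of the candidate inverse.
columns = [np.linalg.solve(C, np.eye(n)[:, i]) for i in range(n)]
C_inv = np.column_stack(columns)

print(np.allclose(C @ C_inv, np.eye(n)))  # True
print(np.allclose(C_inv @ C, np.eye(n)))  # True
```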

Hope that helps.
