Linear Algebra – Proving Invertibility of Matrix A

I am interested in the case where $A$ is a matrix over a commutative ring, not necessarily a field. Is it still true that if $Ax = b$ has a solution for every $b$, then $A$ is invertible? I know that in this general setting, $A$ having trivial nullspace does not imply that it is invertible. However, I cannot seem to find a counterexample to the statement in the title, so I am starting to believe it is true. Any ideas on how to prove it?
Related Solutions
I found the answer in this book (in Section $6.4.14$, “Determinants, Ranks and Linear Equations”). I'd tried using a similar Laplace expansion myself but was missing the idea of using the largest dimension at which the minors are not all annihilated by the same non-zero element. I'll try to summarize the argument in somewhat less formal terms, omitting the tangential material included in the book.
Let $A$ be an $m\times n$ matrix over a commutative ring $R$. We want to find a condition for the system of equations $Ax=0$ with $x\in R^n$ to have a non-trivial solution. If $R$ is a field, the various definitions of the rank of $A$ coincide, including the column rank (the dimension of the column space), the row rank (the dimension of the row space) and the determinantal rank (the largest dimension of a non-zero minor). This is not the case for a general commutative ring. It turns out that for our present purposes a useful generalization of rank is the largest integer $k$ such that there is no non-zero element of $R$ that annihilates all minors of dimension $k$, with $k=0$ if there is no such integer.
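To make this generalized rank concrete, here is a brute-force sketch in Python (sympy; `mccoy_rank` is my own name for it, and the ring is assumed to be $\mathbb{Z}/m\mathbb{Z}$ so that candidate annihilators can be enumerated):

```python
from itertools import combinations
from sympy import Matrix

def minors(A, k, m):
    """All minors of dimension k of A, reduced mod m."""
    rows, cols = A.shape
    return [int(A.extract(list(ri), list(ci)).det()) % m
            for ri in combinations(range(rows), k)
            for ci in combinations(range(cols), k)]

def mccoy_rank(A, m):
    """Largest k such that no non-zero r in Z/m annihilates
    every minor of dimension k (0 if no such k exists)."""
    rank = 0
    for k in range(1, min(A.shape) + 1):
        ms = minors(A, k, m)
        if not any(all(r * d % m == 0 for d in ms) for r in range(1, m)):
            rank = k
    return rank

# Over Z/6, r = 3 annihilates every entry of A (all the
# minors of dimension 1), so the rank is already k = 0.
A = Matrix([[2, 4], [4, 2]])
print(mccoy_rank(A, 6))  # 0
```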
We want to show that $Ax=0$ has a non-trivial solution if and only if $k\lt n$.
If $k=0$, there is a non-zero element $r\in R$ which annihilates all matrix elements (the minors of dimension $1$), so there is a non-trivial solution
$$A\pmatrix{r\\\vdots\\r}=0\;.$$
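For a concrete instance (my example, not the book's): over $R=\mathbb{Z}/6\mathbb{Z}$, every entry of the matrix from the snippet above,
$$A=\pmatrix{2&4\\4&2}\;,$$
is annihilated by $r=3$, and indeed
$$A\pmatrix{3\\3}=\pmatrix{18\\18}=\pmatrix{0\\0}\;.$$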
Now assume $0\lt k\lt n$. If $m\lt n$, we can add rows of zeros to $A$ without changing $k$ or the solution set, so we can assume $k\lt n\le m$. There is some non-zero element $r\in R$ that annihilates all minors of dimension $k+1$, and there is a minor of dimension $k$ that isn't annihilated by $r$. Without loss of generality, assume that this is the minor of the first $k$ rows and columns.

Now consider the matrix $B$ formed of the first $k+1$ rows and columns of $A$, and form a solution $x$ from the $(k+1)$-th column of the adjugate of $B$ by multiplying it by $r$ and padding it with zeros. By construction, the first $k$ entries of $Ax$ are each $r$ times the determinant of a matrix with two equal rows, and thus vanish; the remaining entries are each $r$ times a minor of dimension $k+1$, and thus also vanish. But the $(k+1)$-th entry of $x$ is non-zero, being $r$ times the minor of the first $k$ rows and columns, which isn't annihilated by $r$. Thus we have constructed a non-trivial solution.
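Here is a minimal Python sketch of this construction (my own code, not the book's; it assumes the non-annihilated $k$-minor already sits in the top-left corner, as the proof arranges without loss of generality):

```python
from sympy import Matrix

def nontrivial_solution(A, k, r, m):
    """Scale the (k+1)-th adjugate column of the top-left
    (k+1) x (k+1) block of A by r and pad with zeros.
    Assumes r annihilates all minors of dimension k+1 but
    not the top-left minor of dimension k (ring: Z/m)."""
    n = A.shape[1]
    B = A.extract(list(range(k + 1)), list(range(k + 1)))
    col = B.adjugate()[:, k]          # (k+1)-th column of adj(B)
    x = Matrix([r * col[i] if i <= k else 0 for i in range(n)])
    return x.applyfunc(lambda v: v % m)

# Over Z/4: r = 2 annihilates every minor of dimension 2
# but not the 1 x 1 corner minor, so k = 1 < n = 3.
A = Matrix([[1, 0, 0], [0, 2, 0], [0, 0, 0]])
x = nontrivial_solution(A, k=1, r=2, m=4)
print(x.T)                                   # Matrix([[0, 2, 0]])
print((A * x).applyfunc(lambda v: v % 4).T)  # Matrix([[0, 0, 0]])
```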
In summary, if $k\lt n$, there is a non-trivial solution to $Ax=0$.
Now assume conversely that there is such a solution $x$. If $n\gt m$, there are no minors of dimension $n$, so $k\lt n$. Thus we can assume $n\le m$. The minors of dimension $n$ are the determinants of the matrices $B$ formed by choosing any $n$ rows of $A$. Since each row of $A$ times $x$ is $0$, we have $Bx=0$, and multiplying by the adjugate of $B$ yields $(\det B)\,x=0$. Since the non-trivial solution $x$ has at least one non-zero entry, that entry is a non-zero element of $R$ that annihilates all minors of dimension $n$, and thus $k\lt n$.
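The last step uses the identity $\operatorname{adj}(B)\,B=(\det B)\,I$, which holds over every commutative ring; a quick spot-check in Python (sympy, over $\mathbb{Z}$; my own snippet):

```python
from sympy import eye, randMatrix

# adj(B) * B == det(B) * I holds exactly over Z
B = randMatrix(3, 3, min=-5, max=5)
assert B.adjugate() * B == B.det() * eye(3)
```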
Specializing to the case $m=n$ of square matrices, we can conclude:
A system of linear equations $Ax=0$ with a square $n\times n$ matrix $A$ over a commutative ring $R$ has a non-trivial solution if and only if its determinant (its only minor of dimension $n$) is annihilated by some non-zero element of $R$, that is, if its determinant is a zero divisor or zero.
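A quick illustration of both directions (again my example): over $R=\mathbb{Z}/6\mathbb{Z}$, the matrix $A=\pmatrix{2&0\\0&1}$ has $\det A=2$, a zero divisor since $3\cdot2=0$, and indeed $A\pmatrix{3\\0}=\pmatrix{0\\0}$. Over $R=\mathbb{Z}$, the same matrix has determinant $2$, which is neither zero nor a zero divisor, so $Ax=0$ has only the trivial solution, even though $A$ is not invertible over $\mathbb{Z}$.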
The field is irrelevant. Let $\{Bx_1,\ldots,Bx_k\}$ be a basis of the column space of $B$. Since $A$ and $B$ have the same null space, every non-trivial linear combination of $\{Ax_1,\ldots,Ax_k\}$ is a non-zero vector; in other words, $\{Ax_1,\ldots,Ax_k\}$ is a linearly independent set. Therefore there exists a bijective linear transformation $P$ (defined on $K^m$) that maps each $Ax_j$ to $Bx_j$. Since $\operatorname{rank}A=n-\dim\ker A=n-\dim\ker B=\operatorname{rank}B=k$, these $k$ vectors form a basis of the column space of $A$, so for any $x$ we can write $Ax=\sum_j c_jAx_j$; then $A(x-\sum_j c_jx_j)=0$, hence $B(x-\sum_j c_jx_j)=0$ as well, and so $PAx=\sum_j c_jBx_j=Bx$. It follows that $PA=B$.
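The argument is constructive; here is a sketch in Python (sympy over the rationals; `transform` and `extend_to_basis` are my own names) that builds such a $P$ from the pivot columns of $B$:

```python
from sympy import Matrix, eye

def extend_to_basis(vecs, m):
    """Extend a linearly independent list of vectors in K^m
    to a basis by greedily adding standard basis vectors;
    returns the basis as the columns of an m x m matrix."""
    basis = list(vecs)
    for i in range(m):
        cand = basis + [eye(m)[:, i]]
        if Matrix.hstack(*cand).rank() == len(cand):
            basis = cand
    return Matrix.hstack(*basis)

def transform(A, B):
    """Invertible P with P*A == B, assuming A and B have
    the same null space."""
    m = A.shape[0]
    pivots = B.rref()[1]      # columns giving a basis of col(B)
    MA = extend_to_basis([A[:, j] for j in pivots], m)
    MB = extend_to_basis([B[:, j] for j in pivots], m)
    return MB * MA.inv()      # maps each A*x_j to B*x_j

# Both matrices here have the same (trivial) null space.
A = Matrix([[1, 2], [0, 0], [0, 1]])
B = Matrix([[1, 0], [1, 1], [0, 2]])
P = transform(A, B)
print(P * A == B, P.det() != 0)   # True True
```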
Related Question
- How to prove that if $A$ is invertible, then $A^{-1}$ and $A^2$ are invertible
- Relationship between null space and determinant
- Over a commutative unital ring, does $\det A = 0$ imply that $A\mathbf{x} = 0$ has a non-zero solution?
- Proving there is no matrix in $\mathbb{F}_2^{2\times2}$ that commutes with every invertible matrix
Best Answer
For each standard basis vector $e_1, \dots, e_n$, pick $b_1, \dots, b_n$ such that $A b_i = e_i$. Then 'squishing' the $b_1, \dots, b_n$ together as the columns of an $n \times n$ matrix $B$ gives $AB = I$. Over a commutative ring this already forces $B = A^{-1}$: taking determinants gives $\det A \cdot \det B = 1$, so $\det A$ is a unit, $A$ is invertible with inverse $(\det A)^{-1}\operatorname{adj} A$, and hence $B = A^{-1}(AB) = A^{-1}$.
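A small sketch of this in Python (sympy; my illustration, with $\det A = 1$ so that the solutions $b_i$ stay inside the ring $\mathbb{Z}$ even though `A.solve` works over the fraction field):

```python
from sympy import Matrix, eye

def right_inverse(A):
    """Stack the solutions b_i of A*b_i = e_i as the columns
    of a matrix B; if every e_i is reachable, A*B == I."""
    n = A.shape[0]
    return Matrix.hstack(*[A.solve(eye(n)[:, i]) for i in range(n)])

A = Matrix([[2, 1], [1, 1]])             # det = 1, invertible over Z
B = right_inverse(A)
print(B)                                 # Matrix([[1, -1], [-1, 2]])
print(A * B == eye(2), B * A == eye(2))  # True True
```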