I found this exercise in Artin. It asks me to prove that $\det A= \det A^T$ where $A^T$ is the transpose of the matrix $A$.
Can anyone please comment on whether my proof is correct?
Attempted solution: If $\det A=0$, then $A$ is non-invertible. We know that $A$ is invertible iff $A^T$ is invertible. As $A$ is non-invertible, so is $A^T$, and therefore $\det A^T=0$.
If $A$ is invertible, then $A=E_rE_{r-1}\dots E_1$ for some finite sequence of elementary matrices $E_i$. Using the fact that $(AB)^T=B^TA^T$ and induction, we infer that $A^T=E_1^TE_2^T\dots E_r^T$, so $\det A^T=\det(E_1^T)\det(E_2^T)\dots\det(E_r^T)$. As $E_i^T$ is an elementary matrix of the same kind as $E_i$, we have $\det E_i^T=\det E_i$. Hence $\det A^T=\det E_1\det E_2\dots\det E_r=\det(E_r\dots E_1)=\det A$.
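Not a proof, but a quick sanity check one can run: the sketch below (with my own helper names `sign`, `det`, `transpose`) computes the determinant directly from the Leibniz formula in plain Python and confirms $\det A=\det A^T$ on a sample integer matrix.

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation (tuple of 0..n-1), via its inversion count."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det(A):
    """Determinant via the Leibniz formula: sum over all permutations."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        term = sign(perm)
        for i in range(n):
            term *= A[i][perm[i]]
        total += term
    return total

def transpose(A):
    """Transpose a matrix given as a list of rows."""
    return [list(row) for row in zip(*A)]

A = [[2, -1, 0], [3, 5, 1], [0, 4, -2]]
assert det(A) == det(transpose(A))  # both equal -34
```

Of course, checking one matrix proves nothing; it only illustrates the statement being proved.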
Best Answer
While your proof is basically correct, I would not consider it my favourite proof of this fact, for the following reasons:
How I would prove this depends on which definition of the determinant has been given. If it is defined by the Leibniz formula (which in my opinion is the right definition to give, although a different motivation should be given first), then the proof just amounts to showing that every permutation has the same sign as its inverse, which is quite elementary (and can, by the way, be done in a way similar to your proof, but using a decomposition into transpositions).
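Concretely, granting that $\operatorname{sgn}(\sigma)=\operatorname{sgn}(\sigma^{-1})$, the Leibniz-formula proof is a one-line reindexing (a sketch, writing $A=(a_{ij})$): $$\det A^T=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{i=1}^n a_{\sigma(i),i}=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma^{-1})\prod_{j=1}^n a_{j,\sigma^{-1}(j)}=\sum_{\tau\in S_n}\operatorname{sgn}(\tau)\prod_{j=1}^n a_{j,\tau(j)}=\det A,$$ where the middle step substitutes $j=\sigma(i)$ in the product, and the last step sums over $\tau=\sigma^{-1}$ instead of $\sigma$.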
If on the other hand one has defined the determinant as the unique $n$-linear alternating function on an $n$-dimensional vector space taking the value $1$ on a given ordered basis (which still requires the Leibniz formula, or a substitute, to prove existence), then invariance under transposition is harder to see. Here one needs to pass to the dual vector space, and it seems necessary to use the fact that if $v_1,\ldots,v_n$ are vectors and $\alpha_1,\ldots,\alpha_n$ are linear forms, then the determinant $$ \begin{vmatrix}\alpha_1(v_1)&\ldots&\alpha_1(v_n)\\ \vdots&\ddots&\vdots\\\alpha_n(v_1)&\ldots&\alpha_n(v_n)\end{vmatrix} $$ is not only $n$-linear and alternating in $v_1,\ldots,v_n$ for fixed $\alpha_1,\ldots,\alpha_n$ (as is clear from the definition), but also $n$-linear and alternating in $\alpha_1,\ldots,\alpha_n$ for fixed $v_1,\ldots,v_n$; I can see no easy argument for this other than using the Leibniz formula. Once this is established, it is easy to show that a linear operator $f$ induces the same scalar factor when applied to $n$-tuples of vectors as when applied (on the right) to $n$-tuples of linear forms; in other words, its determinant is the same as that of its transpose (in essence, both cases correspond to inserting an $f$ between each $\alpha_i$ and $v_j$ in the above determinant).
This still does not provide a context in which using row operations to prove this fact would be a natural choice. It is not clear to me from the question whether Artin actually suggests this, and if so, why. The only reason I can think of for using such an argument is when somebody commands: thou shalt not use the Leibniz formula.