Well, basically the title already captures my question. Given any matrix $M\in M_n(\mathbb{C})$ that satisfies $MM^T=M^TM$, does it follow that $M$ is diagonalizable? Here I mean the transpose of the matrix, not the conjugate transpose. The argument for normal matrices is not that difficult, but I don't know how to adapt it to this problem. Any help is appreciated.
Linear Algebra – How to Diagonalize Matrices That Commute with Their Transpose
diagonalization, linear algebra
Related Solutions
The proof of this property is not as easy as the proofs of the basic properties of eigenvalues and eigenvectors. It can be shown by induction, or by explicit construction (see e.g. here).
I like to visualize the property in this way:
We know that a Hermitian matrix with $n$ distinct eigenvalues has $n$ eigenvectors that are not only linearly independent (as for general matrices) but, more than that, orthogonal. We also know that such a matrix is diagonalizable by a unitary $U$ (both properties are easy to prove).
Now, if our Hermitian matrix happens to have repeated (degenerate) eigenvalues, we can regard it as a perturbation of another Hermitian matrix with distinct eigenvalues. By a continuity argument, we should see that a perturbation that transforms different (but perhaps close) eigenvalues into coincident ones cannot make the orthogonal eigenvectors linearly dependent.
Put another way: a Hermitian matrix $A$ with repeated eigenvalues can be expressed as the limit of a sequence of Hermitian matrices with distinct eigenvalues. Because every member of the sequence has $n$ orthogonal eigenvectors, by a continuity argument the limit cannot end up with linearly dependent eigenvectors.
This approach leads to a nice intuition, IMO, and it can be formalized. But for a formal proof the other methods are to be preferred.
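To make the intuition concrete, here is a minimal NumPy sketch (my own illustration, not part of the original answer): perturbing a Hermitian matrix with a repeated eigenvalue into Hermitian matrices with distinct eigenvalues never costs us orthonormality of the eigenvectors, however small the perturbation.

```python
import numpy as np

A = np.eye(2)                                  # Hermitian, eigenvalue 1 repeated
for eps in (1e-1, 1e-4, 1e-8):
    perturbation = eps * np.array([[0.0, 1.0],
                                   [1.0, 0.0]])     # Hermitian perturbation
    w, V = np.linalg.eigh(A + perturbation)        # eigenvalues 1 - eps, 1 + eps
    assert np.allclose(np.sort(w), [1 - eps, 1 + eps])
    # the eigenvectors stay orthonormal for every eps, no matter how small:
    assert np.allclose(V.T @ V, np.eye(2))
```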
$M$ can be diagonalized iff its minimal polynomial $m$ splits completely into linear, non-repeated factors, $m(\lambda)=(\lambda-\lambda_1)(\lambda-\lambda_2)\cdots(\lambda-\lambda_N)$. The usual proof of this involves the unique $(N-1)$-st order (Lagrange) polynomials $p_{k}$ such that $p_{k}(\lambda_{j})=\delta_{j,k}$. Then $\sum_{k=1}^{N}p_{k}\equiv 1$, because the sum is an $(N-1)$-st order polynomial which is $1$ in $N$ places. Therefore,
$$ I = p_1(M)+p_2(M)+\cdots+p_N(M). $$
Furthermore, $p_j(M)p_k(M)=0$ for $j \ne k$, because $m$ divides $p_j p_k$ for $j \ne k$. Hence each $p_j(M)$ is a projection matrix; to see this, apply $p_j(M)$ to the identity above:
$$ p_j(M)=p_j(M)^{2}. $$
Furthermore, $(M-\lambda_k I)p_k(M)=0$, which implies
$$ M = \lambda_1 p_1(M)+\lambda_2 p_2(M)+\cdots+ \lambda_N p_N(M). $$

If $M_1,M_2,M_3,\cdots,M_J$ are commuting diagonalizable matrices, you can perform the above construction for each $M_j$ in order to obtain eigenvalues $\lambda_{j,1},\lambda_{j,2},\cdots,\lambda_{j,K_{j}}$ and polynomials $p_{j,1},p_{j,2},\cdots,p_{j,K_j}$ for each $1 \le j \le J$. Because the $M_j$ commute, the same is true of all of the $p_{j,k}(M_j)$. Now form all of the distinct products
$$ P_{k_1,k_2,\cdots,k_J}=p_{1,k_1}(M_1)p_{2,k_2}(M_2)\cdots p_{J,k_J}(M_J). $$
The sum of all such products is $I$, and every such $P$ is a projection. Discard the products that turn out to be $0$. Because the order of the factors may be rearranged without changing $P$, it follows that
$$ (M_{j}-\lambda_{j,k_{j}}I)P_{k_1,k_2,\cdots,k_J}=0,\;\;\; 1 \le j \le J. $$
So there are non-zero projections $Q_{1},Q_{2},\cdots,Q_{m}$ whose sum is $I$, whose pairwise products are $0$, and such that every $M_{j}$ is a scalar multiple of the identity on the range of a given $Q_{k}$. Choose a basis for each of the ranges of the $Q_{j}$. Combining these bases produces a basis with respect to which each $M_j$ has a diagonal representation.
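As a concrete illustration of the single-matrix construction (my own sketch, not part of the answer above), here is a short NumPy check: for a diagonalizable $M$ with distinct eigenvalues, the Lagrange polynomials $p_k$ evaluated at $M$ give projections that sum to $I$ and recover $M=\sum_k \lambda_k\,p_k(M)$.

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [0.0, 3.0]])                 # diagonalizable, eigenvalues 2 and 3
lams = np.linalg.eigvals(M)                # the distinct eigenvalues lambda_k
n = len(M)

def p(k, X):
    """Evaluate the Lagrange polynomial p_k (with p_k(lambda_j) = delta_{jk}) at X."""
    result = np.eye(n)
    for j, lam in enumerate(lams):
        if j != k:
            result = result @ (X - lam * np.eye(n)) / (lams[k] - lam)
    return result

proj = [p(k, M) for k in range(len(lams))]
assert np.allclose(sum(proj), np.eye(n))                       # I = p_1(M)+...+p_N(M)
assert all(np.allclose(P @ P, P) for P in proj)                # each p_k(M) is a projection
assert np.allclose(sum(l * P for l, P in zip(lams, proj)), M)  # M = sum_k lambda_k p_k(M)
```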
Best Answer
You cannot prove it because it is false. Consider $v=(1,i)^T$ and $M=vv^T$. Since $M$ is symmetric, we of course have $MM^T=M^TM$. However, as $v^Tv=1+i^2=0$, we get $M^2=v(v^Tv)v^T=0$, so $M$ is a nonzero nilpotent matrix. Hence it is not diagonalisable.
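For completeness, a quick NumPy verification of this counterexample (my own check, not part of the answer):

```python
import numpy as np

v = np.array([[1.0], [1.0j]])          # column vector v = (1, i)^T
M = v @ v.T                            # plain transpose, NOT the conjugate transpose

assert np.allclose(M, M.T)             # M is (complex) symmetric ...
assert np.allclose(M @ M.T, M.T @ M)   # ... so it trivially commutes with M^T
assert np.allclose(v.T @ v, 0)         # v^T v = 1 + i^2 = 0
assert np.allclose(M @ M, 0)           # hence M^2 = v (v^T v) v^T = 0
assert not np.allclose(M, 0)           # M is nonzero and nilpotent: not diagonalisable
```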