1) the set of diagonal matrices is path connected: if $A=\sum a_j E_{jj}$, $B=\sum b_j E_{jj}$ we take the map $t\mapsto \sum (ta_j+(1-t)b_j) E_{jj}$, $t\in[0,1]$.
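This is easy to check numerically; a minimal NumPy sketch with made-up $3\times 3$ diagonal matrices:

```python
import numpy as np

# Hypothetical example: two diagonal matrices A and B.
A = np.diag([1.0, 2.0, 3.0])
B = np.diag([4.0, 5.0, 6.0])

def path(t):
    """The segment t*A + (1-t)*B: diagonal for every t."""
    return t * A + (1 - t) * B

assert np.allclose(path(1.0), A)   # endpoint A at t=1
assert np.allclose(path(0.0), B)   # endpoint B at t=0
for t in np.linspace(0.0, 1.0, 11):
    M = path(t)
    assert np.allclose(M, np.diag(np.diag(M)))  # stays diagonal
```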
2) The set of unitaries is path connected. If $U,V$ are two unitaries, we can always write them as $U=e^{iA}$, $V=e^{iB}$ with $A,B$ hermitian (possible by the spectral theorem, since the eigenvalues of a unitary have modulus $1$). Then the map $t\mapsto e^{itA}e^{i(1-t)B}$, $t\in[0,1]$, gives a path from $V$ (at $t=0$) to $U$ (at $t=1$) within the unitary group.
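A NumPy sketch of this path (the random hermitian $A,B$ and the seed are arbitrary choices for the illustration; `expm_i` computes $e^{iH}$ through the spectral decomposition):

```python
import numpy as np

def expm_i(H):
    """e^{iH} for hermitian H, via H = Q diag(w) Q*."""
    w, Q = np.linalg.eigh(H)
    return (Q * np.exp(1j * w)) @ Q.conj().T

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = (X + X.conj().T) / 2            # hermitian
Y = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = (Y + Y.conj().T) / 2            # hermitian

U, V = expm_i(A), expm_i(B)

def path(t):
    """e^{itA} e^{i(1-t)B}: equals V at t=0 and U at t=1."""
    return expm_i(t * A) @ expm_i((1 - t) * B)

assert np.allclose(path(0.0), V)
assert np.allclose(path(1.0), U)
for t in np.linspace(0.0, 1.0, 11):
    P = path(t)
    assert np.allclose(P @ P.conj().T, np.eye(3))  # stays unitary
```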
3) The set of invertible hermitian matrices with positive eigenvalues is path connected. If $A,B$ are two such matrices, then $A=UD_AU^*$ and $B=VD_BV^*$ with $U,V$ unitary and $D_A,D_B$ diagonal with positive entries. By parts 1) and 2), there exist continuous $f,g:[0,1]\to M_n(\mathbb{C})$ with $f(0)=D_A$, $f(1)=D_B$, $g(0)=U$, $g(1)=V$. Then $t\mapsto g(t)f(t)g(t)^*$ is a continuous path from $A$ to $B$. Note that $f(t)$ is a convex combination of $D_A$ and $D_B$, so its diagonal entries, i.e. its eigenvalues, are positive for all $t\in[0,1]$.
4) GL$_n(\mathbb{C})$ is path connected: given $A,B$ invertible, we can write them as $A=RU$, $B=SV$ with $R,S$ hermitian with positive eigenvalues, and $U,V$ unitary. By 3) and 2) we can find continuous functions $f,g:[0,1]\to M_n(\mathbb{C})$ with $f(0)=R$, $f(1)=S$, $g(0)=U$, $g(1)=V$. Then the map $t\mapsto f(t)g(t)$ is a continuous path from $A$ to $B$ (note that $f(t)$ and $g(t)$ are invertible for every $t\in[0,1]$, and hence so is their product).
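The whole construction can be assembled numerically; a NumPy sketch (matrices and seed are made up; `herm_log` recovers a hermitian logarithm of a unitary through `np.linalg.eig`, which is adequate for a generic unitary with distinct eigenvalues, an assumption of this illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def expm_i(H):
    """e^{iH} for hermitian H, via H = Q diag(w) Q*."""
    w, Q = np.linalg.eigh(H)
    return (Q * np.exp(1j * w)) @ Q.conj().T

def herm_log(U):
    """A hermitian A with e^{iA} = U.  A generic unitary has distinct
    eigenvalues, so np.linalg.eig gives an essentially orthonormal
    eigenbasis; we symmetrize to absorb rounding error."""
    lam, Q = np.linalg.eig(U)
    A = Q @ np.diag(np.angle(lam)) @ np.linalg.inv(Q)
    return (A + A.conj().T) / 2

def polar(M):
    """M = R @ U with R hermitian positive and U unitary, via the SVD."""
    W, d, Vh = np.linalg.svd(M)
    return (W * d) @ W.conj().T, W @ Vh

def unitary_path(U, V, t):
    """Part 2's path: equals V at t=0 and U at t=1."""
    return expm_i(t * herm_log(U)) @ expm_i((1 - t) * herm_log(V))

def positive_path(R, S, t):
    """Part 3's path: equals R at t=0 and S at t=1."""
    a, P = np.linalg.eigh(R)      # R = P diag(a) P*
    b, Q = np.linalg.eigh(S)      # S = Q diag(b) Q*
    g = unitary_path(Q, P, t)     # g(0) = P, g(1) = Q
    d = (1 - t) * a + t * b       # entries stay positive on [0, 1]
    return (g * d) @ g.conj().T

A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
R, U = polar(A)                   # A = R U
S, V = polar(B)                   # B = S V

def path(t):
    """f(t) g(t): runs from A = R U at t=0 to B = S V at t=1."""
    return positive_path(R, S, t) @ unitary_path(V, U, t)

assert np.allclose(path(0.0), A, atol=1e-6)
assert np.allclose(path(1.0), B, atol=1e-6)
for t in np.linspace(0.0, 1.0, 11):
    assert abs(np.linalg.det(path(t))) > 1e-10  # invertible along the way
```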
It only remains to justify the polar decomposition $A=RU$. An easy way to see this is by using the singular value decomposition. We write $A=WDV$, with $W,V$ unitaries and $D$ diagonal with non-negative entries (positive if $A$ is invertible). Then we can write
$$
A=(WDW^*)WV=RU,
$$
where $R=WDW^*$ is hermitian with positive eigenvalues (because the entries of $D$ are positive), and $U=WV$ is a unitary.
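A NumPy sketch of this factorization (a random matrix stands in for an invertible $A$; a random complex matrix is invertible almost surely):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

W, d, Vh = np.linalg.svd(A)        # A = W diag(d) Vh, with d > 0 for invertible A
R = (W * d) @ W.conj().T           # R = W D W*: hermitian, positive eigenvalues
U = W @ Vh                         # product of unitaries, hence unitary

assert np.allclose(R, R.conj().T)              # R is hermitian
assert np.all(np.linalg.eigvalsh(R) > 0)       # its eigenvalues are the d_j > 0
assert np.allclose(U @ U.conj().T, np.eye(4))  # U is unitary
assert np.allclose(R @ U, A)                   # A = R U
```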
The proof of this property is not as easy as those of the basic properties of eigenvalues and eigenvectors. It can be shown by induction, or by explicit construction (see e.g. here).
I like to visualize the property in this way:
We know that a hermitian matrix with $n$ distinct eigenvalues has $n$ eigenvectors that are not only linearly independent (as for general matrices) but, more than that, orthogonal. We also know that such a matrix is diagonalizable by a unitary $U$ (both properties are easy to prove).
Now, if our hermitian matrix happens to have repeated (degenerate) eigenvalues, we can regard it as a perturbation of another hermitian matrix with distinct eigenvalues.
By a continuity argument, the perturbation that merges distinct (but perhaps close) eigenvalues into coincident ones cannot make the orthogonal eigenvectors linearly dependent.
Put another way: a hermitian matrix $A$ with repeated eigenvalues can be expressed as the limit of a sequence of hermitian matrices with distinct eigenvalues. Because every member of the sequence has $n$ orthogonal eigenvectors, a continuity argument shows that the limit cannot have linearly dependent eigenvectors.
This approach gives a nice intuition, IMO, and it can be formalized. But for a formal proof the other methods are preferable.
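The limiting picture above can be checked numerically; a NumPy sketch with a made-up degenerate matrix and a perturbation that splits its repeated eigenvalue:

```python
import numpy as np

# A hermitian matrix with a repeated eigenvalue (1 occurs twice):
A = np.diag([1.0, 1.0, 2.0])
# A hermitian perturbation that splits the degenerate eigenvalue:
H = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

for eps in [1e-1, 1e-3, 1e-6]:
    w, Q = np.linalg.eigh(A + eps * H)   # eigenvalues 1-eps, 1+eps, 2
    assert np.min(np.diff(np.sort(w))) > 0          # distinct for eps > 0
    assert np.allclose(Q.conj().T @ Q, np.eye(3))   # eigenvectors orthonormal

# The limit A itself still has an orthonormal eigenbasis:
w, Q = np.linalg.eigh(A)
assert np.allclose(Q.conj().T @ Q, np.eye(3))
```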
Best Answer
The orthogonal projections onto the eigenspaces of $A$ and of $B$ can be written as polynomials in $A$ and $B$ respectively, so they commute with each other and with $A$ and $B$. The nonzero products of an eigenspace projection for $A$ and an eigenspace projection for $B$ are orthogonal projections onto subspaces of ${\mathbb C}^n$ on which $A$ and $B$ both act as multiples of the identity. Take an orthonormal basis whose members all lie in those subspaces; the matrices of $A$ and $B$ in that basis will both be diagonal.
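The argument can be traced numerically; a NumPy sketch (the commuting pair and its eigenvalues are made up for the illustration, and the eigenspace projections are computed from `eigh` rather than as polynomials, which yields the same projections):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
_, Qc = np.linalg.eigh(X + X.conj().T)          # a random common eigenbasis
A = (Qc * [1.0, 1.0, 2.0]) @ Qc.conj().T        # commuting hermitian pair,
B = (Qc * [3.0, 4.0, 4.0]) @ Qc.conj().T        # each with a repeated eigenvalue
assert np.allclose(A @ B, B @ A)

def eig_projections(M):
    """Orthogonal projections onto the eigenspaces of a hermitian M."""
    w, Q = np.linalg.eigh(M)
    projs = []
    for lam in np.unique(np.round(w, 8)):
        cols = Q[:, np.abs(w - lam) < 1e-8]
        projs.append(cols @ cols.conj().T)
    return projs

# Nonzero products of an A-projection and a B-projection project onto joint
# eigenspaces; orthonormal bases of their ranges assemble a common basis.
basis_cols = []
for P in eig_projections(A):
    for R in eig_projections(B):
        PR = P @ R
        r = np.linalg.matrix_rank(PR, tol=1e-8)
        if r > 0:
            W, s, _ = np.linalg.svd(PR)         # range of PR = joint eigenspace
            basis_cols.append(W[:, :r])
T = np.hstack(basis_cols)                       # unitary change of basis

for M in (A, B):
    D = T.conj().T @ M @ T
    assert np.allclose(D, np.diag(np.diag(D)))  # diagonal in the common basis
```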