Here is another, more matrix-heavy, way of proving it:
Outline:
Step 0: Every matrix $T$ of a linear operator on a finite-dimensional vector space over an algebraically closed field has at least one eigenpair $(\lambda, v)$.
Step 1: If $AB=BA$, then $A$ and $B$ have a common eigenvector.
Step 2: By induction on the dimension $n$, we conclude the claim using the previous steps.
Proof of Steps:
Step 0:
$\det(T-\lambda I)$ is a polynomial in $\lambda$ with coefficients in the algebraically closed field and thus has a root, which means there is at least one eigenvalue $\lambda$ with an associated eigenvector $v \neq 0$ for $T$.
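A quick numerical illustration of Step 0 over $\mathbb{C}$ (a sketch of mine with numpy; the rotation matrix is just an example): a real rotation has no real eigenvalues, but over the algebraically closed field $\mathbb{C}$ it still has an eigenpair.

```python
import numpy as np

# A real rotation by 90 degrees: no real eigenvalues, but over C the
# characteristic polynomial lambda^2 + 1 has roots +i and -i.
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])
lam, V = np.linalg.eig(T)               # eigenvalues +i and -i
v = V[:, 0]                             # eigenvector for lam[0]
print(np.allclose(T @ v, lam[0] * v))   # True
```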
Step 1:
Consider an eigenpair $(\lambda,v)$ of $A$ (which exists by Step 0), so $v \in \operatorname{null}(A-\lambda I)$. For any $u \in \operatorname{null}(A-\lambda I)$ it holds that $(A-\lambda I) Bu = B (A-\lambda I) u = 0$, so $\operatorname{null}(A-\lambda I)$ is invariant under $B$. Hence the restriction of $B$ to $\operatorname{null}(A-\lambda I)$ is a linear operator on $\operatorname{null}(A-\lambda I)$, and applying Step 0 again gives an eigenvector $\tilde{v}$ of $B$ inside $\operatorname{null}(A-\lambda I)$. Since also $A\tilde{v} = \lambda\tilde{v}$, the vector $\tilde{v}$ is a common eigenvector of $A$ and $B$.
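Step 1 can be traced numerically; below is a minimal sketch with numpy (the pair $A$, $B$ is an illustrative example of mine, with $B$ a polynomial in $A$ so that $AB=BA$ holds automatically):

```python
import numpy as np

# Commuting pair: B is a polynomial in A, hence AB = BA.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
B = A @ A + 3.0 * np.eye(2)

# Step 0: pick an eigenvalue of A.
lam = np.linalg.eigvals(A)[0]                # lambda = 2

# Orthonormal basis N of null(A - lam I) from the SVD.
U, s, Vh = np.linalg.svd(A - lam * np.eye(2))
N = Vh.conj().T[:, s < 1e-10]

# null(A - lam I) is B-invariant, so restrict B to it and take an
# eigenvector of the restriction (Step 1).
mu, W = np.linalg.eig(N.conj().T @ B @ N)
v = N @ W[:, 0]                              # common eigenvector of A and B

print(np.allclose(A @ v, lam * v), np.allclose(B @ v, mu[0] * v))
```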
Step 2:
Induction basis: for $1\times 1$ matrices the claim is trivial, since every $1\times 1$ matrix is already upper triangular.
Induction step:
Let $A$ and $B$ be $n\times n$ matrices over the algebraically closed field with $AB=BA$. By Step 1 they have a common eigenvector $v$. Extend it to a basis $\{v,w_1,w_2, \dots,w_{n-1}\}$ and express both matrices with respect to this basis to get $\tilde{A}$, $\tilde{B}$ of the form:
\begin{align}
\tilde{A} = \begin{bmatrix} \lambda_1& a^T_1\\0&\tilde{A}_1\end{bmatrix}
\end{align}
\begin{align}
\tilde{B} = \begin{bmatrix} \lambda_2& b^T_1\\0&\tilde{B}_1\end{bmatrix}
\end{align}
Now since $AB=BA$, equivalently $\tilde{A}\tilde{B}=\tilde{B}\tilde{A}$, and using the block structure above this reads:
\begin{align}
\begin{bmatrix} \lambda_2 \lambda_1& \lambda_2 a^T_1+b^T_1\tilde{A}_1\\0&\tilde{B}_1\tilde{A}_1\end{bmatrix} = \begin{bmatrix} \lambda_1 \lambda_2& \lambda_1 b^T_1+a^T_1\tilde{B}_1\\0&\tilde{A}_1\tilde{B}_1\end{bmatrix}
\end{align}
which implies $\tilde{A}_1\tilde{B}_1=\tilde{B}_1\tilde{A}_1$ for the lower-dimensional $(n-1)\times(n-1)$ blocks. By the induction hypothesis there is a new set of vectors $\{\tilde{w}_1,\tilde{w}_2, \dots,\tilde{w}_{n-1}\}$ in the span of $\{w_1,w_2, \dots,w_{n-1}\}$ that makes $\tilde{A}_1,\tilde{B}_1$ simultaneously triangularizable, and thus $\{v,\tilde{w}_1,\tilde{w}_2, \dots,\tilde{w}_{n-1}\}$ makes the original $\tilde{A},\tilde{B}$ (and hence $A$, $B$) simultaneously triangularizable.
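The induction above can be turned into a (numerically naive) recursive procedure. A sketch with numpy, assuming `common_eigvec` implements Step 1 and `simul_triangularize` implements Step 2; both names and the test pair $B = p(A)$ are my own illustrative choices, not part of the proof:

```python
import numpy as np

def common_eigvec(A, B, tol=1e-8):
    # Step 1: eigenvalue of A, orthonormal basis N of null(A - lam I),
    # then an eigenvector of B restricted to that B-invariant subspace.
    n = len(A)
    lam = np.linalg.eigvals(A)[0]
    U, s, Vh = np.linalg.svd(A - lam * np.eye(n))
    N = Vh.conj().T[:, s < tol * max(1.0, s[0])]
    mu, W = np.linalg.eig(N.conj().T @ B @ N)
    v = N @ W[:, 0]
    return v / np.linalg.norm(v)

def simul_triangularize(A, B):
    # Step 2: unitary Q such that Q^* A Q and Q^* B Q are upper triangular.
    n = len(A)
    if n == 1:                                # induction basis
        return np.eye(1, dtype=complex)
    v = common_eigvec(A, B)
    # Unitary change of basis whose first column spans the line C*v
    # (QR preserves the span of the first input column).
    M = np.column_stack([v, np.eye(n, dtype=complex)[:, :n - 1]])
    Q1, _ = np.linalg.qr(M)
    At = Q1.conj().T @ A @ Q1                 # [[lam1, a1^T], [0, A1]]
    Bt = Q1.conj().T @ B @ Q1                 # [[lam2, b1^T], [0, B1]]
    Q2 = simul_triangularize(At[1:, 1:], Bt[1:, 1:])   # induction step
    return Q1 @ np.block([[np.eye(1, dtype=complex), np.zeros((1, n - 1))],
                          [np.zeros((n - 1, 1)), Q2]])

# Illustrative commuting pair: B a polynomial in A, so AB = BA.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)).astype(complex)
B = A @ A - 2.0 * A + np.eye(4)
Q = simul_triangularize(A, B)
TA, TB = Q.conj().T @ A @ Q, Q.conj().T @ B @ Q
print(np.allclose(np.tril(TA, -1), 0, atol=1e-6),
      np.allclose(np.tril(TB, -1), 0, atol=1e-6))
```

The unitary basis extension via QR is a convenience: the proof only needs some basis starting with $v$, but an orthonormal one keeps the computation stable.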
Best Answer
Hint: Reduce your inequality to a simpler case as follows (assuming $B \succ 0$, so that $B^{-1/2}$ exists): $$ 0 \preceq A \preceq B \iff 0 \preceq B^{-1/2}AB^{-1/2} \preceq B^{-1/2}BB^{-1/2} = I $$
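To see the hint in action numerically (a sketch with numpy; the matrices are random instances I construct, and $B \succ 0$ is assumed so that $B^{-1/2}$ exists):

```python
import numpy as np

# Random instance of 0 <= A <= B in the Loewner order: A = X X^T is PSD
# and B = A + Y Y^T dominates it (and is positive definite a.s.).
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))
A = X @ X.T
B = A + Y @ Y.T

# B^{-1/2} via the spectral decomposition of B.
w, V = np.linalg.eigh(B)
B_inv_half = V @ np.diag(w ** -0.5) @ V.T

# The hint: 0 <= B^{-1/2} A B^{-1/2} <= I, i.e. its spectrum lies in [0, 1].
eigs = np.linalg.eigvalsh(B_inv_half @ A @ B_inv_half)
print(eigs.min() >= -1e-8, eigs.max() <= 1 + 1e-8)
```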