Here is another (more matrix-heavy) way of proving it:
Outline:
Step 0: Every linear operator $T$ on a finite-dimensional vector space over an algebraically closed field has at least one eigenpair $(\lambda, v)$.
Step 1: If $AB=BA$, then there exists one common eigenvector.
Step 2: By induction over the dimension $n$, we conclude the claim using the previous steps.
Proof of Steps:
Step 0:
$\det(T-\lambda I)$ is a polynomial with coefficients in the algebraically closed field and thus has a root, which means there is at least one eigenvalue $\lambda$ with an associated eigenvector $v$ for $T$.
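Over $\mathbb{C}$ this is easy to see numerically. The sketch below (my own illustration, not part of the proof) takes a rotation matrix, which has no real eigenvalues, builds its characteristic polynomial, and extracts an eigenpair from a complex root:

```python
import numpy as np
from scipy.linalg import null_space

# Rotation by 90 degrees: no real eigenvalues, but C is algebraically closed.
T = np.array([[0., -1.],
              [1.,  0.]])

# Coefficients of det(T - lambda*I) = lambda^2 + 1.
coeffs = np.poly(T)

# Any root of the characteristic polynomial is an eigenvalue of T.
lam = np.roots(coeffs)[0]

# Its eigenvectors span null(T - lambda*I).
v = null_space(T - lam * np.eye(2))[:, 0]
```

Here `lam` comes out as $\pm i$, and `v` is a genuine (complex) eigenvector.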
Step 1:
Consider an eigenpair $(\lambda,v)$ of $A$ (which exists by Step 0), so $v \in \text{null}(A-\lambda I)$. For any $u \in \text{null}(A-\lambda I)$ it holds that $(A-\lambda I) Bu = B (A-\lambda I) u = 0$, so $\text{null}(A-\lambda I)$ is invariant under $B$. Hence the restriction of the linear operator associated with $B$ to $\text{null}(A-\lambda I)$ is a linear operator on $\text{null}(A-\lambda I)$, and by applying Step 0 again there exists an eigenvector $\tilde{v}$ of $B$ in $\text{null}(A-\lambda I)$; therefore $\tilde{v}$ is a common eigenvector of $A$ and $B$.
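Step 1 can be sketched numerically as follows (the commuting pair below is my own example; any polynomial in a matrix commutes with it, and `null_space` returns an orthonormal basis, so the restriction of $B$ is just $E^* B E$):

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative commuting pair: any polynomial in A commutes with A.
A = np.array([[2., 1.],
              [0., 2.]])
B = A @ A + 3. * np.eye(2)           # B = A^2 + 3I, so AB = BA

lam = np.linalg.eigvals(A)[0]        # Step 0: some eigenvalue of A
E = null_space(A - lam * np.eye(2))  # orthonormal basis of null(A - lam*I)

# B maps null(A - lam*I) into itself; restrict B to that subspace.
B_E = E.conj().T @ B @ E             # E has orthonormal columns, so E^+ = E*

# An eigenvector of the restriction, pushed back up, is a common eigenvector.
mu, W = np.linalg.eig(B_E)
v_tilde = E @ W[:, 0]
```

`v_tilde` is then an eigenvector of $A$ for `lam` and of $B$ for `mu[0]`.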
Step 2:
Induction basis: the claim is trivial for $1\times 1$ matrices, which are already triangular.
Induction step:
Let $A$ and $B$ be commuting $n\times n$ matrices over the algebraically closed field; by Step 1 they have a common eigenvector $v$. Extend it to a basis $\{v,w_1,w_2, ...,w_{n-1}\}$ and transform both matrices with respect to this basis to get $\tilde{A}$, $\tilde{B}$, which look like this:
\begin{align}
\tilde{A} = \begin{bmatrix} \lambda_1& a^T_1\\0&\tilde{A}_1\end{bmatrix}
\end{align}
\begin{align}
\tilde{B} = \begin{bmatrix} \lambda_2& b^T_1\\0&\tilde{B}_1\end{bmatrix}
\end{align}
Now since $AB=BA$, equivalently $\tilde{A}\tilde{B}=\tilde{B}\tilde{A}$, which, using the block structure above, reads:
\begin{align}
\begin{bmatrix} \lambda_1 \lambda_2& \lambda_1 b^T_1+a^T_1\tilde{B}_1\\0&\tilde{A}_1\tilde{B}_1\end{bmatrix} = \begin{bmatrix} \lambda_2 \lambda_1& \lambda_2 a^T_1+b^T_1\tilde{A}_1\\0&\tilde{B}_1\tilde{A}_1\end{bmatrix}
\end{align}
which implies $\tilde{A}_1\tilde{B}_1=\tilde{B}_1\tilde{A}_1$ for the lower-dimensional $(n-1)\times(n-1)$ blocks. By the induction hypothesis, there is a new set of vectors $\{\tilde{w}_1,\tilde{w}_2, ...,\tilde{w}_{n-1}\}$ in the span of $\{w_1,w_2, ...,w_{n-1}\}$ that makes $\tilde{A}_1,\tilde{B}_1$ simultaneously triangularizable, and thus $\{v,\tilde{w}_1,\tilde{w}_2, ...,\tilde{w}_{n-1}\}$ makes the original $\tilde{A},\tilde{B}$ simultaneously triangularizable.
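The induction in Step 2 is effectively an algorithm. Below is a rough NumPy sketch (the function names and the choice of completing $v$ to an orthonormal basis via QR are mine) that recursively peels off a common eigenvector:

```python
import numpy as np
from scipy.linalg import null_space

def common_eigvec(A, B):
    """Common eigenvector of commuting A, B (Step 1)."""
    n = A.shape[0]
    lam = np.linalg.eigvals(A)[0]
    E = null_space(A - lam * np.eye(n))        # eigenspace of A
    mu, W = np.linalg.eig(E.conj().T @ B @ E)  # B restricted to it
    return E @ W[:, 0]

def simultaneous_triangularize(A, B):
    """Unitary Q such that Q* A Q and Q* B Q are upper triangular (Step 2)."""
    n = A.shape[0]
    if n == 1:
        return np.eye(1)                       # induction basis
    v = common_eigvec(A, B)
    # Complete v to an orthonormal basis; QR keeps v/||v|| as first column.
    Q1 = np.linalg.qr(np.column_stack([v, np.eye(n)]))[0]
    At = Q1.conj().T @ A @ Q1                  # [[lam1, a^T], [0, A1]]
    Bt = Q1.conj().T @ B @ Q1                  # [[lam2, b^T], [0, B1]]
    # Induction step on the (n-1) x (n-1) blocks.
    Q2 = simultaneous_triangularize(At[1:, 1:], Bt[1:, 1:])
    return Q1 @ np.block([[np.eye(1), np.zeros((1, n - 1))],
                          [np.zeros((n - 1, 1)), Q2]])

# Example: a cyclic shift (complex eigenvalues) and a polynomial in it.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
B = A @ A + 2. * np.eye(3)
Q = simultaneous_triangularize(A, B)
```

Conjugating both `A` and `B` by the resulting `Q` yields upper-triangular matrices (up to rounding).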
For any linear transformation $T:X \to Y$, select a basis as follows:
Let $x_{r+1},\dots,x_{n}$ be a basis for the null space of $T$, and extend it to a basis $x_1,\dots,x_n$ of $X$.
For $i = 1,\dots,r$, let $y_i = T(x_i)$; these are linearly independent, so extend them to a basis $y_1,\dots,y_m$ of $Y$.
With respect to these bases, the matrix of the linear transformation is
$$
\pmatrix{
1\\
&1\\
&&\ddots\\
&&&1\\
&&&&0\\
&&&&&\ddots\\
&&&&&& 0
}
$$
where the number of $1$s is equal to $r$, the rank of the transformation.
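This basis construction can be sketched numerically (the particular map `T` is my own example; `null_space` supplies orthonormal bases, and "extend to a basis" is realized by taking orthogonal complements):

```python
import numpy as np
from scipy.linalg import null_space

# Example map T: R^4 -> R^3 of rank 2 (third row = first row + second row).
T = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])

N = null_space(T)                 # columns x_{r+1},...,x_n: basis of null(T)
r = T.shape[1] - N.shape[1]       # rank r by rank-nullity

# Extend to a basis of the domain via the orthogonal complement of null(T).
C = null_space(N.T)               # columns x_1,...,x_r
X = np.column_stack([C, N])       # domain basis x_1,...,x_n

Y1 = T @ C                        # y_i = T(x_i) for i = 1,...,r
Y2 = null_space(Y1.T)             # extend to a basis of the codomain
Y = np.column_stack([Y1, Y2])     # codomain basis y_1,...,y_m

# Matrix of T in these bases: r ones on the diagonal, zeros elsewhere.
M = np.linalg.inv(Y) @ T @ X
```

Here `M` comes out (up to rounding) as the displayed matrix with $r = 2$ ones.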
Best Answer
Hint: Suppose that $C = A \oplus B: M \oplus N \to M' \oplus N'$. Let $x_1,\dots,x_m \in M$ be such that $A(x_1),\dots,A(x_m)$ is a basis of the image of $A$. Let $y_1,\dots,y_n \in N$ be such that $B(y_1),\dots,B(y_n)$ is a basis of the image of $B$.
Show that the set $$ \{(A(x_1),0),(A(x_2),0),\dots,(A(x_m),0),(0,B(y_1)),\dots,(0,B(y_n))\} $$ is a basis of the image of $C$ (which is simply the direct sum of the images of $A$ and $B$).
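A quick numerical check of the hint (the particular `A` and `B` below are my own examples, with $M = N = \mathbb{R}^2$, $\operatorname{rank} A = 1$ and $\operatorname{rank} B = 2$):

```python
import numpy as np

# Example maps: A has rank 1, B has rank 2.
A = np.array([[1., 1.],
              [2., 2.]])
B = np.array([[1., 0.],
              [1., 1.]])

# C = A (+) B acts block-diagonally on M (+) N.
C = np.block([[A, np.zeros((2, 2))],
              [np.zeros((2, 2)), B]])

# A(x_1) spans im(A); B(y_1), B(y_2) span im(B).
x1 = np.array([1., 0.])
y1, y2 = np.array([1., 0.]), np.array([0., 1.])
basis = np.column_stack([
    np.concatenate([A @ x1, np.zeros(2)]),   # (A(x_1), 0)
    np.concatenate([np.zeros(2), B @ y1]),   # (0, B(y_1))
    np.concatenate([np.zeros(2), B @ y2]),   # (0, B(y_2))
])
```

The three stacked vectors are independent, lie in the image of `C`, and their count equals $\operatorname{rank} C = \operatorname{rank} A + \operatorname{rank} B$, so they form a basis of the image.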