Let me sketch a proof of existence of the Jordan canonical form which, I believe, makes it somewhat natural.
Let us say that a linear endomorphism $f:V\to V$ of a nonzero finite dimensional vector space is decomposable if there exist proper subspaces $U_1$, $U_2$ of $V$ such that $V=U_1\oplus U_2$, $f(U_1)\subseteq U_1$ and $f(U_2)\subseteq U_2$, and let us say that $f$ is indecomposable if it is not decomposable. In terms of bases and matrices, it is easy to see that $f$ is decomposable iff there exists a basis of $V$ with respect to which the matrix of $f$ has a non-trivial diagonal block decomposition (that is, it is block diagonal with two blocks).
Now it is not hard to prove the following:
Lemma 1. If $f:V\to V$ is an endomorphism of a nonzero finite dimensional vector space, then there exist $n\geq1$ and nonzero subspaces $U_1$, $\dots$, $U_n$ of $V$ such that $V=\bigoplus_{i=1}^nU_i$, $f(U_i)\subseteq U_i$ for all $i\in\{1,\dots,n\}$ and for each such $i$ the restriction $f|_{U_i}:U_i\to U_i$ is indecomposable.
Indeed, you can more or less imitate the usual argument that shows that every natural number larger than one is a product of prime numbers.
This lemma allows us to reduce the study of linear maps to the study of indecomposable linear maps. So we should start by trying to see what an indecomposable endomorphism looks like.
There is a general fact that comes in useful at times:
Lemma. If $h:V\to V$ is an endomorphism of a finite dimensional vector space, then there exists an $m\geq1$ such that $V=\ker h^m\oplus\def\im{\operatorname{im}}\im h^m$.
I'll leave its proof as a pleasant exercise.
So let us fix an indecomposable endomorphism $f:V\to V$ of a nonzero finite dimensional vector space over a field $k$. As $k$ is algebraically closed, there is a nonzero $v\in V$ and a scalar $\lambda\in k$ such that $f(v)=\lambda v$. Consider the map $h=f-\lambda\mathrm{Id}:V\to V$: we can apply the lemma to $h$, and we conclude that $V=\ker h^m\oplus\def\im{\operatorname{im}}\im h^m$ for some $m\geq1$. Moreover, it is very easy to check that $f(\ker h^m)\subseteq\ker h^m$ and that $f(\im h^m)\subseteq\im h^m$. Since we are supposing that $f$ is indecomposable, one of $\ker h^m$ or $\im h^m$ must be the whole of $V$. As $v$ is in the kernel of $h$, it is also in the kernel of $h^m$; being nonzero, it cannot lie in $\im h^m$ (the two subspaces intersect trivially), and we see that $\ker h^m=V$.
This means, precisely, that $h^m:V\to V$ is the zero map, and we see that $h$ is nilpotent. Suppose its nilpotency index is $r\geq1$, and let $w\in V$ be a vector such that $h^{r-1}(w)\neq0=h^r(w)$.
Lemma. The set $\mathcal B=\{w,h(w),h^2(w),\dots,h^{r-1}(w)\}$ is a basis of $V$.
This is again a nice exercise.
Now you should be able to check easily that the matrix of $f$ with respect to the basis $\mathcal B$ of $V$ is a Jordan block.
In this way we conclude that every indecomposable endomorphism of a nonzero finite dimensional vector space has, in an appropriate basis, a Jordan block as a matrix.
According to Lemma 1, then, every endomorphism of a nonzero finite dimensional vector space has, in an appropriate basis, a block diagonal matrix whose blocks are Jordan blocks; that is, a matrix in Jordan canonical form.
There are several mistakes in what you wrote, but the critical one is the claim that $m(\lambda) = n(\lambda)$. It is not true that the index at which $\ker (A - \lambda I)^k$ stabilizes is the algebraic multiplicity of $\lambda$. For example, consider the nilpotent matrix
$$ A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}. $$
The characteristic polynomial of $A$ is $x^3$, so $m(0) = 3$, while $A^2 = 0$, so $n(0) = 2$. This is the phenomenon that causes several Jordan blocks to appear: if you pick $x \in \mathbb{R}^3 \setminus \ker(A)$ (for example $x = e_2$), then $\{ x, Ax \}$ is linearly independent, but $A^2x = 0$, so you don't have enough vectors to form a basis and you'll need to adjoin another block.
Best Answer
Consider a $3\times 3$ Jordan block $$\begin{pmatrix}\lambda&0&0\\ 1&\lambda&0\\ 0&1&\lambda \end{pmatrix}.$$ Now subtract the matrix $\lambda I$ to get the matrix $$\begin{pmatrix}0&0&0\\ 1&0&0\\ 0&1&0\end{pmatrix}.$$ Can you see why the kernel of this matrix has dimension $1$? And if we subtract $\mu I$, where $\mu\neq\lambda$, we get the matrix $$\begin{pmatrix}\lambda-\mu&0&0\\ 1&\lambda-\mu&0\\ 0&1&\lambda-\mu\end{pmatrix}.$$ Can you see why the kernel of this matrix has dimension $0$?
This applies to all Jordan blocks. The kernel of $J_\lambda-\lambda I$ for any Jordan block $J_\lambda$ has dimension $1$, always. And the kernel of $J_\lambda-\mu I$ where $\mu\neq\lambda$ has dimension $0$, always. So if we have a block diagonal matrix consisting of only Jordan blocks, each block with eigenvalue $\lambda$ contributes exactly one dimension to the kernel of $T-\lambda I$, and there are no other contributions from the rest of the blocks. So $\dim\ker(T-\lambda I)$ is the number of blocks corresponding to the eigenvalue $\lambda$.