We denote the algebraically closed ground field by $k$ and the given finite-dimensional $k$-vector space by $V$.
There are at least two (equivalent) ways to use induction:
We can construct an increasing chain of subspaces $U_i \subseteq U_{i+1}$ of $V$ which are spanned by eigenvectors of $T$ as follows:
Since $k$ is algebraically closed there exists an eigenvector $v_1 \in V$ for $T$, so $U_1 := \langle v_1 \rangle$ is a $T$-invariant subspace of $V$.
Since $T$ is semisimple there exists a $T$-invariant subspace $W_1 \subseteq V$ with $V = U_1 \oplus W_1$.
Since $k$ is algebraically closed there exists an eigenvector $v_2 \in V$ for the restriction $T|_{W_1}$.
This is also an eigenvector for $T$, so $U_2 := \langle v_1, v_2 \rangle$ is a $T$-invariant subspace of $V$.
Since $T$ is semisimple there exists a $T$-invariant subspace $W_2 \subseteq V$ with $V = U_2 \oplus W_2$.
By continuing this process we get decompositions
\begin{align*}
V = U_1 \oplus W_1 = U_2 \oplus W_2 = U_3 \oplus W_3 = \dotsb
\end{align*}
where the summands $U_i$ are given by $U_i = \langle v_1, v_2, \dotsc, v_i \rangle$ with $v_1, v_2, v_3, \dotsc \in V$ being eigenvectors of $T$.
Since $v_{i+1}$ is a non-zero vector in $W_i$ and $V = U_i \oplus W_i$, we have $v_{i+1} \notin U_i = \langle v_1, \dotsc, v_i \rangle$; it follows that these eigenvectors are linearly independent, so $\dim U_i = i$.
It follows that the above process terminates after $n = \dim V$ many steps with $U_n = V$ and $W_n = 0$.
Then $v_1, \dotsc, v_n$ is a basis of $V$ consisting of eigenvectors of $T$.
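The iterative construction can be illustrated numerically. The following is a minimal sketch, using real symmetric matrices as a concrete stand-in for semisimple operators (for these, the orthogonal complement of the span of the eigenvectors found so far is an invariant complement, so it plays the role of the $W_i$); the function name and the example matrix are just for illustration.

```python
import numpy as np

def eigenbasis_by_peeling(T):
    """Build a basis of eigenvectors of T by repeatedly splitting off
    the span of one eigenvector, as in the inductive construction.
    Assumes T is real symmetric (hence semisimple), so the orthogonal
    complement serves as the invariant complement W_i."""
    n = T.shape[0]
    basis = []                       # the eigenvectors v_1, v_2, ...
    Q = np.eye(n)                    # orthonormal basis of the current W_i
    for _ in range(n):
        S = Q.T @ T @ Q              # matrix of the restriction T|_{W_i}
        _, vecs = np.linalg.eigh(S)
        v = Q @ vecs[:, 0]           # an eigenvector of T lying inside W_i
        basis.append(v)
        Q = Q @ vecs[:, 1:]          # new complement W_{i+1} inside W_i
    return np.column_stack(basis)

T = np.array([[2.0, 1.0],
              [1.0, 2.0]])
P = eigenbasis_by_peeling(T)
D = np.linalg.inv(P) @ T @ P         # diagonal in the eigenvector basis
```

The loop terminates after exactly $n = \dim V$ steps, mirroring the argument above: each pass shrinks the complement by one dimension.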
For the second proof we will need the following lemma:
Lemma: Let $T \colon V \to V$ be semisimple and let $U \subseteq V$ be a $T$-invariant subspace.
Then the restriction $T|_U$ is also semisimple.
Proof: Let $W \subseteq U$ be a $T|_U$-invariant subspace.
Then $W$ is also a $T$-invariant subspace of $V$, so there exists some $T$-invariant subspace $W' \subseteq V$ with $V = W \oplus W'$.
Then $W'' := W' \cap U$ is a $T$-invariant subspace of $U$.
We then have that $U = W \oplus W''$, which can be seen as follows:
We have that
$$
W \cap W''
= W \cap W' \cap U
= 0
$$
since $W \cap W' = 0$.
To see that $U = W + W''$ note that we can write $u \in U$ as $u = w + w'$ with $w \in W$ and $w' \in W'$ since $V = W + W'$.
Then $w' = u - w \in U + W \subseteq U$ (here we use that $W$ is a subspace of $U$), so $w' \in W' \cap U = W''$ and hence $u = w + w' \in W + W''$.
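The construction in the lemma can be checked on a concrete example. The sketch below (all names hypothetical) takes $T = \operatorname{diag}(1, 1, 2)$, the invariant subspace $U = \langle e_1, e_2 \rangle$, $W = \langle e_1 \rangle$ inside $U$, and the $T$-invariant complement $W' = \langle e_2, e_3 \rangle$ of $W$ in $V$; it computes $W'' = W' \cap U$ numerically and checks $U = W \oplus W''$.

```python
import numpy as np

def intersection(A, B):
    """Basis of col(A) ∩ col(B): solve A x = B y via the nullspace
    of the block matrix [A | -B], computed from an SVD."""
    M = np.hstack([A, -B])
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > 1e-10))
    null = Vt[rank:].T               # nullspace of M
    return A @ null[:A.shape[1]]     # the common vectors A x

U  = np.eye(3)[:, :2]                # basis of U  = span(e1, e2)
W  = np.eye(3)[:, :1]                # basis of W  = span(e1)
Wp = np.eye(3)[:, 1:]                # basis of W' = span(e2, e3)

Wpp = intersection(Wp, U)            # W'' = W' ∩ U, should be span(e2)
combined = np.hstack([W, Wpp])       # should be a basis of U
```

Here `combined` has rank $2 = \dim U$ and both summands are $T$-invariant, confirming the direct sum $U = W \oplus W''$ in this example.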
We now proceed by induction on $\dim V$.
For $\dim V = 0$ there is nothing to do.
If $\dim V \geq 1$ then there exists an eigenvector $v_1 \in V$ for $T$ since $k$ is algebraically closed.
Then $\langle v_1 \rangle$ is a $T$-invariant subspace, so there exists a $T$-invariant subspace $W \subseteq V$ such that $V = \langle v_1 \rangle \oplus W$.
Note that $\dim W = \dim V - 1$ since the eigenvector $v_1$ is necessarily non-zero.
The restriction $T|_W$ is also semisimple by the above lemma.
By induction there exists a basis $v_2, \dotsc, v_n$ of $W$ consisting of eigenvectors of $T|_W$.
These are also eigenvectors of $T$, and since $V = \langle v_1 \rangle \oplus W$ we find that $v_1, v_2, \dotsc, v_n$ is a basis of $V$.
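This second proof translates directly into a recursion. The following is again only a sketch, with real symmetric matrices standing in for semisimple operators, so that `eigh` supplies the eigenvector in the induction step and the orthogonal complement serves as the invariant complement $W$.

```python
import numpy as np

def eigenbasis_recursive(T):
    """Mirror of the induction on dim V: split off one eigenvector v_1,
    pass to an invariant complement W, and recurse on the restriction
    T|_W.  Assumes T is real symmetric, hence semisimple."""
    n = T.shape[0]
    if n == 0:                       # base case dim V = 0: nothing to do
        return np.zeros((0, 0))
    _, vecs = np.linalg.eigh(T)
    v1 = vecs[:, [0]]                # eigenvector spanning <v_1>
    Q = vecs[:, 1:]                  # orthonormal basis of a complement W
    S = Q.T @ T @ Q                  # matrix of the restriction T|_W
    return np.hstack([v1, Q @ eigenbasis_recursive(S)])

T = np.array([[4.0, 1.0],
              [1.0, 4.0]])
P = eigenbasis_recursive(T)
D = np.linalg.inv(P) @ T @ P         # diagonal: eigenvalues 3 and 5
```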
Regarding your question:
I understand that we can write $T = \bigoplus_i T_{\lambda_i}$ where $T_{\lambda_i} = T|_{U_{\lambda_i}}$ and $U_{\lambda_i}$ is the eigenspace spanned by one eigenvector, but why does that imply that $T$ is diagonalizable?
I assume that $U$ is the given vector space (which I denoted by $V$) and that $U_{\lambda}$ is the eigenspace of $T$ with respect to the eigenvalue $\lambda$.
In the quoted situation one can pick for each one-dimensional space $U_{\lambda_i}$ some non-zero vector $u_i \in U_{\lambda_i}$.
Since $U_{\lambda_i}$ is one-dimensional we have that $U_{\lambda_i} = \langle u_i \rangle$, so $T u_i \in U_{\lambda_i}$ is a scalar multiple of $u_i$ (from your notation I assume that this scalar should be $\lambda_i$).
The eigenvectors $(u_i)_i$ form a basis since the decomposition $T = \bigoplus_i T_{\lambda_i}$ means that the whole space decomposes as $\bigoplus_i U_{\lambda_i}$.
Note, however, that the above proofs (as well as the one given on Wikipedia, which is essentially the second one) do not give the decomposition $T = \bigoplus_{\lambda \in k} T_\lambda$ where $T_\lambda = T|_{U_\lambda} = \lambda \operatorname{id}_{U_\lambda}$, but instead a decomposition $T = T_1 \oplus \dotsb \oplus T_n$ where $T_i = T|_{U_i}$ for one-dimensional subspaces $U_1, \dotsc, U_n \subseteq V$.
You seem to mix up these two decompositions, but they coincide if and only if every non-trivial eigenspace is one-dimensional, which is not necessarily true.
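To see the two decompositions come apart, consider a hypothetical example with a repeated eigenvalue, say $T = \operatorname{diag}(5, 5, 7)$: the eigenspace decomposition has only two summands (with $U_5$ two-dimensional), while a decomposition into one-dimensional invariant subspaces needs three, and the further splitting $U_5 = \langle u_1 \rangle \oplus \langle u_2 \rangle$ is non-canonical.

```python
import numpy as np

# T has eigenvalue 5 with a two-dimensional eigenspace and eigenvalue 7
# with a one-dimensional one, so V = U_5 ⊕ U_7 has two summands while
# any decomposition into one-dimensional pieces has three.
T = np.diag([5.0, 5.0, 7.0])
vals, vecs = np.linalg.eig(T)

# dimension of each eigenspace = multiplicity of the eigenvalue
eigenspace_dims = {lam: int(np.sum(np.isclose(vals, lam)))
                   for lam in (5.0, 7.0)}
```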
Best Answer
Your proof is basically correct. Since $0$ is not an eigenvalue of $T$ (if this is not obvious to you, you should think about proving it), every eigenvector of $T$ is also an eigenvector of $T^{-1}$: if $Tv = \lambda v$ with $v \neq 0$, then $T^{-1} v = \lambda^{-1} v$. There exists by hypothesis a basis consisting of eigenvectors for $T$, and that basis is also a basis of eigenvectors for $T^{-1}$.
It depends on how diagonalisable was defined for you. For me it is defined as admitting a basis consisting of eigenvectors. You could also define it by saying that the sum of all eigenspaces is the whole space. Then again, since the eigenspaces of $T$ and of $T^{-1}$ are in fact the same, it is obvious that $T$ diagonalisable implies $T^{-1}$ is. This is not really different, but at least the concept of eigenspaces is mentioned.
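The key claim, that an eigenvector of an invertible $T$ with eigenvalue $\lambda$ is an eigenvector of $T^{-1}$ with eigenvalue $\lambda^{-1}$, is easy to sanity-check numerically; the matrix below is just an arbitrary invertible, diagonalizable example.

```python
import numpy as np

T = np.array([[3.0, 1.0],
              [0.0, 2.0]])           # invertible, eigenvalues 3 and 2
Tinv = np.linalg.inv(T)

vals, vecs = np.linalg.eig(T)
for lam, v in zip(vals, vecs.T):
    assert np.allclose(T @ v, lam * v)            # eigenvector of T ...
    assert np.allclose(Tinv @ v, (1 / lam) * v)   # ... with 1/lam for T^{-1}
```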