It seems that you are essentially done. Let
$ \det(sI-A)=a(s)=s^n+a_{n-1}s^{n-1}+ \cdots + a_1 s+a_0 $ be the characteristic polynomial. By assumption, it has exactly $n$ distinct roots, namely $\lambda_1,\dots,\lambda_n$.
Let $ \Lambda = \begin{pmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & \lambda_n
\end{pmatrix}$ be the diagonal matrix whose diagonal entries are the roots of $a(s)$.
We want to show that $ \Lambda^n + a_{n-1}\Lambda^{n-1}+ \cdots + a_0 I = 0. $ Since
$$\Lambda^k = \begin{pmatrix}
\lambda_1^k & 0 & \cdots & 0 \\
0 & \lambda_2^k & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & \lambda_n^k
\end{pmatrix},$$
it follows that $$\Lambda^n + a_{n-1}\Lambda^{n-1}+ \cdots + a_0 I=
\begin{pmatrix}
a(\lambda_1) & 0 & \cdots & 0 \\
0 & a(\lambda_2) & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & a(\lambda_n)
\end{pmatrix}=
\begin{pmatrix}
0 & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & 0
\end{pmatrix}$$
because each $\lambda_i$ is a root of $a(s)$.
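If you want a quick numerical sanity check of this diagonal case, here is a small sketch in Python/NumPy (my own illustration, with arbitrarily chosen eigenvalues; `np.poly` returns the coefficients of the monic polynomial with the given roots):

```python
# Sanity check of the diagonal case: build Lambda from distinct
# eigenvalues and evaluate a(Lambda) term by term.
import numpy as np

lams = np.array([1.0, 2.0, -3.0])        # distinct roots of a(s)
Lam = np.diag(lams)
coeffs = np.poly(lams)                   # [1, a_{n-1}, ..., a_0]

# a(Lambda) = Lambda^n + a_{n-1} Lambda^{n-1} + ... + a_0 I
n = len(lams)
a_of_Lam = sum(c * np.linalg.matrix_power(Lam, n - i)
               for i, c in enumerate(coeffs))

print(np.allclose(a_of_Lam, np.zeros((n, n))))   # True
```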
The second part asks us to prove the theorem for any matrix $A$ similar to $\Lambda$, i.e. that
$$ A^n + a_{n-1}A^{n-1}+ \cdots + a_0 I = 0 .$$
By definition of similarity, $A = T\Lambda T^{-1}$ for some invertible $T$, with $\Lambda$ diagonal as above, and hence
$$ A^k = T\Lambda^k T^{-1} \quad \forall k\in \Bbb N.$$
Finally: $$A^n + a_{n-1}A^{n-1}+\cdots+a_0 I = 0\Longleftrightarrow T \Lambda^n T^{-1} + a_{n-1}T \Lambda^{n-1} T^{-1}+\cdots + a_0 TT^{-1}= 0 $$
$$\Longleftrightarrow T( \Lambda^n + a_{n-1} \Lambda^{n-1} +\cdots + a_0I)T^{-1}= 0 \Longleftrightarrow \Lambda^n + a_{n-1} \Lambda^{n-1} +\cdots + a_0I =T^{-1}0T=0.$$
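The same check can be run numerically for the diagonalizable case (again my own sketch, not part of the original argument; note that `np.poly` applied to a square matrix returns the coefficients of its characteristic polynomial):

```python
# Build A = T Lambda T^{-1} for a random T and check that A
# satisfies its own characteristic polynomial.
import numpy as np

rng = np.random.default_rng(0)
lams = np.array([1.0, 2.0, -3.0])
T = rng.standard_normal((3, 3))          # almost surely invertible
A = T @ np.diag(lams) @ np.linalg.inv(T)

coeffs = np.poly(A)                      # monic char. poly of A
n = A.shape[0]
a_of_A = sum(c * np.linalg.matrix_power(A, n - i)
             for i, c in enumerate(coeffs))
print(np.allclose(a_of_A, 0))            # True (up to round-off)
```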
Your proof for part I is good, in that it's quick and relatively clean. The actual write-up could use a little touching up, but the thrust is good. The only alternate proof I'd suggest is a more general one, since this particular result holds for more than just two vectors. That is, one can show that if we have $m$ eigenvectors in $m$ distinct eigenspaces, then they are automatically linearly independent. This takes more time, so I would probably reach for your argument unless I needed the more general result.
Suppose that $v_1, \ldots, v_m$ are eigenvectors corresponding to distinct eigenvalues $\lambda_1, \ldots, \lambda_m$. We wish to show that $v_1, \ldots, v_m$ are linearly independent, which we can do by induction.
If $m = 1$, then we have a single (non-zero) eigenvector $v_1$, which is linearly independent by itself, so we are done.
Suppose we know that $v_1, \ldots, v_k$ are linearly independent for some $1 \le k < m$. Then the only way $v_1, \ldots, v_{k+1}$ can be linearly dependent is if $v_{k+1} \in \operatorname{span}\{v_1,\ldots, v_k\}$, i.e.
$$v_{k+1} = a_1 v_1 + \ldots + a_k v_k$$
for some scalars $a_1, \ldots, a_k$. Now, apply $T - \lambda_{k+1} I$ to both sides (note: it annihilates the left-hand side, since $(T - \lambda_{k+1} I)v_{k+1} = Tv_{k+1} - \lambda_{k+1} v_{k+1} = 0$). We get:
\begin{align*}
0 &= T(a_1 v_1) - a_1 \lambda_{k+1} v_1 + \ldots + T(a_k v_k) - a_k \lambda_{k+1} v_k \\
&= a_1 \lambda_1 v_1 - a_1 \lambda_{k+1} v_1 + \ldots + a_k \lambda_k v_k - a_k \lambda_{k+1} v_k \\
&= a_1(\lambda_1 - \lambda_{k+1}) v_1 + \ldots + a_k (\lambda_k - \lambda_{k+1}) v_k.
\end{align*}
This is a linear combination of the linearly independent $v_1, \ldots, v_k$, so we must have
$$a_1(\lambda_1 - \lambda_{k+1}) = \ldots = a_k(\lambda_k - \lambda_{k+1}) = 0.$$
But, the eigenvalues are distinct, so we can divide through by $\lambda_i - \lambda_{k+1} \neq 0$, giving us
$$a_1 = \ldots = a_k = 0,$$
which in turn implies that $v_{k+1} = 0$, contradicting the fact that $v_{k+1}$ is an eigenvector. So $v_{k+1} \notin \operatorname{span}\{v_1, \ldots, v_k\}$, and hence $v_1, \ldots, v_{k+1}$ are also linearly independent.
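As a numerical illustration of the result (a sketch with a matrix I picked for convenience, not a substitute for the proof): for a matrix with distinct eigenvalues, the matrix whose columns are the eigenvectors has full rank, i.e. the eigenvectors are linearly independent.

```python
# Eigenvectors for distinct eigenvalues form a full-rank matrix.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])          # eigenvalues 2, 3, 5 (distinct)
eigvals, eigvecs = np.linalg.eig(A)      # columns of eigvecs are v_1..v_n

print(np.linalg.matrix_rank(eigvecs))    # 3: full rank, so independent
```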
As you can see, it's a lot more work! But it's worth it, if you care about this result in a more general setting.
For part II, I would simply conclude that $T(v_1)$ and $T(v_2)$ are also eigenvectors corresponding to $\lambda_1$ and $\lambda_2$ respectively (as they are just non-zero multiples of the original eigenvectors). So, the result from part I still applies, and $T(v_1), T(v_2)$ are linearly independent.
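Here is a small check of that observation (my own illustration with a concrete diagonal matrix; note that it relies on the eigenvalue being non-zero, as in the part II setup):

```python
# If v is an eigenvector of A with eigenvalue lam != 0, then A v is
# again an eigenvector for lam, since A(Av) = lam (Av).
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
v1 = np.array([1.0, 0.0])                # eigenvector for lam = 2
w1 = A @ v1                              # T(v1) in the notation above
print(np.allclose(A @ w1, 2 * w1))       # True: w1 is an eigenvector
```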
If $A$ is an $n \times n$ matrix, the only way its eigenvalues could fail to be unique is if they depended on the choice of basis. So let $B := U^{-1} A U$, where $U$ is an invertible matrix; for the uniqueness of the eigenvalues, see: Similar matrices have the same eigenvalues with the same geometric multiplicity.
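A quick numerical sketch of the cited fact (my own check, not taken from the linked answer): $B = U^{-1} A U$ has the same eigenvalues as $A$, so the spectrum does not depend on the choice of basis.

```python
# Similar matrices share the same eigenvalues.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
U = rng.standard_normal((4, 4))          # almost surely invertible
B = np.linalg.inv(U) @ A @ U

print(np.allclose(np.sort_complex(np.linalg.eig(A)[0]),
                  np.sort_complex(np.linalg.eig(B)[0])))   # True
```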