Since all $v\in V$ are eigenvectors, we can choose $e_i$, the $i$th unit vector. Then by assumption we have $T e_i = \lambda_i e_i$ for some $\lambda_i$. It follows that $T$ is diagonal, with elements $\lambda_1,...,\lambda_n$ on the diagonal.
Now choose $v=e_1+...+e_n$. Again, by assumption, $Tv=\lambda v$ for some $\lambda$, so we have
$$T v = T(e_1+...+e_n) = \lambda_1 e_1 +... + \lambda_n e_n = \lambda (e_1+...+e_n).$$
Since the $e_i$ are linearly independent, it follows that $\lambda = \lambda_1 = ... =
\lambda_n$. Hence $Tx = \lambda x$, $\forall x$.
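A quick numerical sketch of this argument (using NumPy): if $T$ were diagonal with *distinct* entries, the vector $v=e_1+\dots+e_n$ would fail to be an eigenvector, so "every vector is an eigenvector" forces all the $\lambda_i$ to agree.

```python
import numpy as np

# Diagonal T with distinct eigenvalues: v = e_1 + e_2 + e_3 is NOT an eigenvector.
T = np.diag([1.0, 2.0, 3.0])
v = np.ones(3)                         # v = e_1 + e_2 + e_3
Tv = T @ v                             # = (1, 2, 3)

# Tv is a scalar multiple of v iff all componentwise ratios Tv_i / v_i agree.
ratios = Tv / v
print(np.allclose(ratios, ratios[0]))  # False: v is not an eigenvector

# With equal eigenvalues, T = lambda * I and every vector is an eigenvector.
S = np.diag([2.0, 2.0, 2.0])
Sv = S @ v
print(np.allclose(Sv, 2.0 * v))        # True
```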
There is a slight problem with notation, in that you are writing $\lambda_1\tilde{\lambda_2}\otimes (e_1,e_2)$ when you should be writing $\lambda_1\tilde{\lambda_2}(e_1\otimes e_2)$, etc. However, this is easy to fix without fundamentally changing the proof.
Here is a higher level approach that relies on properties of tensor products, and has some connections with some ideas you will see later.
Suppose that $e_1\otimes e_2 + e_2\otimes e_1=v\otimes w$ for some vectors $v,w\in V$. Then for any bilinear map $\psi$, we must have $\psi(v,w)=\psi(e_1,e_2)+\psi(e_2,e_1)$. So all we must do is find the right bilinear maps to get a contradictory set of conditions on $v$ and $w$.
Fix some projection map $P:V\to \mathbb F^2$ with $P(e_1)=e_1, P(e_2)=e_2$. Then $P\otimes P:V\otimes V\to \mathbb F^2\otimes \mathbb F^2$, and so $P(v)\otimes P(w)=e_1\otimes e_2 + e_2\otimes e_1$. Therefore, we can reduce to the case that $V$ is $\mathbb F^2$.
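This reduction can be checked concretely. Under the standard identification of $x\otimes y$ with the outer product $xy^T$, the map $P\otimes P$ acts on a matrix $M$ by $M\mapsto PMP^T$; here is a sketch with $V=\mathbb F^3$ (the choice of $\mathbb F^3$ and of $P$ as coordinate projection are illustrative assumptions):

```python
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

# P : F^3 -> F^2, projection onto the first two coordinates,
# so P(e1) = e1 and P(e2) = e2.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# e1 ⊗ e2 + e2 ⊗ e1 inside F^3 ⊗ F^3, as a matrix via x ⊗ y ↦ x y^T.
M = np.outer(e1, e2) + np.outer(e2, e1)

# (P ⊗ P)(M) = P M P^T lands in F^2 ⊗ F^2 ...
reduced = P @ M @ P.T

# ... and equals e1 ⊗ e2 + e2 ⊗ e1 there.
target = np.array([[0.0, 1.0],
                   [1.0, 0.0]])
print(np.allclose(reduced, target))   # True
```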
We could use bilinear maps to pick off the coefficients and obtain a contradiction that way, mimicking the original proof. However, for variety's sake, here is another more geometric approach.
Consider the bilinear map $(v,w)\mapsto \det(v|w)$ where $(v|w)$ is the matrix with columns $v, w$. Then since $\det(e_1|e_2)=-\det(e_2|e_1)$, we must have $\det(v|w)=0$, and so $v, w$ are linearly dependent, so $w=\alpha v$ for some $\alpha \in \mathbb F^{\times}$. In higher level language, what we have shown is a shadow of the fact that since $e_1\wedge e_2+e_2\wedge e_1=0$, then $v\wedge w=0$, and so $v$ and $w$ are linearly dependent. This more general fact doesn't require using the projection to two dimensions.
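The determinant computation above can be written out directly for $\mathbb F^2$; a small sketch:

```python
import numpy as np

# The bilinear map psi(v, w) = det(v | w), where (v | w) is the
# matrix with columns v and w.
def psi(v, w):
    return np.linalg.det(np.column_stack([v, w]))

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# psi(e1, e2) + psi(e2, e1) = 1 + (-1) = 0, so any simple tensor
# v ⊗ w equal to e1⊗e2 + e2⊗e1 must satisfy det(v | w) = 0,
# i.e. v and w are linearly dependent.
print(psi(e1, e2) + psi(e2, e1))   # 0.0
```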
Define the map $S_c$ by $S_c(e_1)=c e_2,\ S_c(e_2)=e_1$ for $c\in \mathbb F$. By taking $\phi(v,w)=\det(v|S_c(w))$ and noting that $\det(e_1|e_1)+\det(e_2|c e_2)=0$, the exact same argument as above shows that $v$ and $S_c(w)$ are linearly dependent. This means that $w$ and $S_c(w)$ are also dependent (since $v\neq 0$), so $w$ is an eigenvector of $S_c$ for every $c$. But this is impossible: writing $w=(a,b)$, dependence of $w$ and $S_c(w)=(b,ca)$ means $ca^2-b^2=0$, which cannot hold for every $c$ unless $w=0$.
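A numerical sketch of this last step: in coordinates $S_c$ has columns $(0,c)$ and $(1,0)$, and no fixed nonzero $w$ is an eigenvector of $S_c$ for every $c$. For instance, $w=(1,1)$ works for $c=1$ but fails for $c=4$ (the specific values are illustrative):

```python
import numpy as np

# S_c(e1) = c*e2, S_c(e2) = e1, i.e. the matrix with columns (0, c), (1, 0).
def S(c):
    return np.array([[0.0, 1.0],
                     [c,   0.0]])

def is_eigenvector(A, w):
    # w is an eigenvector of A iff A @ w is parallel to w,
    # i.e. the matrix (w | A @ w) is singular.
    return np.isclose(np.linalg.det(np.column_stack([w, A @ w])), 0.0)

w = np.array([1.0, 1.0])
print(is_eigenvector(S(1.0), w))   # True:  S_1 w = (1, 1) is parallel to w
print(is_eigenvector(S(4.0), w))   # False: S_4 w = (1, 4) is not
```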
What you have done is correct. More simply, the kernel of $f$ has codimension $1$ unless $f \equiv 0$. So any $V$ whose codimension is more than $1$ would give a counterexample.
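The codimension claim can be checked numerically via rank-nullity; here is a sketch for a hypothetical nonzero functional $f(x)=x_1+2x_2+3x_3$ on $\mathbb R^3$ (the functional and the ambient space are illustrative assumptions):

```python
import numpy as np

# f as a 1x3 matrix; nonzero, so it has rank 1.
f = np.array([[1.0, 2.0, 3.0]])

rank = np.linalg.matrix_rank(f)
kernel_dim = 3 - rank              # rank-nullity: dim ker f = dim V - rank f
print(kernel_dim)                  # 2, so the kernel has codimension 3 - 2 = 1
```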