> That is a false assumption, since an $n \times n$ (square) matrix needs to have at least $n$ different eigenvalues (to make eigenvectors from).
No, they need not be different, as you yourself have noticed (the identity matrix example), and as others have also pointed out.
You can write each matrix in the Jordan normal form:
$$A = S J S^{-1}, \quad J = \operatorname{diag}(J_1, J_2, \dots, J_m),$$
where
$$J_k = \begin{bmatrix} \lambda_k & 1 & & \\ & \lambda_k & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_k \end{bmatrix}.$$
A matrix $A$ is diagonalizable if and only if all $J_k$ are of order $1$, i.e., $J_k = \begin{bmatrix} \lambda_k \end{bmatrix}$. In other words, $m = n$ and
$$J = \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_n).$$
Now, if $A$ has only one eigenvalue, that means that $\lambda := \lambda_1 = \lambda_2 = \cdots = \lambda_n$, so
$$J = \operatorname{diag}(\lambda, \lambda, \dots, \lambda) = \lambda \operatorname{diag}(1, 1, \dots, 1) = \lambda {\rm I}.$$
Now, let us get back to $A$:
$$A = S J S^{-1} = S (\lambda {\rm I}) S^{-1} = \lambda S S^{-1} = \lambda {\rm I}.$$
So, $A$ is diagonalizable with only one eigenvalue if and only if it is a scalar matrix.
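To make the conjugation step concrete, here is a small numerical sketch (using NumPy; the particular $S$ and $\lambda$ below are arbitrary choices for illustration): conjugating a scalar matrix by any invertible $S$ gives back the same scalar matrix.

```python
import numpy as np

# An arbitrary invertible S and a scalar Jordan form J = 3*I.
S = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # det = 1, so S is invertible
lam = 3.0
J = lam * np.eye(2)

# Conjugating a scalar matrix changes nothing: S (3I) S^{-1} = 3I.
A = S @ J @ np.linalg.inv(S)
print(np.allclose(A, lam * np.eye(2)))  # True
```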
However, not every matrix with only one eigenvalue is diagonalizable. Just put $\lambda := \lambda_1 = \lambda_2 = \cdots = \lambda_m$, but let some of the $J_k$ be of order strictly greater than $1$. For example,
$$B = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$$
is not diagonalizable.
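We can observe this numerically as well (a sketch, not a proof: floating-point eigendecomposition only suggests what the algebra above guarantees). NumPy reports the repeated eigenvalue $1$, and the matrix of eigenvectors it returns is numerically of rank $1$, so its columns cannot serve as the invertible $S$ in $A = SJS^{-1}$:

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(B)
print(eigenvalues)                          # [1. 1.] -- a single repeated eigenvalue
print(np.linalg.matrix_rank(eigenvectors))  # 1: the eigenvectors do not span R^2
```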
The method posted in the comment above gives an easy answer: if
$A = S D S^{-1}$,
then
$A^{-1} = (SDS^{-1})^{-1} = (S^{-1})^{-1} D^{-1} S^{-1} = S D^{-1} S^{-1}$.
Since $A$ and $S$ are invertible, so is $D = \operatorname{diag}(\lambda_1,\ldots,\lambda_n)$, and then $D^{-1} = \operatorname{diag}(\lambda_1^{-1},\ldots,\lambda_n^{-1})$. So $A^{-1}$ is diagonalizable. The other direction is, according to taste, either entirely similar or actually deduced from what we already did by substituting $A^{-1}$ for $A$.
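Here is a quick numerical illustration of this calculation (the matrix $A$ below is just an arbitrary invertible example, not anything from the question):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # invertible, with eigenvalues 2 and 3

eigvals, S = np.linalg.eig(A)        # columns of S are eigenvectors of A
D_inv = np.diag(1.0 / eigvals)       # D^{-1} = diag(1/lambda_1, ..., 1/lambda_n)

# The same S diagonalizes A^{-1}: A^{-1} = S D^{-1} S^{-1}.
print(np.allclose(np.linalg.inv(A), S @ D_inv @ np.linalg.inv(S)))  # True
```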
However, I want to answer the question in a slightly less easy way but which gives more. The calculation actually shows that the same matrix $S$ diagonalizes both $A$ and $A^{-1}$. (One says that $A$ and $A^{-1}$ are simultaneously diagonalizable.) What does that mean? A matrix $S$ diagonalizes a matrix $A$ if and only if the columns of $S$ are eigenvectors for $A$, so we're seeing that $A$ and $A^{-1}$ admit a common basis of eigenvectors. But that suggests an even stronger fact.
Proposition: Let $A$ be any invertible matrix.
a) A nonzero vector $v \in V$ is an eigenvector for $A$ if and only if it is an eigenvector for $A^{-1}$.
b) More precisely, zero is not an eigenvalue of either $A$ or $A^{-1}$ because they are invertible, and for any nonzero $\lambda$, the $\lambda$-eigenspace for $A$ is the $\lambda^{-1}$-eigenspace for $A^{-1}$. (Indeed, applying $A^{-1}$ to both sides of $Av = \lambda v$ and dividing by $\lambda$ gives $A^{-1}v = \lambda^{-1}v$, and the argument is symmetric in $A$ and $A^{-1}$.)
In particular the sum of the dimensions of the eigenspaces for $A$ is always equal to the sum of the dimensions of the eigenspaces for $A^{-1}$. The case where these dimensions sum to $n$ recovers the case asked by the OP.
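For those who like to experiment, the correspondence in the proposition is easy to check numerically on a sample matrix (again just a sketch; the matrix $A$ is an arbitrary invertible example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])           # invertible, with eigenvalues 5 and 2

eigvals_A, eigvecs_A = np.linalg.eig(A)
eigvals_inv, _ = np.linalg.eig(np.linalg.inv(A))

# The eigenvalues of A^{-1} are the reciprocals of those of A...
print(np.allclose(np.sort(eigvals_inv), np.sort(1.0 / eigvals_A)))  # True

# ...and each eigenvector v of A is an eigenvector of A^{-1} with eigenvalue 1/lambda.
for lam, v in zip(eigvals_A, eigvecs_A.T):
    print(np.allclose(np.linalg.inv(A) @ v, v / lam))  # True, True
```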
Best Answer
If $A \neq I$, then there is a vector $v$ such that $v \neq Av$, which is to say $v-Av \neq 0$. But then, using the hypothesis $A^2 = I$, we have $$ A(v-Av) = Av - A^2v = Av - Iv = Av - v = -(v-Av), $$ which means that $v-Av$ is a non-zero eigenvector of $A$ with eigenvalue $-1$. Thus we have proven $$ A \neq I \implies A\text{ has } -1\text{ as an eigenvalue}, $$ which is the contrapositive of what we were asked to prove, and we are therefore done.
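As a sanity check of the construction (under the hypothesis $A^2 = I$ that the proof uses), take any involution other than the identity, say a reflection; the vector $v - Av$ from the proof is then an eigenvector with eigenvalue $-1$:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # a reflection: A^2 = I, but A != I

v = np.array([1.0, 0.0])
w = v - A @ v                # the vector from the proof; here w = (1, -1)

print(np.allclose(A @ A, np.eye(2)))  # True: A is an involution
print(np.allclose(A @ w, -w))         # True: w is an eigenvector with eigenvalue -1
```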