The matrix $AMA^{-1}$ is similar to $M$ and therefore has the same eigenvalues; since $AMA^{-1}=M^2$ by hypothesis, the matrices $M$ and $M^2$ have the same eigenvalues. In particular, they have the same nonzero eigenvalues. Therefore, if $\lambda$ is a nonzero eigenvalue of $M$, then so is $\lambda^2$. But then so is $\lambda^4$, and so on. On the other hand, $M$ has only finitely many eigenvalues. Therefore $\lambda^{2^k}=\lambda^{2^l}$ for some $k,l\in\mathbb{N}$ with $k>l$. So $\lambda^{2^l}(\lambda^{2^k-2^l}-1)=0$. But $\lambda^{2^l}\neq0$, and therefore $\lambda^{2^k-2^l}=1$.
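As a concrete sanity check (the pair $(A, M)$ below is made up for illustration, assuming the hypothesis is $AMA^{-1}=M^2$), take $M$ diagonal with the cube roots of unity; conjugating by the permutation that swaps the last two coordinates squares each diagonal entry, and every eigenvalue indeed satisfies $\lambda^{2^2-2^0}=\lambda^3=1$:

```python
import numpy as np

# M is diagonal with the cube roots of unity; A swaps coordinates 2 and 3,
# and conjugation by A squares each diagonal entry, so A M A^{-1} = M^2.
w = np.exp(2j * np.pi / 3)              # primitive cube root of unity
M = np.diag([1, w, w**2])
A = np.array([[1, 0, 0],
              [0, 0, 1],
              [0, 1, 0]], dtype=complex)

# The hypothesis A M A^{-1} = M^2 holds for this pair.
assert np.allclose(A @ M @ np.linalg.inv(A), M @ M)

# Similar matrices have the same eigenvalues, so M and M^2 share spectra.
eig_M  = np.sort_complex(np.linalg.eigvals(M))
eig_M2 = np.sort_complex(np.linalg.eigvals(M @ M))
assert np.allclose(eig_M, eig_M2)

# Conclusion of the argument (here with k = 2, l = 0): every nonzero
# eigenvalue satisfies lambda^(2^2 - 2^0) = lambda^3 = 1.
assert np.allclose(eig_M**3, 1)
```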
I don't think a convention is well established: in some contexts, I see "different eigenvalues" refer to the set of distinct values with their associated algebraic multiplicities, while in other contexts, I see it refer to the list of $n$ eigenvalues, possibly with repetitions due to multiplicity. Typically one can discern from context which convention is being used; failing that, the author should take care to clarify what is meant.
In your case, I think you just have to read the definition of "dominant eigenvalue" carefully. Based on the problem writing "dominant eigenvalue $\lambda_1$," I suspect the definition is written as
if $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $A$, then $\lambda_1$ is considered dominant if $|\lambda_1| > |\lambda_i|$ for all $i \ne 1$
or something like that, which is unambiguous compared to
$|\lambda| > |\gamma|$ for all other eigenvalues $\gamma$
which is very ambiguous for the reasons you raise.
Now that we know that the context of your question is the power method, my guess above about what "dominant eigenvalue" means turns out to be incorrect.
Let $\lambda_1, \ldots, \lambda_m$ be the distinct eigenvalues of $A$ with multiplicities $n_1, \ldots, n_m$. If $|\lambda_1| > |\lambda_i|$ for all $i \ne 1$, then $\lambda_1$ is said to be the dominant eigenvalue. The power method will converge to something in the eigenspace corresponding to $\lambda_1$. To ensure that it does not converge to zero, the initial vector must not be orthogonal to the eigenspace.
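A minimal sketch of the power method described above (the function name and the test matrix are illustrative, not from the question): the Rayleigh quotient of the iterates converges to the dominant eigenvalue, and the caveat about the initial vector is visible when we start exactly orthogonal to the dominant eigenspace.

```python
import numpy as np

def power_method(A, x0, iters=200):
    """Iterate x -> Ax / ||Ax||.  For a symmetric A with a dominant
    eigenvalue, the Rayleigh quotient x^T A x converges to it, provided
    x0 is not orthogonal to the dominant eigenspace."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = A @ x
        x = y / np.linalg.norm(y)
    return x @ A @ x, x   # eigenvalue estimate, eigenvector estimate

# Example: dominant eigenvalue 3 (eigenvector e1), other eigenvalue 1.
A = np.array([[3.0, 0.0],
              [0.0, 1.0]])
lam, v = power_method(A, np.array([1.0, 1.0]))
assert np.isclose(lam, 3.0)
assert np.allclose(np.abs(v), [1.0, 0.0])

# Starting orthogonal to the dominant eigenspace, the iteration never
# picks up an e1 component and converges to the other eigenvalue instead.
lam_bad, _ = power_method(A, np.array([0.0, 1.0]))
assert np.isclose(lam_bad, 1.0)
```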
Best Answer
I'd go for a direct approach:
Choose a basis $\{w_1, \dots ,w_k\}$ of $\text{im}(f)$ and extend this to a basis $\beta=\{w_1, \dots, w_k,v_{k+1}, \dots ,v_n\}$ of $V$. Write $A$ for the matrix of $f$ w.r.t. the basis $\beta$. Clearly $A$ is of the form $$\begin{pmatrix}*&*&\dots &*&\bullet&\dots &\bullet\\*&*&\dots &*&\bullet&\dots &\bullet\\ \vdots&\vdots& & \vdots&\vdots&&\vdots\\*&*&\dots &*&\bullet&\dots &\bullet\\0&0&\dots &0&0&\dots&0\\\vdots&\vdots& & \vdots&\vdots&&\vdots\\0&0&\dots &0&0&\dots&0 \end{pmatrix}$$ The result follows by examining this shape.
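The construction can be checked numerically (all specific matrices below are made up for illustration): take a rank-$2$ map $f$ on $\mathbb{R}^4$, use a basis of $\text{im}(f)$ as the first two vectors of $\beta$, and the change of basis produces exactly the shape above, with the last $n-k$ rows zero.

```python
import numpy as np

# f has rank 2 on R^4: B = W @ C with C of full row rank, so the image
# of f is spanned by the columns w1, w2 of W.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.0, 1.0]])
C = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0]])
B = W @ C                      # matrix of f in the standard basis

# Extend {w1, w2} to a basis beta = {w1, w2, e3, e4} of R^4;
# P has the beta vectors as its columns.
P = np.column_stack([W[:, 0], W[:, 1],
                     np.array([0.0, 0.0, 1.0, 0.0]),
                     np.array([0.0, 0.0, 0.0, 1.0])])
A_beta = np.linalg.inv(P) @ B @ P   # matrix of f w.r.t. beta

# Since im(f) lies in span{w1, w2}, every column of A_beta has zero
# coordinates past the first k = 2: the last n - k rows vanish.
assert np.allclose(A_beta[2:, :], 0)
```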