As anon points out, try to use $\overline{z\cdot w} = \overline{z}\cdot \overline{w}$, and simplify your notation.
For instance, you could just write vectors in $\mathbb{C}^2$ like this: $(z_1, z_2)$, with $z_1, z_2 \in \mathbb{C}$.
Second, I would try to convince myself that this product/conjugation rule also works for products of matrices and vectors. That is:
$$
\overline{A\cdot v} = \overline{A}\cdot \overline{v} \ ,
$$
for, say, $A$ a $2\times 2$ complex matrix and $v\in \mathbb{C}^2$. What would happen if $A$ had real coefficients?
Finally, I would write the equality I already know, namely
$$
A \cdot Y_0 = \lambda Y_0 \ .
$$
Staring at it should force inspiration to come. :-)
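If it helps, the conjugation rule from the second hint can be spot-checked numerically. This is an illustrative sketch (not part of the original hints), using NumPy with randomly generated matrices:

```python
import numpy as np

# Check conj(A @ v) == conj(A) @ conj(v) for a random complex
# 2x2 matrix A and vector v in C^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)

assert np.allclose(np.conj(A @ v), np.conj(A) @ np.conj(v))

# If A is real, conj(A) == A, so conjugation passes straight through:
# conj(A @ v) == A @ conj(v).
A_real = rng.standard_normal((2, 2))
assert np.allclose(np.conj(A_real @ v), A_real @ np.conj(v))
```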
I don't think there is a well-established convention: in some contexts, "different eigenvalues" refers to the set of distinct values together with their associated algebraic multiplicities, while in others it refers to the list of all $n$ eigenvalues, possibly with repetitions due to multiplicity. Usually one can discern which convention is being used; otherwise, the author should take care to clarify what is meant.
In your case, I think you just have to read the definition of "dominant eigenvalue" carefully. Since the problem writes "dominant eigenvalue $\lambda_1$," I suspect the definition is stated as
if $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $A$, then $\lambda_1$ is considered dominant if $|\lambda_1| > |\lambda_i|$ for all $i \ne 1$
or something like that, which is unambiguous compared to
$|\lambda| > |\gamma|$ for all other eigenvalues $\gamma$
which is very ambiguous for the reasons you raise.
Now that we know the context of your question is the power method, my guess above about what "dominant eigenvalue" means is incorrect.
Let $\lambda_1, \ldots, \lambda_m$ be the distinct eigenvalues of $A$ with multiplicities $n_1, \ldots, n_m$. If $|\lambda_1| > |\lambda_i|$ for all $i \ne 1$, then $\lambda_1$ is said to be the dominant eigenvalue. The power method will converge to something in the eigenspace corresponding to $\lambda_1$. To ensure that it does not converge to zero, the initial vector must not be orthogonal to the eigenspace.
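To make this concrete, here is a minimal power-method sketch (illustrative, with a matrix and starting vectors of my own choosing, not from the question). It also shows the failure mode when the initial vector is orthogonal to the dominant eigenspace:

```python
import numpy as np

# A has dominant eigenvalue 3 (eigenvector e1) and eigenvalue 1 (e2).
A = np.array([[3.0, 0.0],
              [0.0, 1.0]])

# Starting vector with a nonzero component in the dominant eigenspace.
x = np.array([1.0, 1.0])
for _ in range(50):
    x = A @ x
    x = x / np.linalg.norm(x)

# The Rayleigh quotient now approximates the dominant eigenvalue 3.
assert abs(x @ A @ x - 3.0) < 1e-8

# A starting vector orthogonal to the dominant eigenspace gets stuck:
y = np.array([0.0, 1.0])
for _ in range(50):
    y = A @ y
    y = y / np.linalg.norm(y)
assert abs(y @ A @ y - 1.0) < 1e-12  # converged to the wrong eigenvalue
```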
Best Answer
Not true in general. For example, if $A=\begin{pmatrix}1&1\end{pmatrix}$ then:
$$AA^T=\begin{pmatrix}1&1\end{pmatrix}\begin{pmatrix}1\\1\end{pmatrix}=\begin{pmatrix}2\end{pmatrix}$$ has eigenvalue $2$ and
$$A^TA=\begin{pmatrix}1\\1\end{pmatrix}\begin{pmatrix}1&1\end{pmatrix}=\begin{pmatrix}1&1\\1&1\end{pmatrix}$$
has eigenvalues $0$ and $2$.
It is true that they share the same non-zero eigenvalues.
More generally, if $A$ is an $m\times n$ matrix, and $B$ is an $n\times m$ matrix, then the non-zero eigenvalues of $AB$ are the same as the non-zero eigenvalues of $BA.$
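The counterexample above is easy to confirm numerically. A quick NumPy check (an illustrative sketch, not part of the answer):

```python
import numpy as np

# The example from the answer: A = (1 1) as a 1x2 matrix.
A = np.array([[1.0, 1.0]])

# A A^T is 1x1 with eigenvalue 2; A^T A is 2x2 with eigenvalues 0 and 2.
# eigvalsh is used since both products are symmetric; it returns
# eigenvalues sorted in ascending order.
assert np.allclose(np.linalg.eigvalsh(A @ A.T), [2.0])
assert np.allclose(np.linalg.eigvalsh(A.T @ A), [0.0, 2.0])
```

So the spectra differ only by the extra zero eigenvalue, matching the general statement about $AB$ and $BA$.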
A fun approach uses minimal polynomials.
First, note that $B(AB)^k=(BA)^kB$ and $A(BA)^k=(AB)^kA$.
Thus, for any polynomial, $f$, we have $Bf(AB)=f(BA)B$ and $Af(BA)=f(AB)A.$
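This intertwining identity can be spot-checked numerically. Here is a sketch (my own construction, with random rectangular matrices and an arbitrary polynomial $f$):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))

def poly_eval(M, coeffs):
    """Evaluate sum_k coeffs[k] * M**k for a square matrix M."""
    out = np.zeros_like(M)
    P = np.eye(M.shape[0])
    for c in coeffs:
        out = out + c * P
        P = P @ M
    return out

coeffs = [1.0, -2.0, 0.5]  # f(x) = 1 - 2x + 0.5 x^2

# B f(AB) = f(BA) B  and  A f(BA) = f(AB) A
assert np.allclose(B @ poly_eval(A @ B, coeffs), poly_eval(B @ A, coeffs) @ B)
assert np.allclose(A @ poly_eval(B @ A, coeffs), poly_eval(A @ B, coeffs) @ A)
```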
Let $p$ be the minimal polynomial for $AB$ and $q$ be the minimal polynomial for $BA$.
Then $ABq(AB)=Aq(BA)B=0$, so $xq(x)$ annihilates $AB$, and similarly $xp(x)$ annihilates $BA$.
Since the minimal polynomial divides every annihilating polynomial, this means that $p(x)\mid xq(x)$ and $q(x)\mid xp(x)$, so $p(x)$ and $q(x)$ share the same non-zero roots, and hence $BA$ and $AB$ share the same non-zero eigenvalues.
This actually says a little more than that they have the same non-zero eigenvalues: it also means that the multiplicities of the non-zero roots of $p(x)$ and $q(x)$ agree.