Let's break this down (I'm going for more of an intuitive answer here):
When you multiply a matrix $M$ and a vector $v_i$, you take each row of the matrix and do an inner product with the vector to get the elements of the resulting vector $v_o$ -- so row 1 of $M$ times $v_i$ gets you the first element in $v_o$, row 2 of $M$ gets you the second element of $v_o$, etc.
If $v_i$ is an eigenvector of $M$, then multiplying them will give you a $v_o$ that is effectively $v_i$ times a constant, where the constant is the eigenvalue corresponding to the eigenvector.
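The row-by-row picture and the eigenvector property can both be checked numerically. Here is a small sketch with NumPy, using a hypothetical $2\times 2$ matrix chosen for illustration:

```python
import numpy as np

# A hypothetical symmetric 2x2 matrix, chosen just for illustration
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

v_i = np.array([1.0, 1.0])  # an eigenvector of M, with eigenvalue 3

# Each element of the output is a row of M dotted with v_i
v_o = np.array([M[0] @ v_i, M[1] @ v_i])
print(v_o)                          # [3. 3.]

# Same result as the usual matrix-vector product,
# and equal to the eigenvalue times the input vector
print(np.allclose(v_o, M @ v_i))    # True
print(np.allclose(v_o, 3.0 * v_i))  # True
```

Every component of $v_o$ came from a different row of $M$, yet all of them are the same multiple (here $3$) of the corresponding component of $v_i$.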
Since the whole output vector $v_o$ is a scalar multiple of the input vector $v_i$, the eigenvalue cannot be attributed to any specific row of $M$: each element of $v_o$ "interacted" with only a single row of $M$ -- a different row for each element -- during the multiplication, yet every element of $v_o$ ended up scaled by the same (eigen)value relative to the corresponding element of $v_i$.
As for the columns, every column of $M$ affects every element of the output vector, so how do we separate out the distinct eigenvalues?
The whole of $M$ can be seen as a transformation. For certain vectors, the transformation only ends up scaling that vector -- this is just a result of applying a particular transformation to a particular vector.
To summarize the comments, none of these statements is true. An easy sufficient condition for this limit to exist is that $A$ is diagonalizable and all of its eigenvalues lie in the half-open interval $(-1, 1]$, and a necessary condition is that the eigenvalues of $A$ are either equal to $1$ or strictly less than $1$ in absolute value. It's a bit tricky to say what happens if $A$ isn't diagonalizable.
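Both halves of this can be illustrated numerically. A minimal sketch with diagonal matrices (chosen arbitrarily; diagonal matrices are trivially diagonalizable, with the diagonal entries as eigenvalues):

```python
import numpy as np

# Eigenvalues 1 and 0.5, both in (-1, 1]: the powers A^n converge
A = np.diag([1.0, 0.5])
A50 = np.linalg.matrix_power(A, 50)
print(A50)  # very close to diag(1, 0)

# Eigenvalue -1 violates the necessary condition: B^n oscillates, no limit
B = np.diag([1.0, -1.0])
print(np.linalg.matrix_power(B, 49))  # diag(1, -1)
print(np.linalg.matrix_power(B, 50))  # diag(1, 1) -- alternates forever
```

The $0.5$ eigenvalue decays to $0$ and the $1$ stays fixed, so the limit exists; the $-1$ eigenvalue flips sign with each power, so no limit exists even though $|{-1}| = 1$.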
People write all sorts of things in notes; they are just documents that someone decided to put up on the internet, not authoritative references. They can easily be very sloppy, especially if they're e.g. about math but not written by mathematicians (physicists, for example). There may also be missing hypotheses, you may have misread them, etc. All sorts of things.
$A$ has the same eigenvalues as its transpose, which I will denote $B=(b_{ij})$. For $B$, the hypothesis means that $\sum_{j=1}^n|b_{ij}|\leq 1$ for all $i$. If $x\neq 0$ is an eigenvector for $\lambda$, and $i$ is such that $|x_i|=\lVert x\rVert_{\infty}$, then we have $\sum_{j=1}^nb_{ij}x_j=\lambda x_i$, hence $\sum_{j\neq i}b_{ij}x_j=(\lambda-b_{ii})x_i$ and $$\lVert x\rVert_{\infty}\,|\lambda-b_{ii}|\leq \sum_{j\neq i}|b_{ij}|\cdot |x_j|\leq \sum_{j\neq i}|b_{ij}|\cdot\lVert x\rVert_{\infty}\leq (1-|b_{ii}|)\cdot\lVert x\rVert_{\infty}.$$ As $x\neq 0$, we get $|\lambda-b_{ii}|\leq 1-|b_{ii}|$, hence $|\lambda|\leq 1$.
(This is essentially the proof of the Gershgorin circle theorem.)
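The bound can be sanity-checked numerically. A sketch with NumPy, using a random matrix normalized (my choice of construction, for illustration) so that each column's absolute values sum to exactly $1$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 4x4 matrix, then scale each column so its absolute values sum to 1
A = rng.uniform(-1, 1, size=(4, 4))
A = A / np.abs(A).sum(axis=0)

# Sanity check: every column's absolute-value sum is now 1
print(np.allclose(np.abs(A).sum(axis=0), 1.0))  # True

# The argument above (applied to A via its transpose) gives |lambda| <= 1
spectral_radius = np.max(np.abs(np.linalg.eigvals(A)))
print(spectral_radius <= 1 + 1e-12)  # True
```

Note that the eigenvalue bound holds exactly by the proof; the small tolerance only guards against floating-point roundoff.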