Well, if we limit the discussion to weighted undirected graphs, there is the Cheeger inequality, which bounds the second eigenvalue in terms of other numerical characteristics of the graph (namely the Cheeger constant, i.e. the conductance): see e.g. here.
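To make this concrete, here is a brute-force illustration (the toy graph and all code are my own construction, not taken from the reference) of the two-sided Cheeger bound $\lambda_2/2 \leq h(G) \leq \sqrt{2\lambda_2}$ for the normalized Laplacian, where $h(G)$ is the conductance:

```python
import itertools
import numpy as np

# Brute-force illustration (toy graph and code are my own construction)
# of the two-sided Cheeger inequality for the normalized Laplacian:
#   lambda_2 / 2 <= h(G) <= sqrt(2 * lambda_2),
# where h(G) is the conductance of the weighted undirected graph.

W = np.array([[0, 2, 1, 0],
              [2, 0, 1, 0],
              [1, 1, 0, 3],
              [0, 0, 3, 0]], dtype=float)   # symmetric weight matrix
d = W.sum(axis=1)                           # weighted degrees
L = np.eye(len(d)) - W / np.sqrt(np.outer(d, d))   # normalized Laplacian
lam2 = np.linalg.eigvalsh(L)[1]             # second-smallest eigenvalue

# Conductance: minimize cut(S) / min(vol(S), vol(S complement)) over all
# proper nonempty vertex subsets S (feasible only for tiny graphs).
n = len(d)
h = min(
    sum(W[i, j] for i in S for j in range(n) if j not in S)
    / min(d[list(S)].sum(), d.sum() - d[list(S)].sum())
    for k in range(1, n)
    for S in itertools.combinations(range(n), k)
)

print(lam2 / 2 <= h <= np.sqrt(2 * lam2))   # True
```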
The result may still hold for weighted directed graphs that are symmetrizable: namely, there exists a positive vector $\mu$ such that $A(x,y)\mu(x) = A(y,x)\mu(y)$; see these very accessible lecture notes. Without this reversible/symmetrizable structure you don't have Green's theorem, so the available results are pretty weak.
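For a row-stochastic $A$ this is exactly the detailed-balance (reversibility) condition, with $\mu$ the stationary distribution. Here is a minimal sketch of how one might test it numerically; the function name `is_symmetrizable` and the example chains are my own:

```python
import numpy as np

# Minimal sketch (function name and example chains are my own): test
# whether a row-stochastic matrix A satisfies detailed balance, i.e.
# whether there is a positive mu with mu[x] * A[x, y] == mu[y] * A[y, x].
# For stochastic A any such mu must be the stationary distribution, so
# that is the only candidate worth testing.

def is_symmetrizable(A, tol=1e-10):
    w, V = np.linalg.eig(A.T)                  # left eigenvectors of A
    mu = np.abs(np.real(V[:, np.argmax(np.real(w))]))
    mu /= mu.sum()                             # stationary distribution
    M = mu[:, None] * A                        # M[x, y] = mu[x] * A[x, y]
    return np.allclose(M, M.T, atol=tol)

# A birth-death chain on three states: always reversible.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
print(is_symmetrizable(P))   # True

# A chain with a directed "circulation" is not reversible.
Q = np.array([[0.0, 0.9, 0.1],
              [0.1, 0.0, 0.9],
              [0.9, 0.1, 0.0]])
print(is_symmetrizable(Q))   # False
```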
As Tim has mentioned, though, there are no non-trivial bounds in general, since the second eigenvalue may be arbitrarily close to the first. So it all depends on the particular case; I don't know of any interesting estimates for the non-reversible case.
Let us denote by $r$ the Perron root of the positive matrix $A \in \mathbb{R}^{n \times n}$. Then by the Collatz-Wielandt formula we have:
$$\max_{x \in S}\min_{\substack{i=1, \ldots,n}} \frac{(Ax)_i}{x_i} = r = \min_{x \in S}\max_{\substack{i=1, \ldots,n}} \frac{(Ax)_i}{x_i}, $$
where $S := \{x \in \mathbb{R}^n : x_i > 0, \ \forall i=1,\ldots,n\}$. Now it is clear that $A$ and $A^T$ have the same eigenvalues, since $\det(M)=\det(M^T)$ and hence for every $\lambda \in \mathbb{R}$ we have $$\det(A-\lambda I)=\det((A-\lambda I)^T)= \det(A^T-\lambda I).$$
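Purely as a sanity check of this step, a short numerical confirmation (the random test matrix is my own):

```python
import numpy as np

# Sanity check (random matrix my own): A and A^T have identical
# eigenvalues, compared here as sorted complex values.

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(4, 4))
ev = lambda M: np.sort_complex(np.linalg.eigvals(M))
print(np.allclose(ev(A), ev(A.T)))   # True
```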
Furthermore, $A$ strictly positive implies $A^T$ strictly positive, and since $A^T$ has the same eigenvalues as $A$, its Perron root is also $r$; thus the formula also holds for $A^T$. It follows that we have
$$\max_{x \in S}\min_{\substack{i=1, \ldots,n}} \frac{(A^Tx)_i}{x_i} = r = \min_{x \in S}\max_{\substack{i=1, \ldots,n}} \frac{(A^Tx)_i}{x_i}.$$
This clearly implies that for every $y \in S$ we have
$$\min_{\substack{i=1, \ldots,n}} \frac{(A^Ty)_i}{y_i} \leq r \leq \max_{\substack{i=1, \ldots,n}} \frac{(A^Ty)_i}{y_i}.$$
Choose $y = (1,1,\ldots,1)$: then $(A^Ty)_i = \sum_{j} A_{ji}$ is the $i$-th column sum of $A$, which gives your bounds. Note also that using the same trick on $A$ directly you get the same upper/lower bounds but with the row sums instead of the column sums.
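A quick numerical sanity check of both versions of the bound (illustrative only; the random test matrix is my own):

```python
import numpy as np

# Illustrative sanity check (the random test matrix is my own): for a
# strictly positive A with Perron root r, choosing y = (1, ..., 1) in
# the Collatz-Wielandt inequalities gives
#   min column sum <= r <= max column sum   (via A^T), and
#   min row sum    <= r <= max row sum      (via A directly).

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(5, 5))      # strictly positive matrix

r = np.abs(np.linalg.eigvals(A)).max()      # Perron root = spectral radius
col_sums = A.sum(axis=0)                    # (A^T y)_i with y = (1, ..., 1)
row_sums = A.sum(axis=1)                    # (A y)_i  with y = (1, ..., 1)

print(col_sums.min() <= r <= col_sums.max())   # True
print(row_sums.min() <= r <= row_sums.max())   # True
```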
For reference, I recommend (in increasing order of technicality/generality):
- "Matrix analysis" by Horn and Johnson
- "Nonnegative Matrices in the Mathematical Sciences" by Bermann and Plemmons
- "Nonlinear Perron-Frobenius theory" by Nussbaum and Lemmens
There are even more general versions discussed in the recent literature, e.g. in this paper.
Best Answer
Here's a really elementary proof (which is a slight modification of Fanfan's answer to a question of mine). As Calle shows, the eigenvalue $1$ is attained: since each row of $A$ sums to $1$, the all-ones vector is an eigenvector with eigenvalue $1$. Now, suppose $Ax = \lambda x$ for some real $\lambda > 1$, and let $x_{\max}$ denote the largest component of $x$; replacing $x$ by $-x$ if necessary, we may assume $x_{\max} > 0$. Since the rows of $A$ are nonnegative and sum to $1$, each component of the vector $Ax$ is a convex combination of the components of $x$, so it can be no greater than $x_{\max}$. On the other hand, the component of $\lambda x$ where $x$ attains $x_{\max}$ equals $\lambda x_{\max} > x_{\max}$, which proves that $\lambda > 1$ is impossible.
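For what it's worth, here is a small numerical illustration (my own sketch, not part of the argument above):

```python
import numpy as np

# Illustration (my own sketch): a row-stochastic matrix has 1 as an
# eigenvalue (with the all-ones eigenvector) and spectral radius 1.

rng = np.random.default_rng(0)
A = rng.uniform(size=(5, 5))
A /= A.sum(axis=1, keepdims=True)        # normalize rows to sum to 1

eigenvalues = np.linalg.eigvals(A)
print(np.isclose(np.abs(eigenvalues).max(), 1.0))   # True
print(np.allclose(A @ np.ones(5), np.ones(5)))      # True: A·1 = 1
```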