OK, I think I have a full answer at this point, so let me post it.
Step 1. (algebra).
If $P$ is an $n\times n$ stochastic matrix and $\lambda$ is an eigenvalue of $P$ with $|\lambda|=1$, then $\lambda^k=1$ for some $k\le n$ and $1,\lambda,\lambda^2,\dots,\lambda^{k-1}$ are eigenvalues of $P$.
Indeed, let $x$ be an eigenvector. WLOG, $\lambda\ne 1$. Let $S=\{j:|x_j|=\max_k|x_k|\}$. If $i\in S$ and $p_{ij}>0$, then $j\in S$ and, moreover, $x_j=\lambda x_i$: the relation $\lambda x_i=\sum_j p_{ij}x_j$ represents the point $\lambda x_i$ of maximal modulus as a convex combination of the entries $x_j$, which forces $x_j=\lambda x_i$ whenever $p_{ij}>0$. Thus, whenever some number is an entry of $x$ with index in $S$, so is $\lambda^k$ times this number for every $k$; but the number of different entries is at most $n$, so $\lambda^k=1$ for some $k\le n$. Moreover, we can assume that one of the entries is $1$ and split the indices in $S$ into groups $S_m$ by the rule $j\in S_m$ iff $x_j$ is in the half-open counterclockwise arc from $\lambda^m$ to $\lambda^{m+1}$, so that $i\in S_m$ and $p_{ij}>0$ imply $j\in S_{m+1}$. From here we immediately see that $P-\lambda^qI$ is not invertible for every $q$ (the $S$-block annihilates the vector $y_j=\lambda^{qm}$ for $j\in S_m$, and the full determinant has the determinant of the $S$-block as a factor). Thus, all powers of $\lambda$ are eigenvalues.
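A quick numerical sanity check of Step 1 (a sketch using the cyclic shift matrix, whose spectrum is exactly the set of $n$-th roots of unity):

```python
import numpy as np

n = 5
# cyclic shift: a stochastic matrix with the unimodular eigenvalue exp(2*pi*i/n)
P = np.roll(np.eye(n), 1, axis=1)
eigs = np.linalg.eigvals(P)
lam = np.exp(2j * np.pi / n)
# Step 1 predicts that every power of lam is again an eigenvalue of P
for q in range(n):
    assert np.min(np.abs(eigs - lam ** q)) < 1e-8
```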
Step 2. (compactness argument).
Consider a convergent sequence $P_k$ of $n\times n$ stochastic matrices with limit $P$. Assume that each $P_k$ has an eigenvalue $\lambda_k$ and that the $\lambda_k$ are not contained in any fixed Stolz angle $\{z:|1-z|\le C(1-|z|)\}$. Then we may assume that $\lambda_k\to\lambda$ with $|\lambda|=1$. Clearly, $\lambda$ is an eigenvalue of $P$. If $\lambda\ne 1$, then $P$ has several eigenvalues summing to $0$ (the powers of $\lambda$), so $\operatorname{Tr} P\le n-2$, which makes it not a limit point of your set. But if $\lambda=1$, it is even worse: if the $\lambda_k$ approach $1$ tangentially, then $\lambda_k^{m_k}$ can tend to any point on the unit circle, but these are also eigenvalues of $n\times n$ stochastic matrices (the powers $P_k^{m_k}$), and so are their limits. Thus, there is some fixed (but depending on $n$) Stolz angle containing all the eigenvalues of your matrices.
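The tangential-approach phenomenon in Step 2 can be illustrated numerically (a sketch; the sequence $\lambda_k=(1-1/k^2)e^{i/k}$ and the target point $e^{i\theta}$ are ad hoc choices):

```python
import numpy as np

theta = 3 * np.pi / 4                    # arbitrary target on the unit circle
k = 10 ** 4
lam = (1 - 1 / k ** 2) * np.exp(1j / k)  # tends to 1 tangentially: |1-lam| ~ 1/k, but 1-|lam| ~ 1/k^2
m = round(theta * k)                     # power chosen so the argument of lam**m lands near theta
assert abs(lam ** m - np.exp(1j * theta)) < 1e-3
```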
Step 3. (harmonic analysis)
Let $f(m)=\sum_{k=1}^n c_k\lambda_k^m$ for $m\ge 0$ and $0$ for $m<0$ where $\lambda_k$ lie in some fixed Stolz angle $A$. Then
$$
Vf=\sum_{m\in\mathbb Z}|f(m+1)-f(m)|\le C(A,n)\max_m |f(m)|
$$
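Before the proof, a numerical sanity check of this inequality (a sketch; the exponents, which lie in the Stolz angle $|1-z|\le 2(1-|z|)$, and the coefficients are ad hoc choices):

```python
import numpy as np

# three exponentials with bases in the Stolz angle |1 - lam| <= 2 (1 - |lam|)
lams = np.array([0.9, 0.99, 0.8 + 0.1j])
cs = np.array([1.0, -1.0, 1.0])
M = 5000                                   # the tail beyond M is negligible
m = np.arange(M)
f = (cs[None, :] * lams[None, :] ** m[:, None]).sum(axis=1)

V = abs(f[0]) + np.abs(np.diff(f)).sum()   # total variation, including the jump at m = 0
maxf = np.abs(f).max()
assert V <= 10 * maxf                      # the variation is controlled by the maximum
```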
Proof:
We begin with a complex analysis lemma.
Let $F(z)=\sum_{k=1}^n c_k e^{\mu_k z}$ where $\mu_k\in\mathbb C$, $|\mu_k|\le 1$. Then $F$ has at most $C(n)$ zeroes in the unit disk.
Proof: Let $m$ be the maximum of $|F|$ over the unit disk. Then the first $n$ derivatives of $F$ at the origin are bounded by $C(n)m$. But $\Phi(t)= F(zt)$ ($|z|=1$) satisfies an $n$-th order differential equation $\Phi^{(n)}=\sum_{k=0}^{n-1}b_k\Phi^{(k)}$ with coefficients $b_k$ obtained by expanding the polynomial $\prod_{k=1}^n (x-z\mu_k)$; they are bounded by $2^n$, say. Standard ODE theory then implies that $\Phi$ is bounded by $C'(n)m$ on $[-2,2]$, so the ratio of the maximum of $|F|$ over the disk of radius $2$ to its maximum over the unit disk is bounded, which is enough to control the number of zeroes in the unit disk (each zero contributes a Blaschke factor that pushes this ratio up by a fixed factor $>1$). Rescaling and covering, we conclude that if $|\mu_k|\le\mu$, then there can be only $C(n,K)$ zeroes in the disk of radius $K/\mu$.
Now we induct on $n$.
If $n=1$, the claim is easy: $\max_m|f(m)|=|c_1|$ and $|\lambda_1^m-\lambda_1^{m+1}|=|1-\lambda_1|\,|\lambda_1|^m\le C(A)(1-|\lambda_1|)|\lambda_1|^m=C(A)(|\lambda_1|^m-|\lambda_1|^{m+1})$, and the right-hand side telescopes.
Let $n>1$. Write $\lambda_k=e^{-\mu_k}$ with $|\mu_k|\le C(A)\operatorname{Re}\mu_k$. Let $\mu=\max_k|\mu_k|=|\mu_n|$. Note that $f$ is the restriction of $F(z)=\sum_{k=1}^n c_ke^{-\mu_k z}$ to the integers. The derivative of the real or imaginary part of $F(t)$ can have only $C(2n,K)$ zeroes on $[0,K/\mu]$, so the real and imaginary parts have a bounded number of intervals of monotonicity there, whence the variation of $f$ on $[0,K/\mu]$ is dominated by its maximum. Now choose $K=K(A)$ so that $\gamma=\lambda_n^N$ has absolute value less than $1/2$, where $N\approx K/\mu$ (this is possible because $\operatorname{Re}\mu_n\ge|\mu_n|/C(A)=\mu/C(A)$). The function $g$ defined by $g(m)=f(m+N)-\gamma f(m)$ for $m\ge 0$ and $g(m)=0$ for $m<0$ is bounded by $2\max|f|$ and has one exponential term fewer (the $\lambda_n$-term cancels). Thus, by the induction assumption, $Vg\le C(n)\max|f|$.
To recover $f$ from $g$, note that $f(m)-\gamma f(m-N)=G(m)$, where $G(m)=g(m-N)$ for $m\ge N$ and $G(m)=f(m)$ for $0\le m<N$. Note that $VG$ is still under control because we have bounded both $Vg$ and the part of $Vf$ corresponding to the interval $[0,N]$. Now it remains to iterate this recurrence to get $f(m)=G(m)+\gamma G(m-N)+\gamma^2 G(m-2N)+\dots$ and to use the shift invariance of the total variation together with the triangle inequality (the geometric factors $\gamma^j$ keep the sum convergent).
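The reconstruction step can be checked numerically (a sketch with an ad hoc two-term sum; here $\lambda_2$ plays the role of $\lambda_n$, and $N$ is chosen so that $|\gamma|<1/2$):

```python
import numpy as np

# ad hoc two-term exponential sum f(m), m >= 0
lams = np.array([0.95, 0.7 + 0.1j])
cs = np.array([1.0, -2.0])
M = 400
m = np.arange(M)
f = (cs[None, :] * lams[None, :] ** m[:, None]).sum(axis=1)

N = 10
gamma = lams[1] ** N               # |gamma| = 0.5**5 < 1/2
g = f[N:] - gamma * f[:-N]         # g(m) = f(m+N) - gamma*f(m)

# G(m) = f(m) for 0 <= m < N and G(m) = g(m-N) for m >= N
G = np.concatenate([f[:N], g])
# iterate the recurrence: f(m) = G(m) + gamma*G(m-N) + gamma^2*G(m-2N) + ...
f_rec = np.zeros(M, dtype=complex)
for j in range(M // N + 1):
    shifted = np.zeros(M, dtype=complex)
    shifted[j * N:] = G[:M - j * N]
    f_rec += gamma ** j * shifted
assert np.allclose(f_rec, f)       # f is recovered exactly
```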
Step 4. (the end)
Each entry of the matrix $P^k$, viewed as a function of $k$, is of this form (assuming that the eigenvalues of $P$ are distinct, which is a dense case; the final bound survives passage to limits). Thus, the total variation of each entry is bounded by some constant depending on $n$ only, which yields $\sum_k\|P^k-P^{k+1}\|\le C(n)$ in the $\ell^\infty\to\ell^\infty$ operator norm. But the sequence of norms is non-increasing ($\|P^{k+1}-P^{k+2}\|=\|P(P^k-P^{k+1})\|\le\|P^k-P^{k+1}\|$ because the operator norm of a stochastic matrix is $1$), and a summable non-increasing sequence is $O_n(k^{-1})$.
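A numerical sketch of the conclusion for a randomly generated stochastic matrix (the seed and the size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)     # random n x n stochastic matrix

norms = []
Pk = P.copy()
for _ in range(100):
    Pk_next = Pk @ P
    # l^infinity -> l^infinity operator norm: maximal absolute row sum
    norms.append(np.abs(Pk_next - Pk).sum(axis=1).max())
    Pk = Pk_next

# the norms are non-increasing (multiplying by P cannot increase the norm) ...
assert all(b <= a + 1e-12 for a, b in zip(norms, norms[1:]))
# ... and summable, so k * ||P^k - P^{k+1}|| stays bounded
assert max((k + 1) * v for k, v in enumerate(norms)) < 50
```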
Feel free to ask questions :). I suspect this all is written in some obscure textbooks but to do the literature search now is beyond my abilities.
$\newcommand{\trace}{\operatorname{trace}}$
The result below gives a reasonably improved inequality.
Let $m = \frac{\trace(A)}{n}$, and $s^2= \frac{\trace(A^2)}{n}-m^2$. Then, Wolkowicz and Styan (Linear Algebra and its Applications, 29:471-508, 1980), show that
\begin{equation*}
\lambda_1 \ge \frac{\det(A)}{(m+s/\sqrt{n-1})^{n-1}}
\end{equation*}
Remark: As per the notation in the OP, $\lambda_1$ is the smallest eigenvalue---usually the literature uses $\lambda_1$ to be largest.
Thus, we obtain the upper bound
\begin{equation*}
\lambda_2\lambda_3\cdots\lambda_n \le \left(m+ \frac{s}{\sqrt{n-1}}\right)^{n-1}.
\end{equation*}
This bound is tight. Consider, for example, $A= \text{Diag}(1,2,2,\ldots,2)$: the lhs is $2^{n-1}$, $m=2-1/n$, and $s^2 = 1/n-1/n^2$, so that $s/\sqrt{n-1} = 1/n$ and the rhs equals $(2-1/n+1/n)^{n-1}=2^{n-1}$.
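The tightness claim can be verified numerically (a sketch; $n=6$ is an arbitrary choice):

```python
import numpy as np

n = 6
A = np.diag([1.0] + [2.0] * (n - 1))
eigs = np.sort(np.linalg.eigvalsh(A))        # eigs[0] = lambda_1 = 1, the smallest
m = np.trace(A) / n
s = np.sqrt(np.trace(A @ A) / n - m ** 2)
bound = (m + s / np.sqrt(n - 1)) ** (n - 1)  # upper bound on lambda_2 * ... * lambda_n
assert np.isclose(bound, np.prod(eigs[1:]))  # equals 2^(n-1): the bound is attained
assert eigs[0] >= np.linalg.det(A) / bound - 1e-9
```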
With $M := \max_{i,j}|a_{ij}|$, we see that $m \le M$ and $s^2 \le nM^2 - m^2$, which leads to an upper bound in terms of $M$, as desired:
\begin{equation*}
\lambda_2\lambda_3\cdots\lambda_n \le \left(M+ \sqrt{\frac{nM^2-m^2}{n-1}}\right)^{n-1} < \left(M + M\sqrt{\frac{n}{n-1}} \right)^{n-1},
\end{equation*}
which is better than the bound mentioned in the post (though we lost a bit by deleting $m^2$).
Best Answer
The answer is due to Boyd, Diaconis, Sun & Xiao. If $A$ is symmetric and bi-stochastic, then $\mu:=\max(\lambda_2,-\lambda_n)$ satisfies $$\mu\ge\cos\frac\pi{n}.$$ In addition, there exists such a matrix for which equality holds. See Exercise 164 of my list http://www.umpa.ens-lyon.fr/~serre/DPF/exobis.pdf .
P.S. Because you assume that the eigenvalues are real, I presume that you have in mind that the matrix is symmetric.