OK, I think I have a full answer at this point, so let me post it.
Step 1. (algebra).
If $P$ is an $n\times n$ stochastic matrix and $\lambda$ is an eigenvalue of $P$ with $|\lambda|=1$, then $\lambda^k=1$ for some $k\le n$ and $1,\lambda,\lambda^2,\dots,\lambda^{k-1}$ are eigenvalues of $P$.
Indeed, let $x$ be an eigenvector; WLOG $\lambda\ne 1$. Let $S=\{j:|x_j|=\max_k|x_k|\}$. If $i\in S$ and $p_{ij}>0$, then, since $\lambda x_i=\sum_j p_{ij}x_j$ is a convex combination of the $x_j$ and $|\lambda x_i|$ is maximal, we must have $x_j=\lambda x_i$; in particular, $j\in S$. Thus, if some entry occurs in $x$, so does $\lambda^k$ times that entry for every $k$, but the number of distinct entries is at most $n$, so $\lambda^k=1$ for some $k\le n$. Moreover, we can assume that one of the entries is $1$ and split the indices in $S$ into groups $S_m$ by the rule $j\in S_m$ iff $x_j$ lies on the half-open counterclockwise arc from $\lambda^m$ to $\lambda^{m+1}$, so that $i\in S_m$ and $p_{ij}>0$ imply $j\in S_{m+1}$. From here we immediately see that $P-\lambda^qI$ is not invertible for every $q$ (the $S$-block annihilates the vector $y_j=\lambda^{qm}$ for $j\in S_m$, and the determinant of the $S$-block is a factor of the full determinant). Thus, all powers of $\lambda$ are eigenvalues.
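As a sanity check on Step 1 (an illustrative sketch with NumPy, not part of the argument), the $3\times 3$ cyclic permutation matrix exhibits exactly this phenomenon: a peripheral eigenvalue drags all of its powers into the spectrum.

```python
import numpy as np

# The 3x3 cyclic permutation matrix is stochastic; its spectrum is the
# set of cube roots of unity, so a peripheral eigenvalue lambda with
# |lambda| = 1 brings all powers 1, lambda, lambda^2 with it.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
eigs = np.linalg.eigvals(P)

assert np.allclose(np.abs(eigs), 1.0)   # all eigenvalues on the unit circle
assert np.allclose(eigs**3, 1.0)        # each satisfies lambda^3 = 1

# all three cube roots of unity occur as eigenvalues
for r in np.exp(2j * np.pi * np.arange(3) / 3):
    assert np.min(np.abs(eigs - r)) < 1e-8
```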
Step 2. (compactness argument).
Consider a convergent sequence $P_k$ of $n\times n$ stochastic matrices with limit $P$. Assume that each $P_k$ has an eigenvalue $\lambda_k$ and that the $\lambda_k$ are not contained in any fixed Stolz angle. Then, passing to a subsequence, we may assume that $\lambda_k\to\lambda$ with $|\lambda|=1$. Clearly, $\lambda$ is an eigenvalue of $P$. If $\lambda\ne 1$, then $P$ has several eigenvalues summing to $0$ (the powers of $\lambda$), so $\operatorname{Tr} P\le n-2$, which makes it not a limit point of your set. But if $\lambda=1$, it is even worse: if the $\lambda_k$ approach $1$ tangentially, then $\lambda_k^{m_k}$ can tend to any point on the unit circle, but these are also eigenvalues of $n\times n$ stochastic matrices (the powers $P_k^{m_k}$), and so are their limits. Thus, we have some fixed (but depending on $n$) Stolz angle containing all the eigenvalues of your matrices.
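The conclusion of Step 2 can be probed empirically (a rough numerical sketch, not a proof; the threshold below is a very generous stand-in for the constant $C(3)$): for random $3\times 3$ stochastic matrices, every eigenvalue $\lambda\ne 1$ should satisfy $|1-\lambda|\le C(1-|\lambda|)$, i.e. lie in a fixed Stolz angle at $1$.

```python
import numpy as np

# Empirical probe of Step 2: for random 3x3 stochastic matrices, record
# the Stolz-angle ratio |1 - lambda| / (1 - |lambda|) over all
# eigenvalues lambda != 1; it should stay bounded.
rng = np.random.default_rng(0)
ratios = []
for _ in range(2000):
    P = rng.random((3, 3))
    P /= P.sum(axis=1, keepdims=True)        # normalize rows -> stochastic
    for lam in np.linalg.eigvals(P):
        if abs(1 - lam) > 1e-8:              # skip the Perron eigenvalue 1
            ratios.append(abs(1 - lam) / (1 - abs(lam)))
worst = max(ratios)                          # empirical stand-in for C(3)
assert 0 < worst < 1000
```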
Step 3. (harmonic analysis)
Let $f(m)=\sum_{k=1}^n c_k\lambda_k^m$ for $m\ge 0$ and $0$ for $m<0$ where $\lambda_k$ lie in some fixed Stolz angle $A$. Then
$$
Vf=\sum_{m\in\mathbb Z}|f(m+1)-f(m)|\le C(A,n)\max_m |f(m)|
$$
Proof:
We begin with a
Complex analysis lemma
Let $F(z)=\sum_{k=1}^n c_k e^{\mu_k z}$ where $\mu_k\in\mathbb C$, $|\mu_k|\le 1$. Then $F$ has at most $C(n)$ zeroes in the unit disk.
Proof: Let $m$ be the maximum of $|F|$ over the unit disk. Then the first $n$ derivatives at the origin are bounded by $C(n)m$. But $\Phi(t)=F(zt)$ ($|z|=1$) satisfies an $n$-th order differential equation $\Phi^{(n)}=\sum_{k=0}^{n-1}b_k\Phi^{(k)}$ whose coefficients $b_k$ are obtained by expanding the polynomial $\prod_{k=1}^n (x-z\mu_k)$ and are bounded by $2^n$, say. Standard ODE theory implies that $\Phi$ is bounded by $C'(n)m$ on $[-2,2]$, so the ratio of the maximum of $|F|$ over the disk of radius $2$ to its maximum over the unit disk is bounded, which is enough to control the number of zeroes in the unit disk (each Blaschke factor increases this ratio by a fixed factor). Rescaling and covering, we conclude that if $|\mu_k|\le\mu$, then there can be only $C(n,K)$ zeroes in the disk of radius $K/\mu$.
Now,
Induction
If $n=1$, the claim is obvious: the maximum of $|f|$ is just $|c_1|$ and $|\lambda_1^k-\lambda_1^{k+1}|\le C(A)(|\lambda_1|^k-|\lambda_1|^{k+1})$, so the sum defining $Vf$ telescopes to at most $C(A)|c_1|$.
Let $n>1$. Write $\lambda_k=e^{-\mu_k}$ with $|\mu_k|\le C(A)\operatorname{Re}\mu_k$. Let $\mu=\max_k|\mu_k|=|\mu_n|$. Note that $f$ is the restriction of $F(z)=\sum_{k=1}^n c_ke^{-\mu_k z}$ to the non-negative integers. The derivative of the real or imaginary part of $F(t)$ is again an exponential sum, with at most $2n$ frequencies (each $\mu_k$ contributes $\mu_k$ and $\bar\mu_k$), so it can have only $C(2n,K)$ zeroes on $[0,K/\mu]$; hence the real and imaginary parts have a bounded number of intervals of monotonicity there, whence $f$ has variation on $[0,K/\mu]$ dominated by its maximum. Now, choose $K=K(A)$ so that $\gamma=\lambda_n^N$ is less than $1/2$ in absolute value, where $N\approx K/\mu$. The function $g(m)=f(m+N)-\gamma f(m)$ for $m\ge 0$ (and $0$ for $m<0$) is bounded by $2\max|f|$ and has one exponential term fewer: the coefficient of $\lambda_n^m$ is $c_n(\lambda_n^N-\gamma)=0$. Thus, by the induction assumption, $Vg\le C(A,n-1)\max|f|$.
To recover $f$ from $g$, note that $f(m)-\gamma f(m-N)=G(m)$, where $G(m)=g(m-N)$ for $m\ge N$ and $G(m)=f(m)$ for $m<N$. Note that $VG$ is still under control, because we have bounded both $Vg$ and the part of $Vf$ corresponding to the interval $[0,N]$. Now it remains to iterate this recurrence to get $f(m)=G(m)+\gamma G(m-N)+\gamma^2 G(m-2N)+\dots$ (the series converges since $|\gamma|<1/2$) and to use the shift invariance of the total variation together with the triangle inequality, which give $Vf\le(1+|\gamma|+|\gamma|^2+\dots)VG\le 2VG$.
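A quick numerical illustration of the Step 3 bound $Vf\le C(A,n)\max_m|f(m)|$, for a small exponential sum with real $\lambda_k\in(0,1)$ (which lie in any Stolz angle at $1$); the threshold $10$ is a generous stand-in for the constant, and the check is not a proof.

```python
import numpy as np

# f(m) = sum_k c_k lambda_k^m for m >= 0 (and 0 for m < 0), with two
# real exponentials; compare its total variation with its maximum.
lams = np.array([0.9, 0.5])
coef = np.array([1.0, -1.0])
m = np.arange(0, 200)
f = (coef[None, :] * lams[None, :] ** m[:, None]).sum(axis=1)

# total variation over the integers, including the jump from f(-1) = 0
variation = abs(f[0]) + np.abs(np.diff(f)).sum()
ratio = variation / np.abs(f).max()
assert ratio < 10.0
```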
Step 4. (the end)
Each entry of $P^k$, viewed as a function of $k$, is an exponential sum of this form (assuming that the eigenvalues are distinct, which is a dense case). Thus, the total variation of each entry is bounded by some $C(n)$ depending on $n$ only. This gives $\sum_k\|P^k-P^{k+1}\|\le C(n)$; but the sequence of norms (in the $\ell^\infty$-induced norm) is non-increasing, since $P^{k+1}-P^{k+2}=(P^k-P^{k+1})P$, so $\|P^k-P^{k+1}\|=O_n(k^{-1})$.
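The conclusion of Step 4 is easy to observe numerically (a sketch; the threshold $50$ is a generous stand-in for $C(n)$):

```python
import numpy as np

# For a stochastic P, the differences ||P^k - P^{k+1}|| in the
# l-infinity induced norm (max absolute row sum) are non-increasing
# and their sum stays bounded.
rng = np.random.default_rng(1)
P = rng.random((4, 4))
P /= P.sum(axis=1, keepdims=True)        # normalize rows -> stochastic

norms = []
Q = P.copy()                             # Q = P^k, starting at k = 1
for _ in range(200):
    diff = Q - Q @ P                     # P^k - P^{k+1}
    norms.append(np.abs(diff).sum(axis=1).max())
    Q = Q @ P

# ||P^{k+1} - P^{k+2}|| = ||(P^k - P^{k+1}) P|| <= ||P^k - P^{k+1}||
assert all(norms[i + 1] <= norms[i] + 1e-12 for i in range(len(norms) - 1))
assert sum(norms) < 50                   # partial sums bounded
```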
Feel free to ask questions :). I suspect this all is written in some obscure textbooks but to do the literature search now is beyond my abilities.
Surprisingly (at least to me) the answer is no when $n\ge 3$. This was proved by Yves Benoist and me after I mentioned the problem in a talk at MSRI and Yves came up with a great idea.
It is enough to show that there is no uniform bound when $n=3$.
Here is an elementary argument that involves little computation. I don't pay attention to getting exact constants.
For small positive $\epsilon$ define
\begin{equation}
x_1 = e_1 \quad \quad
x_2 = e_1 + \epsilon e_2 \quad \quad
x_3 = e_1 + \epsilon e_3
\end{equation}
For certain distinct nonzero real $\lambda_i$, which will depend on $\epsilon$, let $T\in M_3(\mathbb{R})$ be defined by $Tx_i = \lambda_i x_i$. The dual basis to $(x_i)$ is given by
\begin{equation}
f_1=e_1 - {1\over\epsilon} (e_2 + e_3) \quad \quad
f_2={1\over\epsilon} e_2 \quad \quad
f_3={1\over\epsilon} e_3
\end{equation}
So $T^*$, the transpose (and adjoint, since $T$ is real) of $T$, is determined by $T^*f_i = \lambda_i f_i$. If $S$ implements a similarity between $T$ and $T^*$, it is more or less clear that $\|S\|\cdot \|S^{-1}\| \to \infty$ as $\epsilon \to 0$, uniformly over all permissible choices of $\lambda_i$. (For me the easy way to see this is to semi-normalize the $f_i$ as
\begin{equation}
\tilde{f}_1=\epsilon e_1 - (e_2 + e_3) \quad \quad
\tilde{f}_2= e_2 \quad \quad
\tilde{f}_3= e_3
\end{equation}
The similarity $S$ must be given by $Sx_i = a_i \tilde{f}_i$ for some nonzero $a_i$, since the $\lambda_i$ are distinct, and if $\|S\| \vee \|S^{-1}\| \le (1/3) C$, then all $|a_i|$ and all $1/|a_i|$ are less than $C$. But $\|x_2-x_3\|=\sqrt{2}\epsilon$ and
$\|a_2 \tilde{f}_2 - a_3 \tilde{f}_3 \| =\sqrt{|a_2|^2+|a_3|^2} > \sqrt{2} /C$, which forces
$\|S\| >1/(C\epsilon)$.)
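The blow-up of $\|S\|\cdot\|S^{-1}\|$ is easy to watch numerically (a sketch with the particular choice $a_i=1$, one permissible similarity):

```python
import numpy as np

# Build the bases x_i and f~_i of the construction and the similarity S
# with S x_i = f~_i (i.e. a_i = 1); its condition number blows up as
# eps -> 0.
def cond_of_similarity(eps):
    e = np.eye(3)
    X = np.column_stack([e[:, 0],
                         e[:, 0] + eps * e[:, 1],
                         e[:, 0] + eps * e[:, 2]])      # columns x_i
    F = np.column_stack([eps * e[:, 0] - e[:, 1] - e[:, 2],
                         e[:, 1],
                         e[:, 2]])                      # columns f~_i
    S = F @ np.linalg.inv(X)                            # S x_i = f~_i
    return np.linalg.cond(S)

conds = [cond_of_similarity(eps) for eps in (1e-1, 1e-2, 1e-3)]
assert conds[0] < conds[1] < conds[2]    # grows without bound as eps -> 0
```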
Next a trivial but (it seems) important point. For fixed $\epsilon$, you can choose the $\lambda_i$ close enough to one so that $T$ is as close to the identity as you want. So we can choose $\lambda_i$ so that $\|T\|=1$ and $\|T^{-1}\|< 1+\epsilon$; denote such a $T$ by $T_\epsilon$.
Write $T_\epsilon = H_\epsilon K_\epsilon$ with $H_\epsilon$, $K_\epsilon$ (complex) Hermitian and $\|H_\epsilon\|=1$. We want to see that $\|K_\epsilon\| \to \infty$ as $\epsilon \to 0$. Notice that $H_\epsilon$ and $K_\epsilon$ are non singular since no $\lambda_i$ is zero. So we have
$H_\epsilon^{-1}T_\epsilon H_\epsilon = K_\epsilon H_\epsilon = T_\epsilon^*$ and hence, by the first part of the proof, $\|H_\epsilon^{-1}\| \to \infty$ as $\epsilon \to 0$. But $H_\epsilon^{-1} = K_\epsilon T_\epsilon^{-1}$, so
$$\|H_\epsilon^{-1}\| \le \|K_\epsilon\| \|T_\epsilon^{-1}\| \le (1+\epsilon) \|K_\epsilon\|,
$$
so $\|K_\epsilon\| \to \infty$ as $\epsilon \to 0$, as desired.
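One can also watch this happen numerically via the classical symmetric factorization $T=(X\Lambda X^T)(XX^T)^{-1}$ of a diagonalizable matrix with real spectrum (a sketch; the specific $\lambda_i$, distinct and close to $1$, and this particular factorization are illustrative choices, not the ones in the argument):

```python
import numpy as np

# If T = X Lambda X^{-1}, then T = H K with H = X Lambda X^T and
# K = (X X^T)^{-1}, both symmetric. Normalize ||H|| = 1 and watch
# ||K|| blow up as eps -> 0.
def k_norm(eps, delta=1e-3):
    e = np.eye(3)
    X = np.column_stack([e[:, 0],
                         e[:, 0] + eps * e[:, 1],
                         e[:, 0] + eps * e[:, 2]])      # columns x_i
    lam = np.diag([1.0, 1.0 - delta, 1.0 - 2 * delta])  # distinct, near 1
    H = X @ lam @ X.T
    K = np.linalg.inv(X @ X.T)
    s = np.linalg.norm(H, 2)
    H, K = H / s, K * s                                 # now ||H|| = 1
    T = X @ lam @ np.linalg.inv(X)
    assert np.allclose(H @ K, T)                        # T = H K holds
    return np.linalg.norm(K, 2)

norms = [k_norm(eps) for eps in (1e-1, 1e-2, 1e-3)]
assert norms[0] < norms[1] < norms[2]    # ||K|| -> infinity as eps -> 0
```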
Best Answer
The key word under which you will find this result in modern books is "Schur complement". Here is a self-contained proof. Assume $I$ and $J$ are $(1,2,\dots,k)$ for some $k$ without loss of generality (you may reorder rows/columns). Let the matrix be $$ M=\begin{bmatrix}A & B\\\\ C & D\end{bmatrix}, $$ where the blocks $A$ and $D$ are square. Assume for now that $A$ is invertible --- you may treat the general case with a continuity argument. Let $S=D-CA^{-1}B$ be the so-called Schur complement of $A$ in $M$.
You may verify the following identity ("magic wand Schur complement formula") $$ \begin{bmatrix}A & B\\\\ C & D\end{bmatrix} = \begin{bmatrix}I & 0\\\\ CA^{-1} & I\end{bmatrix} \begin{bmatrix}A & 0\\\\ 0 & S\end{bmatrix} \begin{bmatrix}I & A^{-1}B\\\\ 0 & I\end{bmatrix}. \tag{1} $$ By taking determinants, $$\det M=\det A \det S. \tag{2}$$ Moreover, if you invert each factor in the above formula, you can see that the $(2,2)$ block of $M^{-1}$ is $S^{-1}$. So your claim reduces to (2).
Note that the "magic formula" (1) can be derived via block Gaussian elimination and is much less magic than it looks at first sight.
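The identity (1), the determinant formula (2), and the statement about the $(2,2)$ block are easy to verify numerically (a sketch on a fixed, well-conditioned example with a $2\times 2$ block $A$ and a $3\times 3$ block $D$):

```python
import numpy as np

# Fixed example: A invertible, M = [[A, B], [C, D]].
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]])
C = np.array([[1.0, 1.0], [0.0, 2.0], [1.0, 0.0]])
D = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 1.0], [0.0, 1.0, 6.0]])
M = np.block([[A, B], [C, D]])

Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B                     # Schur complement of A in M

# the three factors of the "magic wand" formula (1)
L = np.block([[np.eye(2), np.zeros((2, 3))], [C @ Ainv, np.eye(3)]])
Mid = np.block([[A, np.zeros((2, 3))], [np.zeros((3, 2)), S]])
U = np.block([[np.eye(2), Ainv @ B], [np.zeros((3, 2)), np.eye(3)]])

assert np.allclose(L @ Mid @ U, M)                                        # (1)
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(S))  # (2)
assert np.allclose(np.linalg.inv(M)[2:, 2:], np.linalg.inv(S))  # (2,2) block
```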