Below follows an alternative analysis which does not rely on Gershgorin's theorem.
We begin the analysis with a sequence of elementary steps designed to reduce the number of variables and simplify the necessary calculations.
In general, we have $$\|A\|_\infty = \|A^T\|_1.$$ If $A$ is symmetric and non-singular, then $A^{-1}$ is also symmetric. It follows that $$ \kappa_\infty(A) = \kappa_1(A).$$ Since our matrix
$$A = \begin{bmatrix} a & b \\ b & c \end{bmatrix}$$ is symmetric, we are therefore free to concentrate on, say, the 2-norm and the infinity norm.
We will distinguish between the cases $b=0$ and $b \not = 0$. If $b=0$, then
$$ \|A\|_2 = \|A\|_\infty = \max \{|a|,|c|\}, \quad \|A^{-1}\|_2 = \|A^{-1}\|_\infty = \max \{|a|^{-1},|c|^{-1}\}.$$
It follows that
$$ \kappa_2(A) = \kappa_\infty(A) = \max\left\{ \frac{|a|}{|c|}, \frac{|c|}{|a|} \right\}$$
In the case of $b = 0$, it is clear that $A$ is well-conditioned precisely when $|a| \approx |c|$ and ill-conditioned when $|a| \ll |c|$ or $|c| \ll |a|$.
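As a quick sanity check, here is a small NumPy sketch of the diagonal case; the values $a=100$, $c=0.1$ are an arbitrary choice, and both condition numbers should reduce to $\max\{|a|/|c|, |c|/|a|\}$:

```python
import numpy as np

# Hypothetical diagonal example (b = 0): a = 100, c = 0.1.
a, c = 100.0, 0.1
A = np.diag([a, c])

# Closed-form condition number from the text.
kappa_formula = max(abs(a) / abs(c), abs(c) / abs(a))

# NumPy's condition numbers in the infinity norm and the 2-norm.
kappa_inf = np.linalg.cond(A, np.inf)
kappa_2 = np.linalg.cond(A, 2)

assert np.isclose(kappa_formula, kappa_inf)
assert np.isclose(kappa_formula, kappa_2)
```

Here both norms give $\kappa = 100/0.1 = 1000$.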
In the case of $b \not = 0$ we can without loss of generality assume that $b = 1$. If $b \not = 1$, then we simply scale the matrix with $b^{-1}$. The condition numbers are invariant under this scaling because $(b^{-1}A)^{-1} = b A^{-1}$. We can therefore concentrate on the case of $b=1$ where $$A = \begin{bmatrix} a & 1 \\ 1 & c \end{bmatrix}.$$ This matrix is non-singular if and only if $ac \not =1$. In this case, we have
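The scale-invariance of the condition number is easy to verify numerically; the values $a=3$, $b=2$, $c=-1$ below are an arbitrary nonsingular choice:

```python
import numpy as np

# Hypothetical symmetric matrix with b != 0.
a, b, c = 3.0, 2.0, -1.0
A = np.array([[a, b], [b, c]])

# Scaling by 1/b normalizes the off-diagonal entries to 1 and
# leaves every condition number unchanged.
A_scaled = A / b

for p in (1, 2, np.inf):
    assert np.isclose(np.linalg.cond(A, p), np.linalg.cond(A_scaled, p))
```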
$$A^{-1} = \frac{1}{1 - ac} \begin{bmatrix} c & -1 \\ -1 & a \end{bmatrix}.$$
It follows that
$$ \|A\|_\infty = \max\{1 + |a|,1+|c|\}, \quad \|A^{-1}\|_\infty = \frac{1}{|1 - ac|}\max\{1 + |a|,1+|c|\},$$
which implies
$$ \kappa_\infty(A) = \frac{1}{|1 - ac|}\max\{(1 + |a|)^2,(1+|c|)^2\}$$
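A short NumPy sketch can spot-check this closed form against `np.linalg.cond` on random (hypothetical) values of $a$ and $c$:

```python
import numpy as np

# Compare the closed-form kappa_inf against NumPy on random samples.
rng = np.random.default_rng(0)
for _ in range(100):
    a, c = rng.uniform(-5, 5, size=2)
    if abs(1 - a * c) < 1e-3:      # skip (near-)singular matrices
        continue
    A = np.array([[a, 1.0], [1.0, c]])
    kappa_formula = max((1 + abs(a)) ** 2, (1 + abs(c)) ** 2) / abs(1 - a * c)
    assert np.isclose(kappa_formula, np.linalg.cond(A, np.inf))
```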
A contour plot of the right-hand side could now be produced. A more detailed understanding can be developed by covering the punctured plane $\mathbb{R}^2 \setminus \{(0,0)\}$ with hyperbolas, i.e., curves of the form $$ac = \gamma$$
where $\gamma \not = 0$. On the curve corresponding to a $\gamma \not = 1$ we have
$$ \kappa_\infty(A) \ge \frac{1}{|1 - \gamma|} \left( 1 + \sqrt{|\gamma|}\right)^2$$
with equality achieved when $|a|=|c| = \sqrt{|\gamma|}$. Moreover,
$$ \kappa_\infty(A) \rightarrow \infty, $$
as $$\max\{|a|,|c|\} \rightarrow \infty, \quad ac=\gamma.$$ This covers the analysis of the infinity norm.
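The lower bound along a hyperbola can also be spot-checked numerically; $\gamma = 0.5$ below is an arbitrary choice:

```python
import numpy as np

def kappa_inf(a, c):
    # Closed-form infinity-norm condition number for [[a, 1], [1, c]].
    return max((1 + abs(a)) ** 2, (1 + abs(c)) ** 2) / abs(1 - a * c)

# Sample the hyperbola ac = gamma and compare against the claimed lower bound.
gamma = 0.5
a_vals = np.linspace(0.1, 10, 1000)
kappas = [kappa_inf(a, gamma / a) for a in a_vals]
lower_bound = (1 + np.sqrt(abs(gamma))) ** 2 / abs(1 - gamma)

assert min(kappas) >= lower_bound - 1e-9
# Equality at |a| = |c| = sqrt(|gamma|):
assert np.isclose(kappa_inf(np.sqrt(gamma), np.sqrt(gamma)), lower_bound)
```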
Explicit calculation of the 2-norm condition number is tedious, but a short-cut is possible because of the equivalence of norms. In general, we have
$$ \|x\|_\infty \leq \| x\|_2 \leq \sqrt{n} \|x\|_\infty $$
This implies that
$$ \kappa_2(A) \leq n \kappa_\infty(A) \leq n^2 \kappa_2(A)$$
Why? We have
$$ \|Ax\|_2 \leq \sqrt{n} \|Ax\|_\infty \leq \sqrt{n} \|A\|_\infty \|x\|_\infty \leq \sqrt{n} \|A\|_\infty \|x\|_2 $$
This implies that
$$ \|A\|_2 \leq \sqrt{n} \|A\|_\infty$$
Similarly, we have
$$ \|Ax\|_\infty \leq \|Ax\|_2 \leq \|A\|_2 \|x\|_2 \leq \sqrt{n} \|A\|_2 \|x\|_\infty.$$
This implies that
$$ \|A\|_\infty \leq \sqrt{n} \|A\|_2.$$
In our case $n=2$, so
$$ \kappa_2(A) \leq 2 \kappa_\infty(A) \leq 4 \kappa_2(A)$$
or equivalently
$$ \frac{1}{2} \kappa_\infty(A) \leq \kappa_2(A) \leq 2 \kappa_\infty(A). $$
In other words, when it comes to the conditioning of the matrix $A$ there is little to be learned from the 2-norm which cannot be discerned from the infinity norm.
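The two-sided bound can be spot-checked on random symmetric $2\times 2$ samples (an arbitrary test set):

```python
import numpy as np

# Verify kappa_inf / 2 <= kappa_2 <= 2 * kappa_inf on random samples.
rng = np.random.default_rng(1)
for _ in range(100):
    a, b, c = rng.uniform(-5, 5, size=3)
    A = np.array([[a, b], [b, c]])
    if abs(np.linalg.det(A)) < 1e-3:   # skip (near-)singular matrices
        continue
    k2 = np.linalg.cond(A, 2)
    kinf = np.linalg.cond(A, np.inf)
    assert kinf / 2 - 1e-9 <= k2 <= 2 * kinf + 1e-9
```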
This (stochastic) matrix $A$ has two eigenvalues $\lambda_1=1$ and $\lambda_2=0.7$, so it is diagonalizable, i.e. $A=P\begin{bmatrix}1&0\\0&0.7\end{bmatrix}P^{-1}=PDP^{-1}$, where $P=\begin{bmatrix}1&-1\\2&1\end{bmatrix}$.
To get the equilibrium vector, we compute $\lim_{n \to \infty}A^nv$ for an arbitrary starting vector $v$.
Now write $v=c_1v_1+c_2v_2$, where $v_1,v_2$ are eigenvectors corresponding to the eigenvalues $1$ and $0.7$, respectively.
Thus
$$A^nv=c_1A^nv_1+c_2A^nv_2 =c_1(1)^nv_1+c_2(0.7)^nv_2$$
$$\lim_{n \to \infty}A^nv=c_1v_1$$
So the limiting multiple of the first eigenvector (for eigenvalue $1$) is the equilibrium vector.
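A short NumPy sketch reconstructs $A$ from the stated diagonalization and confirms that repeated application drives an arbitrary probability vector to the equilibrium:

```python
import numpy as np

# Rebuild A = P D P^{-1} from the eigendecomposition given above.
P = np.array([[1.0, -1.0], [2.0, 1.0]])
D = np.diag([1.0, 0.7])
A = P @ D @ np.linalg.inv(P)    # equals [[0.8, 0.1], [0.2, 0.9]] up to rounding

# Iterate A^n v for an arbitrary probability vector v.
v = np.array([0.5, 0.5])
for _ in range(200):
    v = A @ v

# Eigenvector for lambda = 1 is (1, 2); scaled to sum to 1 it is (1/3, 2/3).
equilibrium = np.array([1.0, 2.0]) / 3.0
assert np.allclose(v, equilibrium)
```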
I don't know if you are required to compute the condition number $\kappa$ as the ratio of singular values $\max(\sigma)/\min(\sigma)$ in the 2-norm, which does not require computing the actual inverse.
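A minimal NumPy sketch of that approach (the matrix is an arbitrary example): the 2-norm condition number is the ratio of the extreme singular values, with no inverse formed:

```python
import numpy as np

# Arbitrary example matrix.
A = np.array([[0.8, 0.1], [0.2, 0.9]])

# Singular values only; no inverse is computed.
s = np.linalg.svd(A, compute_uv=False)
kappa = s.max() / s.min()

# Agrees with NumPy's built-in 2-norm condition number.
assert np.isclose(kappa, np.linalg.cond(A, 2))
```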
From the R manual: "The condition number takes on values between 1 and infinity, inclusive, and can be viewed as a factor by which errors in solving linear systems with this matrix as coefficient matrix could be magnified."
The acceptable condition number is dictated by the required resolution of your algorithm.
Double Precision Representation
If your computations use the IEEE 754 64-bit double-precision representation, i.e. 64 bits per value, as in MATLAB, you will have the standard double-precision limits. From here you can build some VERY rough estimates, which are only referential about how far the figures can go.
Order of Magnitude
The smallest representable order of magnitude is $10^{-308}$ (and the largest about $10^{308}$). If your condition number / eigenvalue ratio $\kappa$ is $10^{6}$, you could only make about $308/6 \approx 51$ products of the power of $A$ before the smallest or the largest eigenvalue got truncated.
For example, if $v=10^{6}$, then $v^{51}=10^{306}$, while $v^{52}$ overflows to `Inf`. From here, with $n$ the number of expected iterations using the compromised values, the allowable conditioning would be given as: $$\kappa=10^{308/n}$$
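The overflow in that example can be reproduced directly in NumPy:

```python
import numpy as np

# 10^6 raised to the 51st power is still representable (1e306),
# but the 52nd power exceeds ~1.8e308 and overflows to inf.
x = np.float64(1e6)
with np.errstate(over="ignore"):   # suppress the overflow warning
    p51 = x ** 51
    p52 = x ** 52

assert np.isfinite(p51)
assert np.isinf(p52)
```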
Significant Digits
On the other hand, the representation allows a 52-bit fraction, i.e. about $15$ significant decimal digits. Hence if you want to keep a resolution of $10^{-6}$ in significant digits, you could only make approximately $15-6=9$ products of the power of $A$ before the smallest or the largest eigenvalue got truncated.
From here, with $n$ the number of expected iterations using the compromised values, the allowable conditioning would be given as: $$\kappa=10^{15-n}$$
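The digit budget can be illustrated with a few float64 comparisons:

```python
import numpy as np

# Machine epsilon is the relative spacing of representable numbers near 1;
# it corresponds to roughly 15-16 significant decimal digits.
eps = np.finfo(np.float64).eps
assert 2e-16 < eps < 3e-16

# A term below the resolution near 1 is absorbed entirely:
assert 1.0 + 1e-16 == 1.0
assert 1.0 + 1e-15 != 1.0

# Near 1e6 the absolute spacing is about 1.2e-10, so a term many orders
# of magnitude smaller than the large value simply vanishes:
assert 1.0e6 + 1e-11 == 1.0e6
```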
Hence, you must determine from the factors of the problem which conditioning figure your calculation should retain, based on the usage of the data and the purpose of the algorithms.
References

https://stat.ethz.ch/R-manual/R-devel/library/Matrix/html/rcond.html
https://www.mathworks.com/help/matlab/ref/cond.html
https://en.wikipedia.org/wiki/Condition_number
https://en.wikipedia.org/wiki/Double-precision_floating-point_format