For a Markov chain with $N<\infty$ states, the set $I$ of invariant probability vectors is a non-empty simplex in ${\mathbb R}^N$ whose extreme points correspond to the recurrent classes of the chain. Thus, the vector is unique iff there is exactly one recurrent class; the transient states (if any) play absolutely no role (as in Jens's example). The set $I$ is a point, line segment, triangle, etc. exactly when there are one, two, three, etc. recurrent classes.
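For concreteness, here is a minimal numerical sketch (using NumPy; the $5$-state chain, its transition probabilities, and the state labels below are made up for illustration). It has two recurrent classes $\{0,1\}$ and $\{2,3\}$ and one transient state $4$, so $I$ is the line segment between the two extremal invariant vectors:

```python
import numpy as np

# Transition matrix with two recurrent classes {0,1} and {2,3}
# and one transient state 4 that feeds into both classes.
P = np.array([
    [0.5, 0.5, 0.0, 0.0, 0.0],
    [0.2, 0.8, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.3, 0.7, 0.0],
    [0.0, 0.0, 0.6, 0.4, 0.0],
    [0.1, 0.1, 0.2, 0.2, 0.4],
])

def stationary(Q):
    """Left eigenvector of Q for eigenvalue 1, normalized to a probability vector."""
    w, v = np.linalg.eig(Q.T)
    x = np.real(v[:, np.argmin(np.abs(w - 1))])
    return x / x.sum()

# Extremal invariant vectors: each is supported on one recurrent class
# and puts zero mass on the transient state.
m1 = np.zeros(5); m1[:2]  = stationary(P[:2, :2])
m2 = np.zeros(5); m2[2:4] = stationary(P[2:4, 2:4])

# Every convex combination c*m1 + (1-c)*m2 is again invariant,
# so the set I is the segment between m1 and m2.
for c in (0.0, 0.25, 1.0):
    nu = c * m1 + (1 - c) * m2
    assert np.allclose(nu @ P, nu)
```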
If the invariant vector $\pi$ is unique, then there is only one recurrent class and the chain will eventually end up there. The vector $\pi$ necessarily puts zero mass on all transient states. Letting $\phi_n$ be the law of $X_n$, as you say, we have $\phi_n\to \pi$ only if the recurrent class is aperiodic. However, in general we have Cesàro convergence:
$${1\over n}\sum_{j=1}^n \phi_j\to\pi.$$
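A small sketch of this Cesàro convergence (NumPy again; the deterministic 3-cycle below is just an illustrative choice of a periodic chain):

```python
import numpy as np

# Deterministic 3-cycle: the chain is periodic (period 3), so the laws phi_n
# do not converge, but their Cesaro averages converge to the stationary vector.
P = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
pi = np.array([1/3, 1/3, 1/3])        # the unique invariant probability vector

phi = np.array([1., 0., 0.])          # start deterministically in state 0
avg = np.zeros(3)
for n in range(1, 3001):
    phi = phi @ P                     # phi_n, the law of X_n (cycles forever)
    avg += (phi - avg) / n            # running mean (1/n) * sum_{j=1}^n phi_j

print(np.allclose(avg, pi))           # True: Cesaro averages converge to pi
print(phi)                            # but phi_n itself never settles down
```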
An infinite state space Markov chain need not have any recurrent states, and may have the zero measure as the only invariant measure, finite or infinite. Consider the chain on the positive integers which jumps one step to the right at every time step: any invariant measure $\mu$ must satisfy $\mu(1)=0$ and $\mu(k)=\mu(k-1)$ for $k\geq 2$, so $\mu\equiv 0$.
Generally, a Markov chain with countable state space has invariant probabilities iff there are positive recurrent classes. If so, every invariant probability vector $\nu$ is a convex combination of
the unique invariant vector $m_j$ corresponding to each positive recurrent class $j\in J$, i.e.,
$$\nu=\sum_{j\in J} c_j m_j,\qquad c_j\geq 0,\quad \sum_{j\in J}c_j=1.$$
This result is Corollary 3.23 in Wolfgang Woess's Denumerable Markov Chains.
Finding a full eigendecomposition costs $O(n^3)$ operations and $O(n^2)$ memory, where $n$ is the number of rows (and columns) of the matrix. The size $n$ of the Markov matrix is the number of indexed web pages, which is of the order of $10^9$, so a full factorization would be prohibitive.
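As a rough back-of-the-envelope estimate (assuming dense double-precision storage at 8 bytes per entry):
$$n^2 \approx (10^9)^2 = 10^{18}\ \text{entries} \;\approx\; 8\times 10^{18}\ \text{bytes},\qquad n^3 \approx 10^{27}\ \text{operations}.$$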
Best Answer
Consider the iterations $$v_{i+1}^T=v_i^TA$$
If $v_i$ represents the probability distribution over states at time $i$, then $v_{i+1}$ represents the distribution over states at time $i+1$.
In fact, convergence is not guaranteed. For example, consider $A=\begin{bmatrix} 0 & 1 \\ 1 & 0\end{bmatrix}$. If we take $v_0=\begin{bmatrix} 0.3 \\ 0.7\end{bmatrix}$, the sequence of vectors just alternates between $\begin{bmatrix} 0.3 \\ 0.7\end{bmatrix}$ and $\begin{bmatrix} 0.7 \\ 0.3\end{bmatrix}$, and hence it does not converge. The same happens for every initial vector except $v_0 = \begin{bmatrix} 0.5 \\ 0.5\end{bmatrix}$.
Hence, the initial distribution can decide whether the iteration converges and, if so, to which equilibrium distribution. There are also cases where the equilibrium distribution is unique and independent of the initial distribution.
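A quick numerical check of the oscillating example above (a minimal NumPy sketch):

```python
import numpy as np

# The permutation matrix from the example: the two states swap at every step.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

v = np.array([0.3, 0.7])
for i in range(4):
    print(i, v)                    # alternates between [0.3, 0.7] and [0.7, 0.3]
    v = v @ A                      # v_{i+1}^T = v_i^T A

# The only starting distribution that is a fixed point is the uniform one.
u = np.array([0.5, 0.5])
print(np.allclose(u @ A, u))       # True
```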
Short answer when things are nice:
Let's consider the special case where $\lim_{n \rightarrow \infty} A^n =B$ exists.
Notice that $B$ has the property that $BA=B$.
Let's consider the vector $y^T=v_0^TB$.
Then $y^TA = v_0^TBA = v_0^TB = y^T$.
Hence $y$ is a left eigenvector of $A$ with eigenvalue $1$. Each row of $B$ is a probability distribution, and $v_0$ determines the weights of the convex combination of those rows. In the event that all rows of $B$ are identical, the limit is independent of the choice of $v_0$. An ergodic unichain guarantees that all rows of $B$ are identical.
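A small sketch of this "nice" case (NumPy; the $3$-state chain below is made up for illustration): for an ergodic unichain, the rows of $A^n$ all converge to the same stationary distribution, so $v_0^T A^n$ converges to it regardless of $v_0$.

```python
import numpy as np

# An aperiodic, irreducible (ergodic) chain on 3 states.
A = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])

B = np.linalg.matrix_power(A, 100)    # numerically, B is close to lim A^n
print(B)                              # all rows are (nearly) identical

pi = B[0]                             # the common row is the stationary distribution
print(np.allclose(pi @ A, pi))        # True: pi^T A = pi^T

# Any initial distribution v0 gives v0^T B = pi^T: the limit is independent of v0.
for v0 in (np.array([1.0, 0.0, 0.0]), np.array([0.2, 0.5, 0.3])):
    print(np.allclose(v0 @ B, pi))    # True
```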