For a Markov chain with $N<\infty$ states, the set $I$ of invariant probability vectors is a non-empty simplex in ${\mathbb R}^N$ whose extreme points correspond to the recurrent classes of the chain. Thus, the vector is unique iff there is exactly one recurrent class; the transient states (if any) play absolutely no role (as in Jens's example). The set $I$ is a point, line segment, triangle, etc. exactly when there are one, two, three, etc. recurrent classes.
If the invariant vector $\pi$ is unique, then there is only one recurrent class and the chain will eventually end up there. The vector $\pi$ necessarily puts zero mass on all transient states. Letting $\phi_n$ be the law of $X_n$, as you say, we have $\phi_n\to \pi$ only if the recurrent class is aperiodic. However, in general we have Cesàro convergence:
$${1\over n}\sum_{j=1}^n \phi_j\to\pi.$$
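To illustrate the Cesàro convergence numerically, here is a small sketch (the three-state matrix below is a hypothetical example of my own, not from the question): state 0 is transient, and states 1 and 2 form a single recurrent class of period 2, so $\phi_n$ itself oscillates while its Cesàro averages converge to $\pi=(0,1/2,1/2)$.

```python
import numpy as np

# Hypothetical 3-state chain: state 0 is transient, states 1 and 2 form
# a single recurrent class of period 2, so phi_n oscillates forever.
P = np.array([
    [0.5, 0.5, 0.0],   # transient state 0 leaks into the recurrent class
    [0.0, 0.0, 1.0],   # states 1 and 2 alternate deterministically
    [0.0, 1.0, 0.0],
])

phi = np.array([1.0, 0.0, 0.0])   # start in the transient state
cesaro = np.zeros(3)
n = 10_000
for _ in range(n):
    phi = phi @ P
    cesaro += phi
cesaro /= n

print(phi)      # oscillates between roughly (0, 1/3, 2/3) and (0, 2/3, 1/3)
print(cesaro)   # close to (0, 0.5, 0.5), the unique invariant vector
```

Note that $\phi_n$ never settles down, yet the running average converges to the invariant vector, which puts zero mass on the transient state.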
A Markov chain on an infinite state space need not have any recurrent states, and may admit no invariant measure, finite or infinite, other than the zero measure. Consider the chain on the positive integers that jumps one step to the right at every time step.
More generally, a Markov chain with countable state space has invariant probability vectors iff it has positive recurrent classes. In that case, every invariant probability vector $\nu$ is a convex combination of the vectors $m_j$, where $m_j$ is the unique invariant probability concentrated on the positive recurrent class $j\in J$, i.e.,
$$\nu=\sum_{j\in J} c_j m_j,\qquad c_j\geq 0,\quad \sum_{j\in J}c_j=1.$$
This result is Corollary 3.23 in Wolfgang Woess's Denumerable Markov Chains.
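As a quick numerical check of this decomposition (using a hypothetical five-state matrix of my own, with two recurrent classes and one transient state), one can compute the invariant vector of each recurrent class separately and verify that every convex combination is invariant for the full chain:

```python
import numpy as np

# Hypothetical 5-state chain: {0,1} and {2,3} are two recurrent classes,
# state 4 is transient and leaks into both.
P = np.array([
    [0.3,  0.7,  0.0,  0.0,  0.0],
    [0.6,  0.4,  0.0,  0.0,  0.0],
    [0.0,  0.0,  0.2,  0.8,  0.0],
    [0.0,  0.0,  0.5,  0.5,  0.0],
    [0.25, 0.25, 0.25, 0.25, 0.0],
])

def invariant(Q):
    """Unique invariant probability of an irreducible stochastic matrix Q,
    found by solving pi Q = pi together with sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

m1 = np.zeros(5); m1[:2] = invariant(P[:2, :2])     # supported on {0,1}
m2 = np.zeros(5); m2[2:4] = invariant(P[2:4, 2:4])  # supported on {2,3}

# Any convex combination c*m1 + (1-c)*m2 is invariant for P:
for c in (0.0, 0.3, 1.0):
    nu = c * m1 + (1 - c) * m2
    assert np.allclose(nu @ P, nu)
```

Each $m_j$ puts zero mass outside its own class, so the set of invariant vectors is exactly the simplex spanned by the $m_j$.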
Indeed, this Markov chain is reducible, with two communicating classes: the class {1} is closed while the class {2,3} is not. Every stationary distribution of a Markov chain is concentrated on the closed communicating classes, and in the present case only state 1 belongs to a closed communicating class. This proves, without any computation, that the stationary distribution is unique and is the Dirac distribution on state 1.
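The transition matrix itself is not reproduced here, but any matrix with this class structure leads to the same conclusion; here is a hypothetical example one can check numerically:

```python
import numpy as np

# A hypothetical transition matrix matching the description: state 1 is
# absorbing (so its class {1} is closed), while states 2 and 3 communicate
# with each other but leak mass into state 1, so {2,3} is not closed.
# (Python indices 0, 1, 2 stand for states 1, 2, 3.)
P = np.array([
    [1.0, 0.0, 0.0],
    [0.2, 0.3, 0.5],
    [0.1, 0.6, 0.3],
])

# Solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi)   # ~ [1, 0, 0]: the Dirac distribution on state 1
```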
As already mentioned, once you have shuffled the deck the process is deterministic: when two trajectories meet, they continue along the same path. That said, having simulated the game on a computer, I am not so sure I would recommend it for showing off among friends.
With 52 cards taking values from 2 to 10 (each face card counting as 10), if two players start independently at a position chosen uniformly at random among the first 10 cards, the success rate for ending at the same card is only about 62%.
If you eliminate the three royal cards of each suit and count the ace as 1 (leaving 40 cards with values from 1 to 10), and always pick the first card as your initial choice, the success rate increases to about 76%.
If you want better odds, use two decks of cards from the start: the success rate becomes about 85% with two 52-card decks, and about 95% with two 40-card decks (values 1 to 10).
A rough estimate of the probability that two trajectories converge, based on a kind of Markov approximation, is given in the very nice article The Kruskal Count (link provided by @awkward, thanks!). In that article the royal cards count as 5 and the ace as 1, which yields a success rate of about 85%.
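For anyone who wants to reproduce such figures, here is a minimal simulation sketch, assuming the article's convention (royal cards count 5, ace counts 1, one 52-card deck) and both players starting uniformly at random among the first 10 cards:

```python
import random

def trajectory_end(values, start):
    """Follow the Kruskal count from 0-based position `start`:
    repeatedly jump forward by the value of the current card,
    stopping on the last card reachable without running off the deck."""
    pos = start
    while pos + values[pos] < len(values):
        pos += values[pos]
    return pos

def estimate_success(trials=20_000, royal_value=5, seed=1):
    # Deck of 52 cards: ace = 1, pip cards at face value, royals = royal_value.
    deck = [r if r <= 10 else royal_value for r in range(1, 14)] * 4
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        rng.shuffle(deck)
        a = trajectory_end(deck, rng.randrange(10))
        b = trajectory_end(deck, rng.randrange(10))
        hits += a == b
    return hits / trials

print(estimate_success())   # an estimate around 0.85 under these conventions
```

Lowering the value assigned to the royal cards shortens the average jump, which makes the two trajectories couple sooner and raises the success rate.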