Yes, Markov processes with infinitely many states are indeed considered. Random walks are a common example. The term "Markov chain" is often reserved for the case of a discrete state space. If the state space is finite, it's a "finite Markov chain". See e.g. http://www.statslab.cam.ac.uk/~james/Markov/
If $x=0$ then the chain moves up at every step, so it marches off to infinity; we'll therefore suppose that $x>0$. Then all the states have the same recurrence status, because all the states communicate.
Let's suppose first that the chain is positive recurrent. To find the stationary distribution $(\pi_i)_{i \in \mathbb{N}}$, note that for $i \neq 1$ the only way into state $i$ is from state $i-1$, so the probability that we are in state $i$ at time $n$ is just
$$\left( \frac{i-1}{i} \right)^x \mathbb{P}(\text{we are in state $i-1$ at time $n-1$})$$
That is, $\pi_i = \left(\frac{i-1}{i} \right)^x \pi_{i-1}$, so $\pi_i = \left( \frac{1}{i} \right)^x \pi_1$ for $i > 1$.
Since $\pi$ is a distribution, we have $$\pi_1 \left(1+\sum_{i=2}^{\infty} \left( \frac{1}{i} \right) ^x \right) = 1$$
This sum diverges for every $x \leq 1$ (it is $\sum_i i^{-x}$), so in that case there can't be a stationary distribution, and hence the chain can't be positive recurrent. If, on the other hand, $x > 1$, then the sum converges; so if $x>1$ and the chain is positive recurrent, we've found the stationary distribution.
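For $x>1$, normalizing gives $\pi_i = i^{-x}\big/\sum_{j\geq 1} j^{-x}$. Here is a small numerical sanity check of the balance relation and the normalization (a Python sketch; the truncation level is an arbitrary choice of mine):

```python
import math

# Truncated stationary distribution pi_i = i^{-x} / zeta(x) for x = 2
# (any x > 1 works; x = 2 is chosen because zeta(2) = pi^2/6 is known).
x = 2.0
N = 100_000                       # truncation level for the infinite sum
weights = [i ** -x for i in range(1, N + 1)]
Z = sum(weights)                  # approximates zeta(x)
pi = [w / Z for w in weights]

# Balance relation pi_i = ((i-1)/i)^x * pi_{i-1} derived above:
for i in range(2, 10):
    assert abs(pi[i - 1] - ((i - 1) / i) ** x * pi[i - 2]) < 1e-12

# For x = 2 the normalizing constant is zeta(2) = pi^2/6:
print(pi[0], 6 / math.pi ** 2)    # the two values should nearly agree
```

The truncation error in $Z$ is of order $N^{-1}$ here, which is plenty for a sanity check.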
Is the chain transient or null recurrent? Consider the state $1$. Starting from $1$, never returning to $1$ requires moving up at every step, which has probability $\left(\frac{1}{2} \right)^x \times \left( \frac{2}{3} \right)^x \times \dots$; the partial products telescope to $\left(\frac{1}{n+1}\right)^x$, which tends to $0$. So the chain returns to $1$ with probability one, and is therefore recurrent no matter what $x > 0$ is.
Hence if $x=0$, the chain is transient. If $0 < x \leq 1$, the chain is null recurrent. If $x > 1$, the chain is positive recurrent: an irreducible chain that admits a stationary distribution is positive recurrent, and we exhibited one above.
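This classification can be sanity-checked by simulation, assuming the dynamics described above (from state $i$ the chain moves to $i+1$ with probability $\left(\frac{i}{i+1}\right)^x$ and otherwise returns to $1$). For $x>1$ the mean return time to state $1$ should be $1/\pi_1 = \sum_{k\geq 1} k^{-x}$. A rough Monte Carlo sketch (the function name is mine):

```python
import random

def mean_return_time(x, trials=200_000, seed=0):
    """Estimate the mean return time to state 1 for the chain above:
    from state i, move to i+1 w.p. (i/(i+1))^x, else reset to 1."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        state, steps = 1, 0
        while True:
            steps += 1
            if rng.random() < (state / (state + 1)) ** x:
                state += 1          # move up
            else:
                break               # reset: we are back at state 1
        total += steps
    return total / trials

print(mean_return_time(3.0))  # should be near zeta(3) ≈ 1.202
```

For $0 < x \leq 1$ the same simulation still always terminates (recurrence), but the running average drifts upward without settling, as expected for null recurrence.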
Best Answer
There are different types of Markov chains, which you will be able to recognize from their transition matrices (and/or their transition graphs). A transition matrix has all entries between 0 and 1, and each of its columns adds up to one. I will discuss two of the most common types here. (Throughout, I use the convention that the transition matrix is column-stochastic and acts on distribution column vectors by multiplication, $x \mapsto Tx$.)
A regular Markov chain is one whose transition matrix $T$ satisfies: there exists some $n$ such that $T^{n}$ has all strictly positive entries. Regular Markov chains have the very nice property that there is a stable distribution $y$ with $T y = y$, and furthermore $\lim_{n\to\infty}T^n x = y$ regardless of the initial distribution $x$.
To find $y$, note that $Ty = y$ and that $y$ is a distribution, meaning the entries of $y$ sum to one. This gives the system of equations:
$\begin{cases} y_1 + y_2 + \dots + y_k = 1\\ T_{1,1}y_1 + T_{1,2}y_2 + \dots + T_{1,k}y_k = y_1 \\ \vdots \\ T_{k,1}y_1 + T_{k,2}y_2 + \dots + T_{k,k}y_k = y_k\end{cases}$
This is a system of $k+1$ equations in $k$ unknowns, and as it happens, one of the equations is redundant (the rows are not linearly independent, because of the stochastic property). Using your favorite method, solve the system to obtain the stable distribution $y$.
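As an illustration of this recipe, here is a sketch in Python with a made-up $2$-state column-stochastic matrix (the matrix and variable names are my own, not from the question); it stacks the normalization row on top of $T - I$ and solves by least squares:

```python
import numpy as np

# Made-up 2-state column-stochastic transition matrix:
T = np.array([[0.9, 0.5],
              [0.1, 0.5]])
k = T.shape[0]

# k+1 equations in k unknowns: sum(y) = 1 and (T - I) y = 0.
A = np.vstack([np.ones((1, k)), T - np.eye(k)])
b = np.concatenate([[1.0], np.zeros(k)])
y, *_ = np.linalg.lstsq(A, b, rcond=None)

print(y)        # the stable distribution (here 5/6 and 1/6)
print(T @ y)    # should reproduce y
```

Least squares is convenient here precisely because the system is overdetermined on paper but consistent in fact, so the exact solution is returned.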
Since every initial distribution (in particular, every point mass concentrated on a single state) converges to the stable distribution, the long-term matrix $\lim_{n\to\infty}T^n$ has every column equal to $y$.
The other type of Markov chain you are likely to see is an absorbing Markov chain. An absorbing Markov chain has an absorbing state: a state that, once entered, cannot be left. In the transition matrix, each absorbing state appears as a $1$ on the main diagonal (its column is a standard basis vector). The final distribution in these cases depends on the initial distribution; however, the transition matrix itself still has a long-term limit.
Reorganizing the matrix into its standard form by permuting rows and columns together (making sure that the states appear in the same order along the columns as along the rows), you will have a matrix of the form
$$T=\left[\begin{array}{c|c} I & S\\ \hline 0 & R\end{array}\right]$$
and the long term matrix will be:
$$\lim_{n\to\infty} T^n = \left[\begin{array}{c|c} I & S(I-R)^{-1}\\ \hline 0 & 0\end{array}\right]$$
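As a sanity check of this closed form, here is a sketch with a made-up absorbing chain in standard form, with one absorbing state and two transient states (the specific $S$ and $R$ are invented for illustration):

```python
import numpy as np

I1 = np.eye(1)                            # one absorbing state
S = np.array([[0.3, 0.2]])                # transient -> absorbing
R = np.array([[0.5, 0.4],
              [0.2, 0.4]])                # transient -> transient
T = np.block([[I1, S],
              [np.zeros((2, 1)), R]])     # column-stochastic standard form

# Closed-form long-term matrix from above:
limit = np.block([[I1, S @ np.linalg.inv(np.eye(2) - R)],
                  [np.zeros((2, 1)), np.zeros((2, 2))]])

print(np.linalg.matrix_power(T, 100))     # should be close to `limit`
print(limit)
```

With a single absorbing state, every column of $S(I-R)^{-1}$ sums to one, as it must: all transient mass is eventually absorbed.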
See examples of absorbing Markov chains in action in some of my previous answers: Chance of winning simple dice game and Sniper probability question.
For an example of a regular Markov chain, suppose we have a fleet of taxicabs and three major hubs in the city for them to travel between: midtown, downtown, and the airport. Each taxi transitions from one location to another, or stays put, once every 15 minutes.
Suppose our transition diagram (not reproduced here) gives the transition matrix:
$$\begin{bmatrix} .4 & 0 & .2\\ .3 & 0 & .7\\ .3 & 1 & .1\end{bmatrix}$$
This matrix has some zero entries, so we might be hesitant to think it falls under the first case; but a little arithmetic shows that $T^2$ already has all strictly positive entries, so $T$ is regular. (This is seen more easily from the transition diagram, graph-theoretically, by noting that there is a path of length $2$ from any state to any other state.)
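If you'd rather skip the arithmetic, the smallest power of $T$ with all strictly positive entries can be found mechanically (a quick sketch; `numpy` assumed available):

```python
import numpy as np

# Taxicab transition matrix transcribed from above:
T = np.array([[.4, 0, .2],
              [.3, 0, .7],
              [.3, 1, .1]])

# Find the smallest n with T^n strictly positive:
n, P = 1, T.copy()
while not (P > 0).all():
    n, P = n + 1, P @ T
print(n)  # → 2
```

The square already has strictly positive entries, so a fortiori every higher power (including $T^3$) does too.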
Thus, we solve the system:
$$\begin{cases} y_1+y_2+y_3 = 1\\ .4y_1 + 0 + .2y_3 = y_1\\ .3y_1 + 0 + .7y_3 = y_2\\ .3y_1 + 1y_2 + .1y_3 = y_3\end{cases}$$
Rearranging and setting up an augmented matrix to row-reduce, we arrive at:
$$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 1\\ -.6 & 0 & .2 & 0\\ .3 & -1 & .7 & 0\\ .3 & 1 & -.9 & 0\end{array}\right]$$
which row reduces to
$$\left[\begin{array}{ccc|c} 1 & 0 & 0 & .15625\\ 0 & 1 & 0 & .375\\ 0 & 0 & 1 & .46875\\ 0 & 0 & 0 & 0\end{array}\right]$$
Indeed, one can check that $Ty = y$ holds for this $y$.
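A quick numerical check of both the fixed-point property and the convergence claim (a Python sketch; `numpy` assumed available):

```python
import numpy as np

# Transition matrix and stable distribution from above:
T = np.array([[.4, 0, .2],
              [.3, 0, .7],
              [.3, 1, .1]])
y = np.array([.15625, .375, .46875])

print(T @ y)                            # reproduces y, so Ty = y
print(np.linalg.matrix_power(T, 200))   # every column approaches y
```

The subdominant eigenvalue of $T$ has modulus well below $1$, so by $n = 200$ the columns of $T^n$ agree with $y$ to many decimal places.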
Thus, after a "long time", about $15.6\%$ of the taxis will be at midtown, about $37.5\%$ will be at the airport, and about $46.9\%$ will be downtown.
Our long-term matrix thus has all three of its columns identical to $y$.