Obviously if $x=0$ then the chain marches off to infinity, so we'll suppose that $x>0$. Then all the states have the same recurrence status, because they all communicate.
Let's suppose first that the chain is positive recurrent. To find the stationary distribution $(\pi_i)_{i \in \mathbb{N}}$, note that the probability we are in state $i \not = 1$ at time $n$ is just
$$\left( \frac{i-1}{i} \right)^x \mathbb{P}(\text{we are in state $i-1$ at time $n-1$})$$
That is, $\pi_i = \left(\frac{i-1}{i} \right)^x \pi_{i-1}$, so $\pi_i = \left( \frac{1}{i} \right)^x \pi_1$ for $i > 1$.
Since $\pi$ is a distribution, we have $$\pi_1 \left(1+\sum_{i=2}^{\infty} \left( \frac{1}{i} \right) ^x \right) = 1$$
This sum (it is $\sum_i i^{-x}$) diverges whenever $x \leq 1$, so in that case there can't be a stationary distribution, and the chain can't be positive recurrent. If, on the other hand, $x > 1$, then the sum does converge, so if $x>1$ and the chain is positive recurrent, then we've found the stationary distribution: $\pi_i = i^{-x} \big/ \sum_{j=1}^{\infty} j^{-x}$.
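As a numerical sanity check of this stationary distribution (a sketch of my own; the function name `stationary` and the truncation level are mine, and the transition rule assumed is the one implicit above: from $i$ the chain climbs to $i+1$ with probability $(i/(i+1))^x$ and otherwise resets to $1$):

```python
from math import isclose

def stationary(x, N=200000):
    # pi_i proportional to i^{-x}, normalized by the truncated sum
    w = [i ** -x for i in range(1, N + 1)]
    Z = sum(w)
    return [wi / Z for wi in w]

x = 2.0
pi = stationary(x)

# the recursion pi_i = ((i-1)/i)^x * pi_{i-1} from the balance equations
for i in range(2, 10):
    assert isclose(pi[i - 1], ((i - 1) / i) ** x * pi[i - 2])

# flow into state 1: from state i the chain resets with prob 1 - (i/(i+1))^x;
# the sum telescopes, so it should equal pi_1 (up to truncation error)
flow = sum(pi[i - 1] * (1 - (i / (i + 1)) ** x) for i in range(1, len(pi) + 1))
print(flow, pi[0])
```

The telescoping in the last check ($i^{-x}\,(i/(i+1))^x = (i+1)^{-x}$) is exactly why this $\pi$ is stationary.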
Is the chain transient or null-recurrent? We examine the state $1$. Starting there, the probability that we never return to $1$ is $\left(\frac{1}{2} \right)^x \times \left( \frac{2}{3} \right)^x \times \dots$, whose partial products telescope to $\left(\frac{1}{n+1}\right)^x$, which tends to $0$. Therefore the chain is recurrent no matter what $x > 0$ is.
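The telescoping is easy to confirm numerically (my own sketch; `escape_prob` is a name I made up):

```python
# probability of surviving n consecutive upward steps from state 1
# without resetting: prod_{i=1}^{n} (i/(i+1))^x, which telescopes to (n+1)^{-x}
def escape_prob(x, n):
    p = 1.0
    for i in range(1, n + 1):
        p *= (i / (i + 1)) ** x
    return p

for x in (0.5, 1.0, 3.0):
    assert abs(escape_prob(x, 1000) - (1 / 1001) ** x) < 1e-12
```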
Hence if $x=0$, the chain is transient. If $0 < x \leq 1$, the chain is null-recurrent. If $x > 1$ it is positive recurrent, since an irreducible chain that admits a stationary distribution is automatically positive recurrent. I'm still working on proving that the chain actually does tend to the stationary distribution above; with that done, the result is proved.
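For what it's worth, a crude simulation (my own sketch, again assuming the transition rule "from $i$, climb to $i+1$ with probability $(i/(i+1))^x$, else reset to $1$") does show the long-run fraction of time at state $1$ matching $\pi_1$ when $x>1$:

```python
import random

def step(state, x, rng):
    # from state i, go to i+1 with probability (i/(i+1))^x, otherwise reset to 1
    if rng.random() < (state / (state + 1)) ** x:
        return state + 1
    return 1

def occupation_of_one(x, n_steps, seed=0):
    rng = random.Random(seed)
    state, hits = 1, 0
    for _ in range(n_steps):
        state = step(state, x, rng)
        hits += state == 1
    return hits / n_steps

x = 2.0
zeta_x = sum(i ** -x for i in range(1, 200000))  # truncated normalizer
print(occupation_of_one(x, 10 ** 6), 1 / zeta_x)  # empirical vs. pi_1 = 1/zeta(2)
```

For $x=2$, $\pi_1 = 6/\pi^2 \approx 0.608$, and the empirical frequency should land close to that.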
For a finite Markov chain, the nicest proof that I know of goes through under the weaker assumption that for any two states there is a common third state they can both reach with positive probability after some number of steps, together with aperiodicity.
(This proof involves starting one Markov chain from a stationary distribution, another one from an arbitrary state, and coupling them so that if they're ever in the same state, they continue to be in the same state. Then the probability that the two Markov chains get stuck together approaches $1$ with time, so the second one converges to the stationary distribution.)
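The coupling idea can be demonstrated on a toy chain (a sketch of my own; the $3\times 3$ matrix is made up, and for simplicity both copies start from fixed states rather than one from the stationary distribution — the point is only that the two copies meet with probability tending to $1$):

```python
import random

# a small irreducible, aperiodic chain on {0, 1, 2}; rows are transition probabilities
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]

def step(state, rng):
    # sample the next state by inverting the CDF of row P[state]
    u, acc = rng.random(), 0.0
    for j, p in enumerate(P[state]):
        acc += p
        if u < acc:
            return j
    return len(P) - 1

def coupling_time(a, b, rng):
    # run two copies independently until they meet; once they meet,
    # they would move together forever, so this is the coupling time
    t = 0
    while a != b:
        a, b = step(a, rng), step(b, rng)
        t += 1
    return t

rng = random.Random(1)
times = [coupling_time(0, 2, rng) for _ in range(10000)]
print(sum(times) / len(times))  # small: P(not yet coupled by time t) decays fast
```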
It's easy to see that both conditions are also necessary, so that answers the question for finite Markov chains.
For infinite Markov chains, this condition needs to be stronger for the same proof to work: that there exist $N, \epsilon$ such that for any two states, there is a common third state that they can both reach with probability $\epsilon$ after $N$ steps. We get this for free in the finite case, but it's not a good hypothesis to take in the infinite case: it's false for many perfectly well-behaved Markov chains.
Best Answer
The stationary distribution is given by $\pi_k := qp^{k-1}$ (in the non-degenerate case, that is, $pq \neq 0$). In this case, since the chain is irreducible, all the states are positive recurrent.
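One can verify this geometric distribution numerically (my own sketch; I'm assuming the chain in question moves from $k$ to $k+1$ with probability $p$ and resets to $1$ with probability $q = 1-p$, which is what makes $\pi_k = qp^{k-1}$ stationary):

```python
from math import isclose

p = 0.7
q = 1 - p
N = 2000  # truncation level; the geometric tail p^N is negligible here
pi = [q * p ** (k - 1) for k in range(1, N + 1)]

assert isclose(sum(pi), 1.0)
# balance at k >= 2: the only way into k is from k - 1, with probability p
for k in range(2, 20):
    assert isclose(pi[k - 1], p * pi[k - 2])
# balance at 1: every state resets to 1 with probability q, so the inflow is q = pi_1
assert isclose(sum(x * q for x in pi), pi[0])
```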