Alex R.'s answer is almost sufficient, but I'll add a few more details. In On the Markov Chain Central Limit Theorem by Galin L. Jones, Theorem 9 says:
If $X$ is a Harris ergodic Markov chain with stationary distribution
$\pi$, then a CLT holds for $f$ if $X$ is uniformly ergodic and
$E[f^2] < \infty$.
For finite state spaces, all irreducible and aperiodic Markov chains are uniformly ergodic. The proof of this requires considerable background in Markov chain theory; a good reference is Theorem 18, at the bottom of page 32, here.
Hence, the Markov chain CLT would hold for any function $f$ that has a finite second moment. The form the CLT takes is described as follows.
Let $\bar{f}_n$ be the time-averaged estimator of $E_{\pi}[f]$; then, as Alex R. points out, as $n \to \infty$,
$$\bar{f}_n = \frac{1}{n} \sum_{i=1}^n f(X_i) \overset{\text{a.s.}}{\to} E_\pi[f].$$
The Markov chain CLT is
$$\sqrt{n} (\bar{f}_n - E_\pi[f]) \overset{d}{\to} N(0, \sigma^2), $$
where
$$\sigma^2 = \underbrace{\operatorname{Var}_\pi(f(X_1))}_\text{Expected term} + \underbrace{2 \sum_{k=1}^\infty \operatorname{Cov}_\pi(f(X_1), f(X_{1+k}))}_\text{Term due to Markov chain}. $$
A derivation of the $\sigma^2$ term can be found on pages 8–9 of Charles Geyer's MCMC notes here.
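As a numerical sanity check (my own sketch, not from Jones's or Geyer's notes), the CLT variance can be verified on a hypothetical two-state chain, where the autocovariances decay geometrically and $\sigma^2$ has a closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state chain with P(0 -> 1) = a, P(1 -> 0) = b, and
# f(x) = x; none of these choices come from the post itself.
a, b = 0.3, 0.1
pi1 = a / (a + b)                       # stationary probability of state 1
lam = 1.0 - a - b                       # Cov_pi(X_1, X_{1+k}) = Var_pi(X_1) * lam**k

var_f = pi1 * (1 - pi1)                 # Var_pi(f(X_1))
sigma2 = var_f * (1 + lam) / (1 - lam)  # geometric series in the formula above

def simulate(n):
    x = np.empty(n, dtype=np.int64)
    x[0] = rng.random() < pi1           # start in stationarity
    for t in range(1, n):
        p_one = a if x[t - 1] == 0 else 1.0 - b
        x[t] = rng.random() < p_one
    return x

# Monte Carlo check: Var[sqrt(n) * (f_bar_n - E_pi f)] should be close to sigma2,
# and clearly larger than the i.i.d. value var_f.
n, reps = 2000, 300
z = [np.sqrt(n) * (simulate(n).mean() - pi1) for _ in range(reps)]
print(round(float(np.var(z)), 3), sigma2)
```

With these numbers $\sigma^2 = 0.75$, four times the i.i.d. variance $0.1875$, which is the "term due to Markov chain" at work.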
I think you are mixing up independence and conditional independence.
The idea is not that $X_{n+1}$ is independent of $X_0, \dots, X_{n-1}$, but that it is conditionally independent of them given $X_n$.
This simply means that, once you know the value of $X_n$, the previous values $X_0, \dots, X_{n-1}$ are irrelevant for the distribution of $X_{n+1}$.
Taking your example, assume that $X_n = 5$. When we want to predict the value of $X_{n+1}$, knowing the values of $X_0, \dots, X_{n-1}$ or $Z_0, \dots, Z_{n-1}$ is completely irrelevant: nothing changes in the prediction whether $X_{n-1}=4$ or $X_{n-1}=5$, once you know that $X_n = 5$.
Now:
$$\begin{aligned}
\mathbb{P}(X_{n+1}=x_{n+1} \mid X_0=x_0,\dots,X_n=x_n)
&= \mathbb{P}\Big(\textstyle\sum_{i=0}^{n+1} Z_i = x_{n+1} \,\Big|\, X_0=x_0,\dots,X_n=x_n\Big) \\
&= \mathbb{P}(X_n + Z_{n+1} = x_{n+1} \mid X_0=x_0,\dots,X_n=x_n) \\
&= \mathbb{P}(Z_{n+1} = x_{n+1}-x_n \mid X_0=x_0,\dots,X_n=x_n) \\
&= \mathbb{P}(Z_{n+1} = x_{n+1}-x_n \mid X_n=x_n) \\
&= \mathbb{P}(X_{n+1} = x_{n+1} \mid X_n=x_n).
\end{aligned}$$
We were able to remove the conditioning on all $X_i$ with $i < n$ because the probability is only a function of $Z_{n+1}$ (which is independent of them, as a consequence of its independence from all earlier $Z_i$) and of $x_n$, which is a given value.
This gives you a Markov chain with infinitely many states (the non-negative integers), where every state has transition probability $p$ to the next state ($+1$) and $1-p$ to itself.
The same reasoning applies to $\Pi_n$. This chain has only two states, $\{0,1\}$. State $0$ is absorbing, so it has transition probability $1$ to itself, while state $1$ has probability $p$ of staying at itself and probability $1-p$ of moving to state $0$.
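Both chains are easy to check by simulation. A small sketch (with an arbitrary illustrative $p$, assuming $Z_i$ i.i.d. Bernoulli($p$)):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.4                              # illustrative value of P(Z_i = 1)

z = rng.random(200_000) < p          # i.i.d. Bernoulli(p) increments
x = np.cumsum(z)                     # X_n = Z_0 + ... + Z_n: the walk

steps = np.diff(x)                   # each step is +1 (prob p) or 0 (prob 1 - p)
print(round(float(steps.mean()), 3)) # empirical +1 frequency, close to p

pi = np.cumprod(z.astype(np.int64))  # Pi_n = Z_0 * ... * Z_n: the 0/1 chain
first_zero = int(np.argmax(pi == 0)) # index of the first visit to state 0
print(bool(pi[first_zero:].any()))   # False: once at 0, the chain never leaves
```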
Now, (3) is NOT a Markov chain.
$$\mathbb{P}(X_{n+1}=x_{n+1} \mid X_0=x_0,\dots,X_n=x_n) = \mathbb{P}(X_n + S_n + Z_{n+1} = x_{n+1} \mid X_0=x_0,\dots,X_n=x_n).$$
Here, however, we have no way of knowing the value of $S_n$ from $X_n$ alone. Knowing, for example, that $X_n=10$ would simply mean that $\sum_{i=0}^n S_i = 10$, but we cannot infer the value of $S_n$ without peeking at the previous "history". Therefore, the Markov property does not hold.
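This failure can be made concrete by brute-force enumeration. In the sketch below (my own example: $Z_i$ assumed i.i.d. Bernoulli($p$), $S_n=\sum_{i=0}^n Z_i$, $X_n=\sum_{i=0}^n S_i$), two histories leading to the same $X_2$ give different conditional distributions for $X_3$:

```python
from itertools import product
from collections import defaultdict

p = 0.4  # illustrative P(Z_i = 1); any 0 < p < 1 shows the effect

# Enumerate every path (Z_0, ..., Z_3) together with its probability.
dist = defaultdict(float)                     # (x1, x2, x3) -> probability
for zs in product([0, 1], repeat=4):
    prob = 1.0
    for z in zs:
        prob *= p if z else (1 - p)
    s = [sum(zs[:i + 1]) for i in range(4)]   # S_n = Z_0 + ... + Z_n
    x = [sum(s[:i + 1]) for i in range(4)]    # X_n = S_0 + ... + S_n
    dist[(x[1], x[2], x[3])] += prob

def cond(x3, x2, x1):
    """P(X_3 = x3 | X_2 = x2, X_1 = x1), computed by enumeration."""
    num = sum(pr for (a, b, c), pr in dist.items() if (a, b, c) == (x1, x2, x3))
    den = sum(pr for (a, b, _), pr in dist.items() if (a, b) == (x1, x2))
    return num / den

# Same present X_2 = 3, different pasts X_1 = 2 versus X_1 = 1:
print(cond(4, 3, 2))   # ≈ 0.6 (= 1 - p)
print(cond(4, 3, 1))   # 0.0
```

The history $X_1=2$ forces $S_2=1$ while $X_1=1$ forces $S_2=2$, so the past changes the prediction even though $X_2$ is the same: the chain is not Markov.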
(4), on the other hand, is a Markov Chain (I will use $(x,y)$ notation since now $X$ is multivariate).
$$\begin{aligned}
&\mathbb{P}(X_{n+1}=(x_{n+1},y_{n+1}) \mid X_0=(x_0,y_0),\dots,X_n=(x_n,y_n)) \\
={}& \mathbb{P}\Big(\big(S_{n+1},\textstyle\sum_{i=0}^{n+1} S_i\big)=(x_{n+1},y_{n+1}) \,\Big|\, X_0=(x_0,y_0),\dots,X_n=(x_n,y_n)\Big) \\
={}& \mathbb{P}\Big(\big(S_n+Z_{n+1},\textstyle\sum_{i=0}^{n} S_i+S_n+Z_{n+1}\big)=(x_{n+1},y_{n+1}) \,\Big|\, X_0=(x_0,y_0),\dots,X_n=(x_n,y_n)\Big) \\
={}& \mathbb{P}\big((x_n+Z_{n+1},\, y_n+x_n+Z_{n+1})=(x_{n+1},y_{n+1}) \,\big|\, X_0=(x_0,y_0),\dots,X_n=(x_n,y_n)\big).
\end{aligned}$$
Again, this is only a function of $Z_{n+1}$ (which is independent of previous $S_i$) and of $(x_n,y_n)$, and we can therefore remove conditioning on previous values as they are redundant.
$$\mathbb{P}\big((x_n+Z_{n+1},\, y_n+x_n+Z_{n+1})=(x_{n+1},y_{n+1}) \,\big|\, X_n=(x_n,y_n)\big) = \mathbb{P}(X_{n+1}=(x_{n+1},y_{n+1}) \mid X_n=(x_n,y_n)),$$
which makes the chain Markovian.
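One way to see this structurally: the next state is a deterministic function of the current state and a fresh, independent $Z_{n+1}$, which is exactly the recipe for a Markov chain. A minimal sketch of the transition (arbitrary $p$, my own framing):

```python
import random

random.seed(2)
p = 0.4                                # illustrative P(Z = 1)

def step(state):
    """One transition of the pair chain: state = (S_n, sum of S_0..S_n)."""
    x, y = state
    z = 1 if random.random() < p else 0
    return (x + z, y + x + z)          # uses only the current state and fresh noise

state = (0, 0)
for _ in range(10):
    state = step(state)
print(state)                           # some reachable state after 10 steps
```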
I hope the logical passages are clear enough, otherwise let me know!
Q1: The above statement means that the probability that $X_{n+1}$ equals some value $x$, given all the values that came before it in the sequence, equals the probability of the same event given only the immediately preceding value $X_n$. In other words, the distribution of $X_{n+1}$ depends on the past only through $X_n$: given $X_n$, $X_{n+1}$ is conditionally independent of all the earlier values.
Q2: By the answer to Q1, the values in the Markov chain are not independent of each other, since in general $P(X_{n+1} \mid X_n) \ne P(X_{n+1})$. After enough iterations, though, the chain (usually) converges in distribution, so the later values are approximately identically distributed, just not independent.
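To illustrate that last point with a hypothetical two-state transition matrix (my own example, not from the question), the distribution of $X_n$ forgets its starting point:

```python
import numpy as np

# Hypothetical transition matrix: P[i, j] = P(X_{n+1} = j | X_n = i)
P = np.array([[0.7, 0.3],
              [0.1, 0.9]])

d0 = np.array([1.0, 0.0])        # chain started in state 0
d1 = np.array([0.0, 1.0])        # chain started in state 1
for _ in range(100):             # push both distributions forward 100 steps
    d0, d1 = d0 @ P, d1 @ P

print(d0, d1)                    # both converge to the stationary (0.25, 0.75)
```

So $X_{100}$, $X_{101}$, ... are all (essentially) distributed as the stationary distribution, whichever state the chain started in, yet consecutive values remain dependent through $P$.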
That's all I know.