An urn contains two red balls and one green ball.
One ball was drawn yesterday, one ball was drawn today, and the final ball
will be drawn tomorrow. All of the draws are "without replacement".
Suppose you know that today's ball was red, but you have no information
about yesterday's ball. The chance that tomorrow's ball will be red
is 1/2. That's because, given that information, the only two possible outcome sequences for this random experiment are "r,r,g" and "g,r,r", they are equally likely, and tomorrow's ball is red in exactly one of them.
On the other hand, if you know that both today's and yesterday's balls were red, then tomorrow's ball is guaranteed to be green.
This discrepancy shows that the probability distribution for tomorrow's color depends not only on the present value but also on information about the past. The process of observed colors therefore doesn't have the Markov property.
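To double-check the two numbers above, one can enumerate the three equally likely draw orders directly. Here is a minimal Python sketch (the helper name `prob_tomorrow_red` is just an ad hoc choice for this illustration):

```python
from fractions import Fraction
from itertools import permutations

# The three equally likely draw orders (yesterday, today, tomorrow)
# of two red balls and one green ball, drawn without replacement.
orders = set(permutations(["r", "r", "g"]))   # {rrg, rgr, grr}

def prob_tomorrow_red(condition):
    """P(tomorrow's ball is red | the given condition on earlier draws)."""
    consistent = [o for o in orders if condition(o)]
    return Fraction(sum(o[2] == "r" for o in consistent), len(consistent))

print(prob_tomorrow_red(lambda o: o[1] == "r"))                  # 1/2
print(prob_tomorrow_red(lambda o: o[0] == "r" and o[1] == "r"))  # 0
```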
Update: For any random experiment, there can be several related processes, some of which have the Markov property and others that don't.
For instance, if you change sampling "without replacement" to sampling "with replacement" in the urn experiment above, the process of observed colors will have the Markov property.
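As a sanity check on the with-replacement claim, the same enumeration idea works (a sketch under the same conventions as above): with replacement each draw is an independent pick from the three balls, so every length-3 ball sequence is equally likely, and conditioning on more of the past no longer changes tomorrow's distribution.

```python
from fractions import Fraction
from itertools import product

# With replacement, all 3**3 = 27 ball sequences are equally likely.
# (The two red balls are distinct items, so colors appear with the
# right multiplicity even though some color-tuples repeat.)
seqs = list(product(["r", "r", "g"], repeat=3))

def p_red_tomorrow(condition):
    consistent = [s for s in seqs if condition(s)]
    return Fraction(sum(s[2] == "r" for s in consistent), len(consistent))

print(p_red_tomorrow(lambda s: s[1] == "r"))                  # 2/3
print(p_red_tomorrow(lambda s: s[0] == "r" and s[1] == "r"))  # 2/3
```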
Another example: if $(X_n)$ is any stochastic process, you get a related Markov
process by considering the historical process defined by
$$H_n=(X_0,X_1,\dots ,X_n).$$ In this setup, the Markov property is trivially fulfilled
since the current state includes all the past history.
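To spell this out (just unwinding definitions, with $A$ an arbitrary event for the next state): $H_n$ determines every earlier $H_k$, since each is a prefix of the tuple $H_n$, so conditioning on the whole history of $(H_n)$ is the same as conditioning on $H_n$ alone:
$$\mathsf P(H_{n+1}\in A\mid H_n,H_{n-1},\dots,H_0)=\mathsf P(H_{n+1}\in A\mid H_n).$$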
In the other direction, you can lose the Markov property by combining states, or
"lumping". An example that I used in this MO answer, is to take a random walk $(S_n)$ on
the integers, and define $Y_n=1[S_n>0]$. If there is a long string of time points with $Y_n=1$, then it is quite likely that the random walk is nowhere near zero and that the
next value will also be 1. If you only know that the current value is 1, you are not
as confident that the next value will be 1. Intuitively, this is why $Y_n$ doesn't have
the Markov property.
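Here is a rough Monte Carlo sketch of that intuition (the conditioning time $n=101$, the run lengths, and the trial count are arbitrary choices for this illustration; an odd time is used because at even times $S_n>0$ forces $S_n\ge 2$, so the next value of $Y$ would be 1 automatically):

```python
import random

random.seed(0)

def p_next_one(run_length, n=101, trials=100_000):
    """Monte Carlo estimate of P(Y_{n+1}=1 | Y_{n-run_length+1}=...=Y_n=1),
    where S is a simple symmetric random walk and Y_k = 1[S_k > 0]."""
    conditioned = successes = 0
    for _ in range(trials):
        s, y = 0, []
        for _ in range(n + 1):                 # generate Y_1, ..., Y_{n+1}
            s += random.choice((-1, 1))
            y.append(int(s > 0))
        if all(y[n - run_length : n]):         # y[k-1] holds Y_k
            conditioned += 1
            successes += y[n]                  # Y_{n+1}
    return successes / conditioned

print(p_next_one(1))    # condition on Y_n = 1 alone
print(p_next_one(20))   # condition on a run of 20 ones
```

The second estimate should come out visibly larger than the first: a long run of 1s signals that the walk is far from zero, which is extra information about the future that the single value $Y_n=1$ does not carry.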
For cases of lumping that preserve the Markov property, see this MSE answer.
You could have a process that looks at the history arbitrarily far back. For example, you could have a coin that respects the gambler's fallacy: after a streak of $n$ times coming down one side, it has a $\frac1{n+1}$ probability of landing on that side again, and a $\frac{n}{n+1}$ probability of landing on the other side.
Then no $k^{\text{th}}$ order Markov chain can properly describe it. For any $k$, we can consider two situations: a streak of length $k$ and a streak of length $k+1$. These give different probabilities ($\frac1{k+1}$ and $\frac1{k+2}$) that the streak continues, but a $k^{\text{th}}$ order Markov chain, which sees only the last $k$ outcomes, must treat them the same.
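A short simulation makes the failure concrete (the names `gamblers_coin` and `p_continue` are hypothetical helpers for this sketch). Empirically, the continuation frequency after a streak of exactly 2 should be near $\frac13$ and after a streak of exactly 3 near $\frac14$, even though a 2nd-order chain sees the identical two-flip history in both cases:

```python
import random

random.seed(0)

def gamblers_coin(flips):
    """Simulate the coin: after a streak of n equal outcomes, the
    streak continues with probability 1/(n+1)."""
    seq = [random.choice("HT")]
    streak = 1
    for _ in range(flips - 1):
        if random.random() < 1 / (streak + 1):
            seq.append(seq[-1])                      # streak continues
            streak += 1
        else:
            seq.append("T" if seq[-1] == "H" else "H")
            streak = 1
    return seq

def p_continue(seq, k):
    """Empirical frequency that the next flip equals the last one,
    given the current streak has length exactly k."""
    cont = total = 0
    for i in range(k, len(seq) - 1):
        if len(set(seq[i - k + 1 : i + 1])) == 1 and seq[i - k] != seq[i]:
            total += 1
            cont += seq[i + 1] == seq[i]
    return cont / total

seq = gamblers_coin(500_000)
print(p_continue(seq, 2))   # ~ 1/3
print(p_continue(seq, 3))   # ~ 1/4, yet both end in the same two flips
```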
Best Answer
The Markov property for a process $(X_n)$ is the statement
$$\mathsf P(X_{i+1}<x \mid X_i=x_i,X_{i-1}=x_{i-1},\ldots, X_0=x_0) ~=~\mathsf P(X_{i+1}<x\mid X_i=x_i).$$
For events $F,C,P$, the identity $\mathsf P(F\mid C, P)=\mathsf P(F\mid C)$ says that $F$ and $P$ are conditionally independent given $C$.
In this example, the event for the future state is $\{X_{i+1}<x\}$, the event for the current state is $\{X_i=x_i\}$, and the event for all prior states is $\{X_{i-1}=x_{i-1},\ldots,X_0=x_0\}$. So the Markov property says exactly that an event for a future state is conditionally independent of the values of all prior states, given the value of the current state.