Here is a less garbled version of the Wikipedia definition. (Use TheBridge's correction for the definition of ${\cal F}_\tau$.)
The post-$\tau$ process $X_{\tau+\cdot}$ is defined on the event $\{\tau<\infty\}$ by
$$
X_{\tau+t}(\omega) = X_{\tau(\omega)+t}(\omega),\qquad t\ge 0,
$$
for $\omega\in\{\tau<\infty\}$. One way to state the strong Markov property is this: The conditional distribution of $X_{\tau+\cdot}$ given ${\cal F}_\tau$ is (a.s.) equal to the conditional distribution of
$X_{\tau+\cdot}$ given $\sigma\{X_\tau\}$, on the event $\{\tau<\infty\}$. More precisely,
$$
P[ X_{\tau+t}\in B|{\cal F}_\tau] = P[ X_{\tau+t}\in B|X_\tau],\qquad \hbox{almost surely on }\{\tau<\infty\},
$$
for all $t\ge 0$, and all measurable subsets $B$ of the state space of $X$.
This is equivalent to the statement that $X_{\tau+\cdot}$ and ${\cal F}_\tau$ are conditionally independent, given $X_\tau$:
$$
P[ F\cap \{X_{\tau+t}\in B\}|X_\tau] = P[ F|X_\tau]\cdot P[X_{\tau+t}\in B|X_\tau],\qquad \hbox{almost surely on }\{\tau<\infty\},
$$
for all $F\in{\cal F}_\tau$, all $t\ge 0$, and all measurable subsets $B$ of the state space.
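This identity can be checked numerically for a simple random walk, where the strong Markov property is known to hold. The sketch below (level, horizon, and trial count are arbitrary choices) takes $\tau$ to be the first hitting time of a level, and compares the empirical distribution of the post-$\tau$ increment $S_{\tau+h}-S_\tau$ within the same path against the exact binomial law of a fresh $h$-step walk:

```python
import random
from math import comb

random.seed(0)

N, TRIALS, LEVEL, HORIZON = 200, 100_000, 2, 4
counts = {}   # empirical distribution of S_{tau+HORIZON} - S_tau
kept = 0

for _ in range(TRIALS):
    steps = [random.choice([-1, 1]) for _ in range(N)]
    # tau = first time the walk hits LEVEL (a stopping time).
    s, tau = 0, None
    for t, x in enumerate(steps):
        s += x
        if s == LEVEL:
            tau = t + 1
            break
    if tau is not None and tau + HORIZON <= N:
        d = sum(steps[tau:tau + HORIZON])  # post-tau increment, same path
        counts[d] = counts.get(d, 0) + 1
        kept += 1

# Compare with the exact distribution of a fresh HORIZON-step walk.
for k in range(HORIZON + 1):
    d = 2 * k - HORIZON
    exact = comb(HORIZON, k) / 2 ** HORIZON
    print(d, round(counts.get(d, 0) / kept, 3), exact)
```

The empirical frequencies agree with the fresh-walk probabilities, reflecting that the post-$\tau$ increments are distributed like a new walk, independently of ${\cal F}_\tau$.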
An urn contains two red balls and one green ball.
One ball was drawn yesterday, one ball was drawn today, and the final ball
will be drawn tomorrow. All of the draws are "without replacement".
Suppose you know that today's ball was red, but you have no information
about yesterday's ball. The chance that tomorrow's ball will be red
is 1/2. That's because the only two draw orders consistent with this information are "r,r,g" and "g,r,r", and they are equally likely.
On the other hand, if you know that both today's and yesterday's balls were red, then you are guaranteed to get the green ball tomorrow.
This discrepancy shows that the probability distribution for tomorrow's color depends not only on the present value but also on information about the past, so the process of observed colors doesn't have the Markov property.
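The two conditional probabilities above can be verified by exhaustively enumerating the equally likely orderings of the three balls; a minimal sketch:

```python
from itertools import permutations
from fractions import Fraction

# All 3! equally likely orderings of the balls {r, r, g}
# (duplicates kept so each permutation has the same weight).
orders = list(permutations(["r", "r", "g"]))

# P(tomorrow red | today red): condition on the 2nd draw being red.
today_red = [o for o in orders if o[1] == "r"]
p1 = Fraction(sum(o[2] == "r" for o in today_red), len(today_red))

# P(tomorrow red | yesterday and today both red).
both_red = [o for o in orders if o[0] == "r" and o[1] == "r"]
p2 = Fraction(sum(o[2] == "r" for o in both_red), len(both_red))

print(p1)  # 1/2
print(p2)  # 0
```

Conditioning on the extra past information changes the answer from 1/2 to 0, which is exactly the failure of the Markov property.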
Update: For any random experiment, there can be several related processes, some of which have the Markov property and some of which don't.
For instance, if you change sampling "without replacement" to sampling "with replacement" in the urn experiment above, the process of observed colors will have the Markov property.
Another example: if $(X_n)$ is any stochastic process, you get a related Markov
process by considering the historical process defined by
$$H_n=(X_0,X_1,\dots ,X_n).$$ In this setup, the Markov property is trivially fulfilled
since the current state includes all the past history.
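As a toy illustration of the historical-process construction (using the urn experiment above as the underlying process $X$), one can build $H_n$ as a growing tuple; the next state $H_{n+1}$ is determined by $H_n$ and the new observation alone:

```python
import random
random.seed(1)

# One run of the urn experiment: a random ordering of {r, r, g}.
balls = ["r", "r", "g"]
random.shuffle(balls)
X = balls  # X_0, X_1, X_2

# Historical process H_n = (X_0, ..., X_n).
H = [tuple(X[: n + 1]) for n in range(len(X))]
print(H)
```

Each $H_{n+1}$ extends $H_n$ by one coordinate, so the conditional law of the future given $H_n$ already uses all available information, and the Markov property holds trivially.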
In the other direction, you can lose the Markov property by combining states, or
"lumping". An example that I used in this MO answer is to take a random walk $(S_n)$ on
the integers, and define $Y_n=1[S_n>0]$. If there is a long string of time points with $Y_n=1$, then it is quite likely that the random walk is nowhere near zero and that the
next value will also be 1. If you only know that the current value is 1, you are not
as confident that the next value will be 1. Intuitively, this is why $Y_n$ doesn't have
the Markov property.
For cases of lumping that preserve the Markov property, see this MSE answer.
Best Answer
An example of a Markov process without the strong Markov property is $X_t=\max\{t-T,0\}$, where $T$ is exponentially distributed.
For every fixed nonnegative $t$, conditionally on $\mathscr F_t^X$, the process $(X_{t+s})_{s\ge0}$ is distributed like $(X_s)_{s\ge0}$ on the event $[X_t=0]$ (by the memorylessness of $T$) and like $(X_t+s)_{s\ge0}$ on $[X_t>0]$, so $X$ is Markov. But $T$ is a stopping time with $\Omega=[X_T=0]$, and $(X_{T+s})_{s\ge0}=(s)_{s\ge0}$ is deterministic, hence not distributed like $(X_s)_{s\ge0}$; the strong Markov property fails at $T$.
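Both halves of this claim can be illustrated with a quick simulation (parameters arbitrary). At a fixed time $t_0$, on the event $[X_{t_0}=0]$ the residual $T-t_0$ is again Exp(1), so the post-$t_0$ process restarts like $X$; at the stopping time $T$ itself, no simulation is needed, since $X_{T+s}=s$ on every path:

```python
import random
random.seed(3)

# T ~ Exp(1); X_t = max(t - T, 0).
t0, trials = 1.0, 200_000
residuals = []
for _ in range(trials):
    T = random.expovariate(1.0)
    if T > t0:                    # i.e. X_{t0} = 0
        residuals.append(T - t0)  # residual waiting time after t0

mean_resid = sum(residuals) / len(residuals)
print(round(mean_resid, 2))  # close to 1.0, the Exp(1) mean (memorylessness)

# By contrast, X_{T+s} = s deterministically on every path, while a fresh
# copy of X started at 0 stays at 0 for an Exp(1) amount of time.
```

The sample mean of the residuals matches the Exp(1) mean, confirming the restart at fixed times; the deterministic linear growth after $T$ is what breaks the strong Markov property.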