I am not surprised by these results; I used to get them very often. The KPSS test is, for some reason, very sensitive, if not overly so: it rejects stationarity for the vast majority of variables. In other words, it diagnoses almost everything as non-stationary. Because of that, I stopped using the KPSS test for stationarity diagnostics and relied on the other tests, which seemed fairer and more accurate on this issue. The two other tests you use (PP and ADF) generate far more reliable results on this count. Also, you can tell visually that your variable appears pretty stationary.
I am revising my answer from 2017. The KPSS test does not, after all, reject stationarity as often as I thought it did. If you run the test in R, it almost always gives you a p-value of 0.1, which at first glance makes it look borderline whether you can accept the null hypothesis that the variable is stationary. But underneath the calculation you see a warning printed in red that states "... p-value greater than printed p-value." It means that whether the true p-value is 0.11 or 0.99, the test result will show it as 0.1. In other words, the KPSS test does not reject stationarity nearly as often as I thought it did, and I now use it again all the time in such circumstances. It is nice to run a stationarity test that runs in the opposite direction of the others: its null hypothesis is that the variable is stationary, instead of non-stationary as in all the other tests.
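For illustration, here is a minimal sketch of that behaviour, assuming the `tseries` package (whose `kpss.test` is the usual R implementation):

```r
# install.packages("tseries")   # if not already installed
library(tseries)

set.seed(42)
x <- rnorm(200)   # a clearly stationary series

kpss.test(x, null = "Level")
# The p-value is reported as 0.1, together with the warning
# "p-value greater than printed p-value": the true p-value is
# only known to lie somewhere above 0.1.
```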
This does not change the fact that different tests will often give you contradictory results. I typically run three tests, and if the variable passes two out of three, I deem that adequate evidence that the variable is stationary.
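A minimal sketch of that two-out-of-three rule, again assuming the `tseries` package; `looks_stationary` is a hypothetical helper name of mine, and the 0.05 cutoff is just a common convention:

```r
library(tseries)

# Majority vote across ADF and PP (null: unit root) and KPSS (null: stationary).
# looks_stationary() is a hypothetical helper, not a standard function.
looks_stationary <- function(x, alpha = 0.05) {
  votes <- c(
    adf  = suppressWarnings(adf.test(x)$p.value)  < alpha,   # reject unit root
    pp   = suppressWarnings(pp.test(x)$p.value)   < alpha,   # reject unit root
    kpss = suppressWarnings(kpss.test(x)$p.value) >= alpha   # keep stationarity
  )
  sum(votes) >= 2   # passes 2 out of 3 tests
}

set.seed(1)
looks_stationary(rnorm(200))           # TRUE: white noise
looks_stationary(cumsum(rnorm(200)))   # FALSE: random walk
```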
Last but not least, when I look at your time series graph, your variable appears stationary enough that it should not render any model you build with it "spurious."
No, roots outside the unit circle give asymptotic stationarity, not strict stationarity: For a standard ARMA time-series model, the recursive formula locks in the autocorrelation of the process, but it does not lock in the marginal/joint distribution of the process. To get the latter you also need to specify the marginal distribution at some point in time, and this specification then locks in the full joint distribution of the process. (Strictly, for an ARMA model, you need to specify the joint distribution of $p$ consecutive elements, where $p$ is the degree of the autoregressive characteristic polynomial.)
If all of the roots of an AR process are outside the unit circle (no unit roots and no explosive roots), and the error terms are IID, then the process will have mean-reverting behaviour and will converge asymptotically to a stationary distribution. This asymptotic convergence to a stationary distribution is a weaker property than strict stationarity. It is possible for such a process to be non-stationary by specifying an "anchoring distribution" that is not the asymptotic stationary distribution.
Stationarity for an AR(1) process: Consider a standard first-order auto-regressive process defined by the recursive equation:
$$\begin{matrix}
X_{t+1} = \mu + \alpha (X_t - \mu) + \varepsilon_t & & \varepsilon_t \sim \text{ IID N}(0, \sigma^2).
\end{matrix}$$
Suppose that $| \alpha | <1$ so that this has a single root $1/\alpha$ outside the unit circle. This model form defines a broad set of possible time-series processes --- i.e., all processes that obey the requisite recursive equation and have the specified noise distribution. However, with this recursive equation you have not yet specified an "anchoring distribution" for any particular point in the series, and so this presently encompasses some stationary and some non-stationary processes. Now, it can be shown that if $\sigma>0$ then (regardless of the anchoring distribution) this process has the asymptotic stationary distribution:
$$X_{\infty} \sim \text{N} \Big( \mu, \frac{\sigma^2}{1 - \alpha^2} \Big).$$
In order to form a strictly stationary process, you would need to impose the requirement that the marginal distribution at some point in time is equal to this asymptotic stationary distribution. Alternatively, if you want to specify a non-stationary process, you could impose a different marginal distribution at a particular time (perhaps with a different mean or variance, or maybe even a different distributional form).
Non-stationary AR(1) model (with root still outside the unit circle): Suppose you specify an "anchoring distribution" for the element $X_0$ that is still a normal distribution, but with some arbitrary mean and variance:
$$X_0 \sim \text{N} ( \mu_0, \sigma_0^2 ).$$
With this "anchoring distribution" it can be shown that the series of marginal distributions is:
$$X_t \sim \text{N} \Big( (1- \alpha^t) \mu + \alpha^t \mu_0, \alpha^{2t} \sigma_0^2 +\frac{1-\alpha^{2t}}{1-\alpha^2} \sigma^2 \Big).$$
For $\mu_0 \neq \mu$ this series has a non-stationary mean, though it converges to the asymptotic mean $\mu$. For $\sigma_0^2 \neq \sigma^2/(1-\alpha^2)$ this series has a non-stationary variance, though it converges to the asymptotic variance $\sigma^2 / (1-\alpha^2)$. In this case the process is non-stationary, but it gets closer and closer to being stationary the further you get from the prescribed "anchoring point" of the process.
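A small simulation sketch can check this series of marginal distributions; the parameter values below are arbitrary choices for illustration:

```r
set.seed(7)
mu <- 2; alpha <- 0.8; sigma <- 1   # AR(1) parameters, |alpha| < 1
mu0 <- 10; sigma0 <- 3              # anchoring distribution far from the stationary one
n_paths <- 10000; t_max <- 30

X <- matrix(NA_real_, n_paths, t_max + 1)
X[, 1] <- rnorm(n_paths, mu0, sigma0)   # X_0 ~ N(mu0, sigma0^2)
for (t in 1:t_max) {
  X[, t + 1] <- mu + alpha * (X[, t] - mu) + rnorm(n_paths, 0, sigma)
}

# Early on, the marginal mean and variance are far from the asymptotic values ...
mean(X[, 2]); var(X[, 2])   # ~ mu + alpha*(mu0 - mu) = 8.4; ~ alpha^2*sigma0^2 + sigma^2 = 6.76
# ... but by t = 30 they are close to the stationary limits:
mean(X[, t_max + 1])        # ~ mu = 2
var(X[, t_max + 1])         # ~ sigma^2 / (1 - alpha^2) = 2.78
```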
Best Answer
Here is an example of a non-stationary series that not even a white noise test can detect (let alone a Dickey-Fuller type test):
Yes, this might be surprising, but this is not white noise.
Most non-stationary counterexamples are based on a violation of the first two conditions of stationarity: deterministic trends (non-constant mean) or unit-root / heteroskedastic time series (non-constant variance). However, you can also have non-stationary processes that have constant mean and variance but violate the third condition: the autocovariance function (ACVF) $\operatorname{cov}(x_s, x_t)$ should be constant over time and a function of $|s-t|$ only.
The time series above is an example of such a series: it has zero mean and unit variance, but its ACVF depends on time. More precisely, the process above is a locally stationary MA(1) process with parameters chosen such that it becomes spurious white noise (see the references below): the parameter of the MA process $x_t = \varepsilon_t + \theta_1 \varepsilon_{t-1}$ changes over time,
$$\theta_1(u) = 0.5 - 1 \cdot u,$$
where $u = t/T$ is normalized time. The reason this looks like white noise (even though by mathematical definition it clearly isn't) is that the time-varying ACVF integrates out to zero over time. Since the sample ACVF converges to the average ACVF, the sample autocovariance (and autocorrelation (ACF)) will converge to a function that looks like that of white noise. So even a Ljung-Box test won't be able to detect this non-stationarity. The paper (disclaimer: I am the author) on Testing for white noise against locally stationary alternatives proposes an extension of Box-type tests to deal with such locally stationary processes.
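Here is a short sketch that generates such a spurious white noise series directly from the definition above (sample size and seed are arbitrary):

```r
set.seed(123)
TT <- 1000                  # sample size (T in the notation above)
u  <- (1:TT) / TT           # normalized time u = t/T
theta1 <- 0.5 - 1 * u       # time-varying MA(1) coefficient theta_1(u)

eps <- rnorm(TT + 1)
x <- eps[2:(TT + 1)] + theta1 * eps[1:TT]   # x_t = eps_t + theta_1(t/T) eps_{t-1}

acf(x)                                      # sample ACF looks like white noise
Box.test(x, lag = 10, type = "Ljung-Box")   # typically fails to reject
```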
For more R code and more details see also this blog post.
Update after mpiktas's comment:
It is true that this might look like a merely theoretically interesting case that is not seen in practice. I agree it is unlikely that you will see such spurious white noise directly in a real-world dataset, but you will see it in the residuals of almost any stationary model fit. Without going into too much theoretical detail, imagine a general time-varying model $\theta(u)$ with a time-varying covariance function $\gamma_{\theta}(k, u)$. If you fit a constant model $\widehat{\theta}$, then this estimate will be close to the time average of the true model $\theta(u)$; the residuals will then behave like a process with time-varying parameter close to $\theta(u) - \widehat{\theta}$, which by construction of $\widehat{\theta}$ integrates out to zero (approximately). See Goerg (2012) for details.
Let's look at an example.
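As a hedged sketch of this kind of fit, using base R's `treering` series as a stand-in (the exact tree ring dataset used in the original analysis is not shown here) and the `fracdiff` package:

```r
# install.packages("fracdiff")   # if not already installed
library(fracdiff)

data(treering)   # yearly tree-ring widths (base R dataset, used here as a stand-in)
fit <- fracdiff(treering, nar = 0, nma = 0)   # ARFIMA(0, d, 0): pure fractional noise
fit$d            # estimated memory parameter (the answer reports 0.23 for its data)
```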
So we fit fractional noise with parameter $\widehat{d} = 0.23$ (since $\widehat{d} < 0.5$, we think everything is fine and we have a stationary model). Let's check the residuals:
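A sketch of that check, using `fracdiff::diffseries` to fractionally difference the series at the fitted $\widehat{d}$ as an approximation to the model residuals:

```r
# Approximate the model residuals by fractionally differencing at the fitted d.
res <- diffseries(treering - mean(treering), d = fit$d)
acf(res)   # the residual ACF looks like that of white noise
```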
Looks good right? Well, the issue is that the residuals are spurious white noise. How do I know? First, I can test it
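(The paper's actual test is not reproduced here; as a crude, hypothetical stand-in for the idea, one can compare the sample autocorrelation across the two halves of the residual series, since under stationarity it should be roughly the same in both.)

```r
# Crude stand-in for a locally stationary white noise test (NOT the paper's
# actual procedure): under stationarity the lag-1 autocorrelation should be
# roughly equal in both halves of the sample; a marked difference is a red flag.
n <- length(res)
acf(res[1:(n %/% 2)],     lag.max = 1, plot = FALSE)$acf[2]   # first half
acf(res[(n %/% 2 + 1):n], lag.max = 1, plot = FALSE)$acf[2]   # second half
```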
And second, we know from the literature that the tree ring data is in fact locally stationary fractional noise: see Goerg (2012) and Ferreira, Olea, and Palma (2013).
This shows that my admittedly theoretical-looking example actually occurs in most real-world examples.