Before implementing any kind of estimation or inference procedure to detect cross-correlations, you should know that stationary and non-stationary data "do not mix": it is usually misleading to look for any kind of association (correlation, co-movement, etc.) between them, even though such an association may in fact exist.
To look at a concrete example:
Assume the following two AR(1) time series, $z_t$ stationary, $y_t$ containing a unit root and hence non-stationary:
$$z_t = \gamma + \delta z_{t-1} + u_t,\qquad E(u_t)=0, \; Var(u_t) = \sigma^2_u, \; E(u_tu_s) = 0 \;\text{for } t\neq s,\; |\delta| <1,\; z_0=0$$
$$y_t = \alpha + y_{t-1} + \varepsilon_t,\qquad E(\varepsilon_t)=0, \; Var(\varepsilon_t) = \sigma^2_\varepsilon, \; E(\varepsilon_t\varepsilon_s) = 0 \;\text{for } t\neq s,\; y_0=0$$
Assume now that their white-noise disturbances are contemporaneously correlated, i.e. that $E(u_t\varepsilon_t) = v_{u\varepsilon} \neq 0$. Hence the series are not independent. What will the attempt to calculate their correlation give us?
By repeated substitution (and, for $z_t$, dropping terms in $\delta^t$, which are negligible when $t$ is large), the two series can be written:
$$ z_t = \frac {\gamma}{1-\delta} + \sum_{j=0}^{t-1}\delta^ju_{t-j} \Rightarrow E(z_t) = \frac {\gamma}{1-\delta}, \; Var(z_t) = \frac {\sigma^2_u}{1-\delta^2}$$
$$ y_t = \alpha t + \sum_{j=0}^{t-1}\varepsilon_{t-j} \Rightarrow E(y_t) = \alpha t, \; Var(y_t) = \sigma^2_{\varepsilon}t$$
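These moment results are easy to verify by Monte Carlo; the sketch below (with illustrative parameter values of my own choosing) simulates many independent random-walk paths and checks the mean and variance at a fixed $t$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, sigma_eps = 0.5, 2.0       # illustrative drift and shock scale
t, n_rep = 200, 20000             # horizon; number of Monte Carlo replications

# n_rep independent paths of y_t = alpha + y_{t-1} + eps_t with y_0 = 0
eps = rng.normal(0.0, sigma_eps, size=(n_rep, t))
y = np.cumsum(alpha + eps, axis=1)
y_t = y[:, -1]                    # each path's value at time t

print(y_t.mean())   # should be close to alpha * t = 100
print(y_t.var())    # should be close to sigma_eps**2 * t = 800
```

The variance of $y_t$ grows linearly in $t$, which is exactly what makes the series non-stationary.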
The contemporaneous correlation coefficient between the two is
$$ \rho(z_t,y_t) = \frac{Cov(z_t,y_t)}{\sigma_{z_t}\,\sigma_{y_t}} $$
We have $$ Cov(z_t,y_t) = E(z_ty_t) - E(z_t)E(y_t) $$
$$= E\Big[\Big(\frac {\gamma}{1-\delta} + \sum_{j=0}^{t-1}\delta^ju_{t-j}\Big)\Big(\alpha t + \sum_{j=0}^{t-1}\varepsilon_{t-j}\Big)\Big] - \frac {\gamma}{1-\delta}\alpha t $$
$$ = E\Big(\sum_{j=0}^{t-1}\delta^ju_{t-j}\sum_{j=0}^{t-1}\varepsilon_{t-j}\Big) = \sum_{j=0}^{t-1}\delta^jE\big(u_{t-j}\varepsilon_{t-j}\big) = v_{u\varepsilon}\sum_{j=0}^{t-1}\delta^j \approx \frac {v_{u\varepsilon}}{1-\delta}$$
(the cross terms vanish because the disturbances are only contemporaneously correlated, and $\sum_{j=0}^{t-1}\delta^j \to \frac1{1-\delta}$ for large $t$).
Therefore
$$\rho(z_t,y_t) = \frac{\frac {v_{u\varepsilon}}{1-\delta}}{\Big(\frac {\sigma^2_u}{1-\delta^2}\sigma^2_{\varepsilon}t\Big)^{\frac12}} = \frac{\sqrt{1-\delta^2}}{1-\delta} \frac{v_{u\varepsilon}}{\sigma_u\sigma_{\varepsilon}}\frac{1}{\sqrt{t}}$$
We see that the magnitude of the theoretical correlation coefficient is monotonically decreasing in time, i.e. it is itself non-stationary, taking a different value at each and every point in your sample. So any attempt to estimate it from the available sample will give some "average" correlation coefficient which, moreover, will only hold for the specific time period that the sample covers. And being the average of a decreasing non-linear function of time, it will be difficult to interpret meaningfully.
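A quick Monte Carlo check of the $1/\sqrt{t}$ decay (a sketch with illustrative parameters; I set $\sigma_u=\sigma_\varepsilon=1$ so that $v_{u\varepsilon}$ equals the shock correlation, and $\gamma=\alpha=0$, neither of which affects the correlation):

```python
import numpy as np

rng = np.random.default_rng(42)
delta, rho_ue = 0.5, 0.8          # AR coefficient; Corr(u_t, eps_t) = v_ue here
n_rep, T = 10000, 400             # Monte Carlo replications; time horizon

# Contemporaneously correlated standard-normal shock pairs (u_t, eps_t)
e1 = rng.standard_normal((n_rep, T))
e2 = rng.standard_normal((n_rep, T))
u = e1
eps = rho_ue * e1 + np.sqrt(1 - rho_ue**2) * e2

# z_t: stationary AR(1); y_t: driftless random walk; z_0 = y_0 = 0
z = np.zeros((n_rep, T + 1))
for t in range(1, T + 1):
    z[:, t] = delta * z[:, t - 1] + u[:, t - 1]
y = np.concatenate([np.zeros((n_rep, 1)), np.cumsum(eps, axis=1)], axis=1)

def theory(t):
    # rho(z_t, y_t) = sqrt(1 - delta^2)/(1 - delta) * rho_ue / sqrt(t)
    return np.sqrt(1 - delta**2) / (1 - delta) * rho_ue / np.sqrt(t)

for t in (25, 100, 400):
    sample = np.corrcoef(z[:, t], y[:, t])[0, 1]   # cross-sectional correlation at time t
    print(t, round(sample, 3), round(theory(t), 3))
```

The sample correlations across replications track the theoretical curve and shrink toward zero as $t$ grows, even though the shocks remain strongly correlated throughout.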
The point of all this algebra is that you have to read about stationary and non-stationary series in order to see whether you can extract a meaningful conclusion from studying them together statistically. At the very least, examine whether they are cointegrated.
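For a ready-made cointegration test, `statsmodels.tsa.stattools.coint` implements the Engle–Granger procedure. The intuition behind its first step can be sketched with plain numpy (simulated data and parameter values below are illustrative): two I(1) series that share a stochastic trend have a *stationary* linear combination, which the cointegrating regression recovers.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
common = np.cumsum(rng.normal(size=T))          # shared stochastic trend (random walk)

# x and w are each I(1), but they load on the same trend, so they are cointegrated
x = common + rng.normal(scale=0.5, size=T)
w = 2.0 * common + rng.normal(scale=0.5, size=T)

# Engle-Granger first step: regress x on w (with an intercept)
design = np.column_stack([np.ones(T), w])
beta, *_ = np.linalg.lstsq(design, x, rcond=None)
resid = x - design @ beta

# The residual is stationary (bounded variance), unlike the raw I(1) series,
# whose sample variance grows with the length of the series.
print(np.var(resid))   # small and stable
print(np.var(x))       # orders of magnitude larger
```

In the full procedure one would then run a unit-root test on `resid` against Engle–Granger critical values, which is exactly what `coint` automates.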
Best Answer
I am not surprised by these results; I have gotten them very often. The KPSS test is, for some reason, very sensitive, if not overly so: it rejects stationarity for the vast majority of variables. In other words, it diagnoses almost everything as non-stationary. Because of that, I stopped using the KPSS test for stationarity diagnostics and relied on the other tests, which seemed fairer and more accurate on this issue. The two other tests you use (PP and ADF) generate far more reliable results on this count. Also, you can tell visually that your variable appears pretty stationary.
I am revising my answer from 2017. The KPSS test, after all, does not reject stationarity as often as I thought it did. If you run the test in R, it almost always gives you a p-value of 0.1, which makes it seem marginal to accept the null hypothesis that the variable is stationary. But underneath the calculation you see a warning printed in red that states "... p-value is greater than the printed value shown." It means that whether the true p-value is 0.11 or 0.99, the test output will show it as 0.1. In other words, the KPSS test does not reject stationarity nearly as often as I thought it did, and I now pretty much use it again all the time in such circumstances. It is nice to run a stationarity test whose null runs in the opposite direction of the others: its null is that the variable is stationary, instead of non-stationary as in all the other tests.
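The same capping occurs in Python's `statsmodels`, whose `kpss` function raises an `InterpolationWarning` when the statistic falls outside its lookup table. The mechanism is simply table interpolation; the sketch below (using the standard KPSS level-stationarity critical values from Kwiatkowski et al. 1992) shows why the reported p-value saturates at 0.1:

```python
import numpy as np

# Standard KPSS critical values for the level-stationarity null
p_values  = np.array([0.10, 0.05, 0.025, 0.01])
crit_vals = np.array([0.347, 0.463, 0.574, 0.739])

def kpss_pvalue(stat):
    """Interpolate a p-value from the table; outside the table it is capped."""
    # np.interp clamps to the endpoint values for stat outside [0.347, 0.739]
    return float(np.interp(stat, crit_vals, p_values))

print(kpss_pvalue(0.50))   # interpolated between the 5% and 2.5% entries
print(kpss_pvalue(0.10))   # below the table -> reported as 0.10, true p could be 0.99
print(kpss_pvalue(2.00))   # above the table -> reported as 0.01
```

So a reported 0.1 only means "at least 0.1": evidence consistent with stationarity, not borderline evidence against it.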
This does not change the fact that different tests will often give you contradictory results. I typically run three tests, and if the variable passes 2 out of 3, I deem that adequate evidence that the variable is stationary.
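The 2-out-of-3 rule can be wrapped in a small helper. The sketch below is library-agnostic: each entry is a hypothetical `(test_name, says_stationary)` pair that you would fill in from ADF, PP, and KPSS output, remembering that KPSS's null is stationarity while the other two take non-stationarity as the null:

```python
def majority_stationary(results):
    """results: list of (test_name, says_stationary) pairs.

    For ADF/PP, says_stationary = (p_value < 0.05)   # reject the unit-root null
    For KPSS,   says_stationary = (p_value > 0.05)   # fail to reject stationarity
    """
    votes = sum(says for _, says in results)
    return votes >= 2, votes

# Hypothetical outcomes for one series:
results = [("ADF", True), ("PP", True), ("KPSS", False)]
decision, votes = majority_stationary(results)
print(decision, votes)   # True 2 -> treated as stationary under the 2-of-3 rule
```

The key detail is flipping the direction of the KPSS decision before counting votes, since its hypotheses are reversed relative to the unit-root tests.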
Last but not least, when I look at your time series graph, your variable appears stationary enough that it should not render any model you build with it "spurious."