It seems to me that the time-series are loading on a common trend somehow, and this is generating this behavior in the correlations.
If your time series A and B share a common trend, their levels will be highly correlated (less so as the variance of each series' noise term grows). However, once you difference the series, the only correlation that remains is in the noise terms, which will be low if the noise terms are iid. This should explain (1). Some simple R code shows what I mean:
x <- seq(1,10,length.out=1000) + rnorm(1000)
y <- seq(1,10,length.out=1000) + rnorm(1000)
cor(x,y) # 0.8664099
dx <- diff(x)
dy <- diff(y)
cor(dx,dy) #-0.005354925
(2) is a little harder (and maybe far-fetched, but I don't know your data; all I am saying is that this is a plausible explanation, and its likelihood depends on your data).
If D and C load on a trend with opposing signs but load on a correlated noise term with the same sign, then this pattern could be observed as well. A more realistic setup would also give each series its own iid noise term. This would generate high correlation in returns, due to the correlated noise term, and a negative correlation in levels, due to the trends with opposing signs. More code to exemplify:
rnd <- rnorm(1000)
z <- -seq(1,10,length.out=1000) + rnd
w <- seq(1,10,length.out=1000) + 0.5*rnd + rnorm(1000)
cor(z,w) # -0.8968412
dw <- diff(w)
dz <- diff(z)
cor(dw,dz) # 0.4667939
You asked for an explanation, and I gave an example of how it can happen. I'm not sure it answers your question, but I hope it helps you pin down what may be the case.
To stir the pot a little, I suggest that it primarily means that one too many correlation coefficients was estimated. It is better to choose a measure based on statistical principles and stick with it. Unless one has prior evidence strongly suggesting linearity, and some confidence that extreme values that would distort the result have a very small chance of being sampled, the default position would be to use Spearman's $\rho$. It is resistant to extreme values and is efficient under non-linearity as long as the relationship is monotonic (doesn't go up then back down, or down then back up). $\rho$ quantifies the degree to which Y goes up (or down) as X goes up. To top it off, were normality to actually hold, $\rho$ is $\frac{3}{\pi}$ as efficient as Pearson's $r$. A loss of 0.05 in efficiency under ideal conditions for $r$ is a small price to pay for $\rho$ having much higher efficiency than $r$ under non-normality in many cases.
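To illustrate the point about resistance to extreme values (this example is mine, with made-up data, not part of the answer above): a single extreme point can swing Pearson's $r$ wildly, while Spearman's $\rho$, computed on ranks, barely moves.

```r
# One outlier distorts Pearson's r but not Spearman's rho.
set.seed(42)
x <- rnorm(50)
y <- x + rnorm(50, sd = 0.5)            # roughly linear relationship

x_out <- c(x, 10)                       # add one extreme, discordant point
y_out <- c(y, -10)

cor(x, y, method = "pearson")           # high on the clean data
cor(x_out, y_out, method = "pearson")   # pulled down sharply by one point
cor(x, y, method = "spearman")          # high on the clean data
cor(x_out, y_out, method = "spearman")  # changes only slightly
```

The outlier occupies the top rank of x and the bottom rank of y, so it contributes only one large rank discrepancy to $\rho$, whereas its raw coordinates dominate the products in $r$.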
Best Answer
It is usually a test of whether one can infer that the "true" (population) correlation is non-zero. $$ \begin{align} H_0&: \textrm{The two variables are uncorrelated. } &(\rho = 0) \\ H_a&: \textrm{The two variables are correlated. } &(\rho \ne 0) \\ \end{align} $$
One generally only has access to a sample of values from the two variables of interest. You might imagine that it's easy to infer a strong correlation between two variables from a small sample, but more data is required to determine whether an apparent relationship is a weak correlation or just noise. The formula for the test statistic backs up this intuition: it's a function of the sample size ($n$) and the sample correlation ($r$). One way to test this is via the t distribution. You compute:
$$t^* \approx\dfrac{r\sqrt{n-2}}{\sqrt{1-r^2}}$$
then use the $t_{n-2}$ distribution to convert this into a $p$-value: the probability of seeing a sample correlation at least as large in magnitude as yours if the population correlation were zero. Other approaches use a slightly different "exact" formula, which is again only a function of $r$ and $n$ and can be interpreted in the same way.
Bear in mind that this really tells you what you can claim, based on a sample: a large $p$-value does not necessarily mean that the correlation is precisely zero, just that you can't say whether it is/isn't given your data.
This is what Matlab's `corr`, SciPy's `scipy.stats.mstats.pearsonr`, and R's `cor.test` report by default. There are, of course, other tests one can run on correlations (e.g., to compare two correlations), so check to make sure.
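As a sketch of that default test (my example, with made-up data): compute $t^*$ and the two-sided $p$-value by hand and compare them with what `cor.test` reports.

```r
# Manual t-test for a Pearson correlation, checked against cor.test.
set.seed(1)
x <- rnorm(30)
y <- 0.4 * x + rnorm(30)

n <- length(x)
r <- cor(x, y)

t_star <- r * sqrt(n - 2) / sqrt(1 - r^2)   # test statistic
p_val  <- 2 * pt(-abs(t_star), df = n - 2)  # two-sided p-value

ct <- cor.test(x, y)
c(manual_t = t_star, cor_test_t = unname(ct$statistic))
c(manual_p = p_val,  cor_test_p = ct$p.value)
```

The two sets of numbers should agree, since `cor.test` uses the same $t_{n-2}$ formula for a Pearson correlation.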