Why the big difference?
If your data are normally or uniformly distributed, I would expect Spearman's and Pearson's correlations to be fairly similar.
If they give very different results, as in your case (.65 versus .30), my guess is that you have skewed data or outliers, and that the outliers are leading Pearson's correlation to be larger than Spearman's. That is, a few very high values on X may co-occur with very high values on Y.
- @chl is spot on. Your first step should be to look at the scatter plot.
- In general, such a big difference between Pearson and Spearman is a red flag suggesting that
  - the Pearson correlation may not be a useful summary of the association between your two variables, or
  - you should transform one or both variables before using Pearson's correlation, or
  - you should remove or adjust outliers before using Pearson's correlation.
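As a hedged illustration of the transformation option (simulated log-normal data, not your data), a monotone transformation such as the log can bring Pearson back into line with Spearman:

```r
# Simulated skewed (log-normal) data: skew distorts Pearson's r, while
# Spearman's rho, being rank-based, is unaffected by monotone transforms.
set.seed(123)
z <- rnorm(500)
x <- exp(z)               # log-normal X
y <- exp(z + rnorm(500))  # log-normal Y, correlated with X on the log scale
cor(x, y, method = "pearson")            # attenuated by the skew
cor(x, y, method = "spearman")           # unchanged by monotone transforms
cor(log(x), log(y), method = "pearson")  # close to Spearman after the transform
```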
Related Questions
Also see these previous questions on differences between Spearman and Pearson's correlation:
Simple R Example
The following is a simple simulation of how this might occur.
Note that the case below involves a single outlier, but similar effects can be produced by multiple outliers or by skewed data.
```r
# Set seed of random number generator
set.seed(4444)

# First, create some normally distributed correlated data
x1 <- rnorm(200)
y1 <- rnorm(200) + .6 * x1

# Second, add a major outlier
x2 <- c(x1, 14)
y2 <- c(y1, 14)

# Plot both data sets, raw and as ranks
par(mfrow = c(2, 2))
plot(x1, y1, main = "Raw no outlier")
plot(x2, y2, main = "Raw with outlier")
plot(rank(x1), rank(y1), main = "Rank no outlier")
plot(rank(x2), rank(y2), main = "Rank with outlier")

# Calculate correlations on both data sets
round(cor(x1, y1, method = "pearson"), 2)
round(cor(x1, y1, method = "spearman"), 2)
round(cor(x2, y2, method = "pearson"), 2)
round(cor(x2, y2, method = "spearman"), 2)
```
Which gives this output:

```
[1] 0.44
[1] 0.44
[1] 0.7
[1] 0.44
```
The correlations show that without the outlier Spearman and Pearson are nearly identical (0.44), whereas with the single extreme outlier Pearson jumps to 0.70 while Spearman stays at 0.44.
The plot below shows how treating the data as ranks removes the outlier's extreme influence: Spearman is similar with and without the outlier, whereas Pearson changes substantially when the outlier is added.
This is why Spearman is often described as robust.
![Scatter plots of raw and ranked data, with and without the outlier](https://i.stack.imgur.com/4JBrS.png)
Spearman rank correlation is just Pearson correlation applied to ranks, a point often obscured by the emphasis in many books on the computational short-cut formula for Spearman. So I wouldn't rule out Fisher's z procedures for Spearman. One caution is that the sampling distribution of Spearman is irregular in detail and differs at least a bit from Pearson's, and that could bite hard with small sample sizes. But most things are problematic with small sample sizes. The caveat that everything hinges on the data being treated as ranks is already the caveat that applies to Spearman correlation.
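This identity is easy to verify directly in R (a minimal sketch with simulated data):

```r
# Spearman's rho is literally Pearson's r computed on the ranks
set.seed(1)
x <- rnorm(50)
y <- x + rnorm(50)
rho_direct    <- cor(x, y, method = "spearman")
rho_via_ranks <- cor(rank(x), rank(y), method = "pearson")
all.equal(rho_direct, rho_via_ranks)  # identical up to floating point
# Fisher's z is then just atanh() applied to the coefficient
atanh(rho_direct)
```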
I've got to suggest, however, that this line of enquiry may not prove very fruitful for you.
For a start, it is usually better to bring two groups into the same model if you can. Also, one rank correlation being stronger than another leaves the more important question of what the relationships and differences are, quantitatively, a bit in the background.
The data have to be really irregular for no transformation or link function (logarithm? square root? reciprocal?) to bring them into reasonable shape for some kind of ANOVA or a more general(ized) linear model. As it seems you have some kind of experiment, that would probably mesh better with your scientific objectives too.
(LATER) You did say "ordinal" and that is important. Much depends on what that means precisely. If ordinal means a five-point scale, something like an ordered logit or probit model may be appropriate. If ordinal means judgment-based scores of some kind, much depends on how they behave.
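If the outcome really is a five-point scale, an ordered logit can be sketched as follows (hypothetical simulated data, using `MASS::polr`, which ships with R):

```r
# Hypothetical five-point ordinal outcome generated from a latent
# logistic variable; fit a proportional-odds (ordered logit) model.
library(MASS)
set.seed(99)
x <- rnorm(300)
latent <- 1.2 * x + rlogis(300)
y <- cut(latent, breaks = c(-Inf, -2, -0.5, 0.5, 2, Inf),
         labels = 1:5, ordered_result = TRUE)
fit <- polr(y ~ x, method = "logistic")
coef(fit)  # estimate of the latent slope (true value 1.2 in this simulation)
```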
Best Answer
Pearson's r and Spearman's rho are both already effect size measures. Spearman's rho, for example, represents the degree of correlation after the data have been converted to ranks; it already captures the strength of the relationship.
People often square a correlation coefficient because it has a nice verbal interpretation as the proportion of shared variance. That said, there's nothing stopping you from interpreting the size of relationship in the metric of a straight correlation.
It does not seem to be customary to square Spearman's rho. That said, you could square it if you wanted to. It would then represent the proportion of shared variance in the two ranked variables.
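For instance (a minimal sketch with simulated data), the squared rho matches the R-squared from a regression on the ranks:

```r
# rho^2 equals the R^2 from regressing one ranked variable on the other
set.seed(7)
x <- rexp(100)   # skewed data, effectively no ties
y <- x + rexp(100)
rho <- cor(x, y, method = "spearman")
r2_ranks <- summary(lm(rank(y) ~ rank(x)))$r.squared
all.equal(rho^2, r2_ranks)  # identical up to floating point
```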
I wouldn't worry so much about normality and absolute precision on p-values. Think about whether Pearson or Spearman better captures the association of interest. As you already mentioned, see the discussion here on the implication of non-normality for the choice between Pearson's r and Spearman's rho.