McNemar's test solves this problem. (Thanks to Glen_b for mentioning this!) It is intended for paired data, where the observations are boolean -- a perfect fit. It is also easy to compute, which is convenient.
See also Paired t-test for binary data for another instance of a closely related statistical hypothesis testing problem.
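As a minimal sketch of how easy the computation is (the data below are made up for illustration, not from the original question), base R's mcnemar.test accepts the 2x2 table of paired boolean outcomes:

# hypothetical paired boolean outcomes for two tests on the same 20 subjects
set.seed(6)
testA <- rbinom(20, 1, 0.6)
testB <- rbinom(20, 1, 0.6)
# McNemar's test uses the 2x2 table of paired outcomes; it only looks at
# the discordant pairs, i.e. subjects on which the two tests disagree
tab <- table(factor(testA, levels = 0:1), factor(testB, levels = 0:1))
mcnemar.test(tab)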
The correlation coefficient and the paired t-test are getting at different things, so the two tests need not agree on statistical significance. Consider the following four scenarios, coded in R.
# same mean, no correlation
# t.test Not significant
# cor.test Not significant
options(scipen = 99)
set.seed(1)
s1 <- rnorm(20,0,1)
s2 <- rnorm(20,0,1)
t.test(s1, s2, paired = TRUE)$p.value
cor.test(s1, s2)$p.value
# different means, no correlation
# t.test Significant
# cor.test Not significant
set.seed(2)
s1 <- rnorm(20,0,1)
s2 <- rnorm(20,2,1)
t.test(s1, s2, paired = TRUE)$p.value
cor.test(s1, s2)$p.value
# different means, high correlation
# t.test Significant
# cor.test Significant
set.seed(3)
s1 <- rnorm(20)
s2 <- s1+2+rnorm(20,0,0.5)
t.test(s1, s2, paired = TRUE)$p.value
cor.test(s1, s2)$p.value
# same means, high correlation
# t.test Not significant
# cor.test Significant
set.seed(4)
s1 <- rnorm(20)
s2 <- s1+rnorm(20,0,0.5)
t.test(s1, s2, paired = TRUE)$p.value
cor.test(s1, s2)$p.value
Not seeing a significant correlation between your two tests may be a sign that the measurement error of your tests is high for your context. You want the standard deviations of your samples to be close to what you would see in practice, and they need to be much greater than your measurement error if you hope to detect a correlation in only 20 samples. Consider this final example, where the measurement error is high in sample 2.
# same means, low correlation because
# high measurement error in sample 2
# t.test Not significant
# cor.test Not significant
set.seed(5)
s1 <- rnorm(20)
s2 <- s1+rnorm(20,0,3)
t.test(s1, s2, paired = TRUE)$p.value
cor.test(s1, s2)$p.value
Best Answer
You seem to be conflating the terms paired and pairwise. t-tests are pairwise when you have multiple groups and compare them to each other two at a time. They are paired when the measurements in the groups are matched, i.e. taken on the same sample points (within subjects). In your case the tests are both paired and pairwise, but the concepts are different.
There are within-subject ANOVAs that treat the data as paired. I'm not an expert here, since I can't use them in my own experiments. In general, you should know that ANOVA plus post-hoc t-tests are often bundled for convenience, not for binding statistical reasons. The ANOVA itself cannot tell you very much: only whether or not there are significant differences somewhere among the groups. If you want information about specific pairs of groups, look at the t-tests.
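For illustration only (the column names and data here are hypothetical, not from your experiment), a one-way repeated-measures ANOVA can be written in base R with an Error() term so that subject is treated as the blocking factor:

# hypothetical long-format data: 20 subjects measured under 3 conditions
set.seed(8)
long <- data.frame(
  subject   = factor(rep(1:20, times = 3)),
  condition = factor(rep(c("c1", "c2", "c3"), each = 20)),
  value     = rnorm(60)
)
# Error(subject/condition) makes this a within-subject ANOVA; it only tells
# you whether condition matters at all, not which pairs of conditions differ
summary(aov(value ~ condition + Error(subject/condition), data = long))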
Don't compute the differences before handing the data to an ANOVA, or you will be testing for equality of differences when you want to test for equality of the measures themselves. (Unless the SAS documentation explicitly says to do so, which I don't believe it does.)
For the post-hoc tests, just subtract the two columns and enter the difference as one column into the t-test. In addition, because you are doing pairwise t-tests, you also need to control the family-wise error rate with a Bonferroni correction or Holm's method. Those two things are independent of each other, and the t-tests are also independent of the ANOVA. So if you want to do the ANOVA and the t-tests, better not use Tukey's tests, as you will lose power. Even if they come with the ANOVA (which they shouldn't with a within-subjects ANOVA), you can simply throw them out and do your own.
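A minimal sketch of that idea in R, assuming a hypothetical data frame dat with three within-subject conditions c1, c2, c3 (one row per subject; the names and data are made up for illustration):

set.seed(7)
dat <- data.frame(c1 = rnorm(20), c2 = rnorm(20, 0.5), c3 = rnorm(20, 1))
# each paired t-test is just a one-sample t-test on the column of differences
p <- c(
  c1_vs_c2 = t.test(dat$c1 - dat$c2)$p.value,
  c1_vs_c3 = t.test(dat$c1 - dat$c3)$p.value,
  c2_vs_c3 = t.test(dat$c2 - dat$c3)$p.value
)
# control the family-wise error rate across the three comparisons
p.adjust(p, method = "holm")   # or method = "bonferroni"

Holm's method is uniformly at least as powerful as Bonferroni while still controlling the family-wise error rate, which is why it is usually the safer default of the two.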