I have a group of correlation coefficients (more than two). They all involve one common variable A, taking the form r_A1, r_A2, r_A3, …, r_Ak, where 1, 2, …, k denote the other variables; they all share the same sample size.
My question is: what statistic should I use when I want to know whether any one of these correlation coefficients differs from any of the others? I know that if there are only two dependent correlation coefficients, they can easily be compared with most statistical tools, but here I need comparisons among several correlations. I know that a Chi-square test can be used to test the equality of several correlation coefficients (see http://home.ubalt.edu/ntsbarsh/business-stat/otherapplets/MultiCorr.htm for an example), but to my knowledge that approach tests differences between INDEPENDENT correlation coefficients. So I am wondering whether there is an approach, analogous to Fisher's least significant difference, that can be used to make comparisons among several dependent correlation coefficients?
EDIT: Thanks @russ-lenth for your answer. In general, I found that the CIs computed from lsmeans are wider than those computed using Fisher's z method. Here's an example of the CIs that I get through the lsmeans function:
rep.meas      lsmean         SE df   lower.CL  upper.CL
M1        0.76914236 0.13325688 23  0.4934795 1.0448052
M2        0.82346705 0.11830361 23  0.5787374 1.0681967
M3        0.89294217 0.09386717 23  0.6987631 1.0871212
M4       -0.09985512 0.20747224 23 -0.5290441 0.3293339
M5        0.56183690 0.17249315 23  0.2050076 0.9186662
M6        0.79086279 0.12760947 23  0.5268825 1.0548431
M7        0.14667681 0.20625924 23 -0.2800029 0.5733566
Take M1, whose r = 0.769, as an example: the width of the CI from lsmeans is (1.0448 - 0.4935) = 0.5513. The width of the CI computed from Fisher's z is (0.8948 - 0.5302) = 0.3646, which is much smaller than the former. Is the difference between the widths of the two confidence intervals too large?
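For reference, the Fisher's-z interval quoted above can be computed as follows. This is a sketch: the post does not state the sample size, but n = 24 (inferred from df = 23) reproduces the quoted limits to rounding error.

```r
# Fisher's z confidence interval for a single correlation (sketch).
# n = 24 is an inference from df = 23 in the lsmeans output, not stated in the post.
r <- 0.76914236          # M1's correlation
n <- 24
z  <- atanh(r)           # Fisher's z transform of r
se <- 1 / sqrt(n - 3)    # large-sample SE on the z scale
ci <- tanh(z + c(-1, 1) * qnorm(0.975) * se)  # back-transform the limits
ci                       # approx. (0.530, 0.895)
diff(ci)                 # width, approx. 0.365
```

Note the two intervals are not estimating uncertainty the same way: the Fisher interval is computed on the z scale and back-transformed, while the lsmeans interval is a symmetric normal-theory interval on the raw coefficient scale, so some difference in width is expected.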
Best Answer
Well, here's an idea...
First standardize all of the variables (the model outputs as well as the reference variable).
Then fit a multivariate model with those standardized model outputs as a multivariate response variable (not predictors), and the common human-performance variable (standardized) as the predictor. Do not include an intercept, as it will be zero anyway. Then the regression coefficients will be equal to the correlation coefficients, due to the standardization, and their covariance matrix will be available; so you can estimate each pairwise difference and its standard error.
R example
This is using the `swiss` dataset provided with R. Here is the standardized dataset. Note the covariances are just the correlations:
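The original code chunk did not survive; a sketch of the standardization step (the object name `zswiss` is my own):

```r
# Standardize every column of the swiss data (mean 0, SD 1)
zswiss <- as.data.frame(scale(swiss))
# Covariances of standardized variables are just the correlations
zapsmall(cov(zswiss))
```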
Fit the multivariate model, looking at correlations with `Fertility`. Here are the coefficients and the variance-covariance matrix thereof:
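A sketch of the fit (object names are my own): the five non-`Fertility` columns form the multivariate response, there is no intercept, and because everything is standardized the single coefficient in each equation equals that variable's correlation with `Fertility`.

```r
zswiss <- as.data.frame(scale(swiss))  # standardized data, as above
# Multivariate regression of the other 5 variables on Fertility, no intercept
swiss.mlm <- lm(cbind(Agriculture, Examination, Education,
                      Catholic, Infant.Mortality) ~ 0 + Fertility,
                data = zswiss)
coef(swiss.mlm)  # equal to each variable's correlation with Fertility
vcov(swiss.mlm)  # 5 x 5 covariance matrix of those five estimates
```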
So to compare, say, the 2nd and 3rd correlations, here are the estimate and its SE:
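In sketch form, using a contrast vector against the fitted coefficients (setup repeated so the chunk runs on its own):

```r
zswiss <- as.data.frame(scale(swiss))
swiss.mlm <- lm(cbind(Agriculture, Examination, Education,
                      Catholic, Infant.Mortality) ~ 0 + Fertility,
                data = zswiss)
b <- as.vector(coef(swiss.mlm))   # the five correlations with Fertility
V <- vcov(swiss.mlm)              # their covariance matrix
k <- c(0, 1, -1, 0, 0)            # contrast: 2nd minus 3rd correlation
est <- sum(k * b)                 # estimated difference
se  <- sqrt(as.numeric(t(k) %*% V %*% k))  # its standard error
c(estimate = est, SE = se)
```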
I can trick the lsmeans package into doing it. `rep.meas` is the default name for the levels of the multivariate response. So far, `swiss.lsm` is just estimating the mean, $(0,0,0,0,0)$, but I'll change the linear function that it's using to be $1$ times each regression coefficient. Now, here is the summary:
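A sketch of that step. Replacing the `@linfct` slot with an identity matrix assumes the stacked mlm has exactly one coefficient per response (five in all), which is the case for this no-intercept, one-predictor model:

```r
library(lsmeans)  # the older package; emmeans is its successor
zswiss <- as.data.frame(scale(swiss))
swiss.mlm <- lm(cbind(Agriculture, Examination, Education,
                      Catholic, Infant.Mortality) ~ 0 + Fertility,
                data = zswiss)
swiss.lsm <- lsmeans(swiss.mlm, "rep.meas")
# At the mean of standardized Fertility (i.e. 0), this estimates (0,...,0).
# Replace the linear function with the identity so each "lsmean" becomes
# 1 times the corresponding regression coefficient, i.e. a correlation.
swiss.lsm@linfct <- diag(5)
summary(swiss.lsm, infer = c(TRUE, TRUE))
```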
and the pairwise comparisons:
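The pairwise comparisons then come from `pairs()`, which applies a multiplicity adjustment (Tukey by default). A sketch, repeating the setup so it runs on its own:

```r
library(lsmeans)
zswiss <- as.data.frame(scale(swiss))
swiss.mlm <- lm(cbind(Agriculture, Examination, Education,
                      Catholic, Infant.Mortality) ~ 0 + Fertility,
                data = zswiss)
swiss.lsm <- lsmeans(swiss.mlm, "rep.meas")
swiss.lsm@linfct <- diag(5)  # as above: estimate the 5 correlations
pairs(swiss.lsm)             # all 10 pairwise differences, Tukey-adjusted
```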
If I want to compare the absolute correlations, I just change the linear function:
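In sketch form, multiply each row of the linear function by the sign of the corresponding coefficient, so each estimate becomes $|r|$ (using `sign()` so it works whatever the signs turn out to be):

```r
library(lsmeans)
zswiss <- as.data.frame(scale(swiss))
swiss.mlm <- lm(cbind(Agriculture, Examination, Education,
                      Catholic, Infant.Mortality) ~ 0 + Fertility,
                data = zswiss)
swiss.lsm <- lsmeans(swiss.mlm, "rep.meas")
# Multiply each coefficient by its own sign, so the estimates are |r|
swiss.lsm@linfct <- diag(sign(as.vector(coef(swiss.mlm))))
summary(swiss.lsm, infer = c(TRUE, TRUE))
pairs(swiss.lsm)
```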