Does the computation of an N x N correlation matrix for N unrelated variables require multiple comparisons correction for all the computed pairwise correlations (assuming each computed correlation is a 'comparison' in the sense of being a 'statistical test')? If so, would FWER be a more appropriate measure than Bonferroni, or do any assumptions about the variables have to be made before using FWER? For instance, does it make a difference in what correction should be used whether the variables are a-priori likely to be correlated (e.g. test scores from the same group of subjects) vs uncorrelated (e.g. each variable represents the test score of a different sample)?
Solved – Multiple comparisons for correlation matrix
correlation, multiple-comparisons, r
Related Solutions
This first part of my response won't address your two questions directly, since what I am suggesting departs from your correlational approach. If I understand you correctly, you have two blocks of variables, and they play an asymmetrical role in the sense that one is composed of response variables (performance on four cognitive tests) whereas the other includes explanatory variables (measures of blood flow at several locations). So a nice way to answer your question of interest would be to look at PLS regression. As detailed in an earlier response of mine, Regression with multiple dependent variables?, the correlation between factor scores on the first dimension reflects the overall link between the two blocks, and a closer look at the weighted combinations of variables in each block (i.e., the loadings) helps in interpreting the contribution of each variable of the $X$ block in predicting the $Y$ block. The SPSS implementation is detailed on Dave Garson's website. This approach obviates the need for any correction for multiple comparisons.
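To make the idea concrete, here is a minimal numpy sketch (my own illustration, with simulated data standing in for the blood-flow and cognitive-test blocks) of the first PLS dimension: the leading singular vectors of $X^\top Y$ give the weight vectors whose block scores have maximal covariance, and their correlation summarizes the overall $X$–$Y$ link in a single number.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: X = 10 blood-flow measures, Y = 4 cognitive tests,
# with a shared latent factor so the two blocks are genuinely linked.
n = 100
latent = rng.normal(size=(n, 1))
X = latent @ rng.normal(size=(1, 10)) + rng.normal(size=(n, 10))
Y = latent @ rng.normal(size=(1, 4)) + rng.normal(size=(n, 4))

# Center both blocks.
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)

# First PLS dimension: the leading left/right singular vectors of X'Y
# maximize the covariance between the block scores.
U, s, Vt = np.linalg.svd(Xc.T @ Yc)
w, c = U[:, 0], Vt[0, :]

t = Xc @ w        # X-block scores on dimension 1
u = Yc @ c        # Y-block scores on dimension 1

# One number summarizing the overall X-Y link, instead of a 10 x 4 grid
# of individually tested pairwise correlations.
r = np.corrcoef(t, u)[0, 1]
print(round(abs(r), 3))
```

Because a single summary statistic is tested rather than 40 pairwise correlations, no multiplicity correction is needed at this stage.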
Back to your specific questions: yes, the Bonferroni correction is known to be conservative, and step-down methods are to be preferred (instead of correcting the p-values or the test statistic in one shot for all the tests, we adapt the threshold in a sequential manner, depending on the outcomes of the preceding hypothesis tests). Look into the SPSS documentation (or Pairwise Comparisons in SAS and SPSS) to find a suitable one, e.g. Holm-Bonferroni.
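The Holm-Bonferroni step-down procedure is short enough to sketch in a few lines (a hand-rolled illustration; in practice you would use a library routine such as statsmodels' `multipletests(..., method="holm")`). Sort the p-values, compare the $k$-th smallest (0-indexed) against $\alpha/(m-k)$, and stop rejecting at the first failure:

```python
# Minimal Holm-Bonferroni step-down sketch. `holm` is a hypothetical helper
# name, not a standard library function.
def holm(pvalues, alpha=0.05):
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for step, i in enumerate(order):
        if pvalues[i] <= alpha / (m - step):
            reject[i] = True
        else:
            break  # step-down: once one test fails, all larger p-values fail too
    return reject

# Example: plain Bonferroni (alpha / 4 = 0.0125) would reject only the first
# two of these; Holm also rejects the third (0.02 <= 0.05 / 2).
pvals = [0.001, 0.01, 0.02, 0.30]
print(holm(pvals))
```

Holm controls the familywise error rate at the same level as Bonferroni while rejecting at least as many hypotheses, which is why it is generally preferred.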
@John has a nice answer. I particularly like the discussion about fishing expeditions and how alpha-adjustment may not be necessary. I want to add one additional aspect to this discussion.

With hypothesis testing there are two kinds of errors to worry about: type I and type II (also called alpha error and beta error). Both are bad, and we want to avoid both. When people talk about alpha-adjustment, they are focusing only on the possibility of type I errors (that is, saying there is a difference when there isn't one). However, adjusting alpha to reduce type I errors necessarily decreases power, and thus necessarily increases the probability of type II errors (that is, saying there isn't a difference when in fact there is).

It is also worth noting that, a priori, there is no reason to believe that type I errors are worse than type II errors (despite the fact that everyone seems to assume this must be so). Which is worse varies from situation to situation and is a judgment that must be made by the researcher. In other words, when deciding on a strategy for testing multiple comparisons (e.g., an alpha-adjustment strategy), one must consider the effect of the strategy on both type I and type II errors and balance those effects against the severity of each kind of error, how much data you have, and the cost of gathering more.
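The power cost of alpha-adjustment is easy to quantify. Here is a small stdlib-only calculation (my own illustration, not from the answer) for a one-sided z test with a true standardized effect of 2.5: power at $\alpha = 0.05$ versus a Bonferroni-adjusted $\alpha = 0.05/20$.

```python
# Power of a one-sided z test: P(Z > z_{1-alpha} - effect) when the true
# standardized mean is `effect`. The normal quantile is found by bisection
# on the CDF, which is plenty accurate for an illustration.
from math import erf, sqrt

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def z_quantile(p, lo=-10.0, hi=10.0):
    for _ in range(100):
        mid = (lo + hi) / 2
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def power(alpha, effect):
    return 1 - norm_cdf(z_quantile(1 - alpha) - effect)

p_unadj = power(0.05, 2.5)        # about 0.80
p_bonf = power(0.05 / 20, 2.5)    # drops below 0.40
print(round(p_unadj, 2), round(p_bonf, 2))
```

Halving-or-worse of power is exactly the increase in type II error rate that the paragraph above warns about: the adjustment is not free, and the trade-off should be made deliberately.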
On a different note, from your description it seems to me that your situation would best be analyzed by using a factorial ANOVA, with sex as factor 1, marital status as factor 2, language as factor 3, and age as factor 4. From the description (and I recognize that it is sparse) I don't see why a cell means approach (i.e., one-way ANOVA) is preferable. If you have no interest in interactions, the main effects from the factorial ANOVA are already orthogonal (at least if the $n$s are the same), and Bonferroni corrections are not relevant. Certainly it would still be possible to have more than 5% type I errors, but I'm a big believer in @John's fourth paragraph; when I'm testing theoretically suggested, a-priori, orthogonal contrasts, I don't use alpha-adjustments.
Best Answer
This depends on what question you are trying to answer and what your strategy is.
I like to think about what would happen to my conclusions if I were to add to my data some additional columns of randomly generated noise. In your case this would add more correlations.
If you will declare success/significance if any of the correlations are significant (fishing for significance), then yes, you need a correction for multiple comparisons: if the truth is that nothing is correlated, but you add a bunch of random noise variables and don't adjust, you will likely see something significant by chance.
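A quick simulation (my own illustration of the point above, not part of the original answer) shows how reliably pure noise produces "significant" correlations when nothing is adjusted. Fifteen noise columns give 105 pairwise tests, so at $\alpha = 0.05$ roughly five come out significant even though every true correlation is zero; p-values here come from the Fisher z approximation, under which $\operatorname{atanh}(r)\sqrt{n-3}$ is approximately standard normal.

```python
import numpy as np
from math import erf, sqrt, atanh

rng = np.random.default_rng(42)
n, k = 200, 15
data = rng.normal(size=(n, k))   # pure noise: every true correlation is 0

corr = np.corrcoef(data, rowvar=False)

# Two-sided p-value from the Fisher z-transform of the sample correlation.
def pvalue(r, n):
    z = abs(atanh(r)) * sqrt(n - 3)
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

pvals = [pvalue(corr[i, j], n) for i in range(k) for j in range(i + 1, k)]

n_pairs = len(pvals)                                  # 15 * 14 / 2 = 105 tests
n_sig_unadjusted = sum(p < 0.05 for p in pvals)       # expect about 5 by chance
n_sig_bonferroni = sum(p < 0.05 / n_pairs for p in pvals)
print(n_pairs, n_sig_unadjusted, n_sig_bonferroni)
```

The unadjusted count illustrates the fishing-expedition risk; the Bonferroni-adjusted count is typically zero here, which is the behavior the correction is buying.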
If on the other hand there are specific comparisons that are of interest, and that would have been of interest even if only those 2 variables had been in the study/dataset, then you probably don't want to adjust for multiple comparisons. Think about a case where 2 variables are correlated but you would prefer them not to be (what I want to eat, but my wife doesn't want me to eat, vs. a measure of my health): you could add a bunch of random noise variables, adjust for multiple comparisons, and the adjustment would turn a significant result into a non-significant one (great for justifying my snack, but not really honest).