I have 50 variables in my dataset. I have correlated each variable with every other variable, so I have $49 \cdot 50/2 = 1225$ unique correlations when no variable is tested against itself. Now, suppose I want to correct the statistics for multiple comparisons, and that for some reason (well, just for learning now) I want to use the Bonferroni correction. Let the threshold for significance be $\alpha = 0.05$. Is the corrected threshold $0.05/1225$ (where 1225 is the number of tests) or $0.05/49$ (because each variable was correlated with 49 others)?
Solved – About the Bonferroni correction
bonferroni, multiple-comparisons, statistical-significance
Related Solutions
It sounds to me like this is exploratory research / data analysis, not confirmatory. That is, it doesn't sound like you started with a theory that said only extroversion should be related to PCT for some reason. So I wouldn't worry too much about alpha adjustments, as I think of those as more related to confirmatory data analysis (CDA), nor would I think that your finding is necessarily true. Instead, I would think about it as something that *might* be true, and play with these ideas / possibilities in light of what I know about the topics at hand. Having seen this finding, does it ring true or are you skeptical? What would it mean for the current theories if it were true? Would it be interesting? Would it be important? Is it worth running a new (confirmatory) study to determine if it's true, bearing in mind the potential time, effort and expense that that entails? Remember that the reason for Bonferroni corrections is that we expect something to show up by chance when we have so many variables. So I think a heuristic can be 'would this study be sufficiently informative, even if the truth turns out to be no'? If you decide that it's not worth it, this relationship stays in the 'might' category and you move on, but if it is worth doing, test it.
Preface: There are many different ways to adjust for multiple comparisons. Olive Dunn proposed the Bonferroni adjustment in 1961, and the multiple comparisons literature (see, for example, Shaffer, 1995) has grown to include a variety of family-wise error rate adjustment methods (of which Bonferroni is the simplest), and the more recent false discovery rate adjustment methods. Moreover, adjustments can either be made to $\alpha$, or the math may be inverted and instead applied to adjust the p-values (sometimes adjusted p-values are called q-values)—my own preference is to adjust $\alpha$, since adjustments to p may need a clumsy upper-truncation at 1.0 to retain interpretability as a probability. Your question, and my answer, apply regardless of which of these methods you choose, and whether you apply the adjustment to $\alpha$ or to the p-values.
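To see that the two styles of adjustment lead to the same decision, here is a minimal sketch (the p-value and the family size of 1225 tests are just example numbers):

```python
m = 1225       # number of tests in the family
alpha = 0.05
p = 1e-5       # an example raw p-value from one test

# Style 1: adjust alpha, compare the raw p-value against alpha/m
alpha_adjusted = alpha / m

# Style 2: adjust the p-value, truncating at 1.0 so it stays a probability
p_adjusted = min(p * m, 1.0)

# Both styles reach the same reject/retain decision
decision_1 = p < alpha_adjusted
decision_2 = p_adjusted < alpha
print(decision_1, decision_2)
```

Whichever way you write it, the comparison being made is the same, which is why the choice between adjusting $\alpha$ and adjusting p is a matter of taste and interpretability.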
You would apply the Bonferroni adjustment to post hoc multiple comparisons following rejection of a one-way ANOVA; in fact, that is a canonical example of when to apply the Bonferroni adjustment. These pairwise tests are not quite the same thing as a bunch of standard t tests, because following rejection of an ANOVA the t test statistics are calculated using the pooled variance implicit in the ANOVA's null hypothesis, rather than the variance from the two specific groups compared in a single test statistic.
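A sketch of one such post hoc pairwise t test, using the pooled (within-group) variance rather than just the two groups' own variances—the data and group sizes below are invented purely for illustration:

```python
import numpy as np
from scipy import stats

# Illustrative data: three groups (values are made up)
groups = [np.array([4.1, 5.0, 4.8, 5.2]),
          np.array([5.9, 6.1, 5.7, 6.4]),
          np.array([4.9, 5.1, 5.3, 4.7])]

k = len(groups)                       # number of groups
N = sum(len(g) for g in groups)       # total sample size

# Mean square within: the pooled variance implicit in the ANOVA
msw = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / (N - k)

# Post hoc t for groups 0 and 1, on N - k degrees of freedom
g0, g1 = groups[0], groups[1]
t = (g0.mean() - g1.mean()) / np.sqrt(msw * (1 / len(g0) + 1 / len(g1)))
p = 2 * stats.t.sf(abs(t), df=N - k)
print(t, p)
```

The raw p-value from each such pairwise test would then be compared against the Bonferroni-adjusted threshold for however many pairwise comparisons are in the family.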
You are correct: we would use multiple comparisons adjustments when making many statistical tests, as in the case of the t tests for the $\beta$ estimates in multiple regression, or in feature selection for an N-way ANOVA.
References
Dunn, O. J. (1961). Multiple comparisons among means. Journal of the American Statistical Association, 56(293):52–64.
Shaffer, J. P. (1995). Multiple hypothesis testing. Annual Review of Psychology, 46:561–584.
Best Answer
With the Bonferroni correction, the divisor is equal to the number of tests you carry out, whether those tests are dependent or independent.
It helps to understand the purpose of the Bonferroni correction. You are testing correlations between variables. Let’s assume your null hypothesis for any two variables is that the correlation is 0. (Any null hypothesis will suffice.) Your significance threshold is $\alpha = 0.05$. In other words, there is a 5% chance that you will reject the null hypothesis erroneously. This is known as a type 1 error or a false positive.
Now, let’s say you did 100 tests, all at the $\alpha = 0.05$ level. You would expect 5% of these to give a false positive (i.e. to fail by chance alone). If you do 1225 tests then you expect 5% of 1225, i.e. ~61 false positives. This is quite a lot! Bonferroni offers a level of protection against this scenario. You can think of it as familywise protection, as it offers a family of tests a single level of protection against even one false positive. Instead of testing with an $\alpha = 0.05$ threshold, you perform each test at the $\alpha = 0.05/1225 \approx 0.0000408$ threshold. In this case Bonferroni reduces the probability of even one false positive amongst all the tests to at most $\alpha = 0.05$.
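The arithmetic above can be checked directly. A minimal sketch (the only assumption beyond the text is independence of the tests, which is needed for the familywise error formula $1 - (1-\alpha/m)^m$):

```python
m = 1225
alpha = 0.05

# Expected false positives if every null is true and no adjustment is made
expected_fp = alpha * m               # 5% of 1225, i.e. ~61

# Bonferroni per-test threshold
alpha_bonf = alpha / m                # ~0.0000408

# Probability of at least one false positive across the whole family,
# assuming independent tests, after the Bonferroni adjustment
fwer = 1 - (1 - alpha_bonf) ** m      # just under 0.05

print(expected_fp, alpha_bonf, fwer)
```

Note that the familywise error rate lands slightly *below* 0.05, which is one way of seeing that Bonferroni is conservative.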
Bonferroni is very conservative and there exist several improvements. My favourite is the False Discovery Rate (FDR) approach.
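To illustrate the difference, here is a hand-rolled sketch of the Benjamini–Hochberg FDR procedure (the best-known FDR method); the p-values are invented toy numbers, and in practice you would likely use a library routine such as `statsmodels.stats.multitest.multipletests` instead:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of hypotheses rejected under BH-FDR at level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # Step-up thresholds: the k-th smallest p-value is compared to (k/m) * q
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest rank passing its threshold
        reject[order[:k + 1]] = True       # reject that one and all smaller p
    return reject

# Toy p-values (made up for illustration)
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
print(benjamini_hochberg(pvals))
```

With these toy values, BH rejects the two smallest p-values, whereas a Bonferroni threshold of $0.05/8 = 0.00625$ would reject only the smallest—one concrete view of why FDR methods are less conservative.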