Sometimes I find both terms in the very same source, but without any explanation. On the other hand, some papers use only one term and others use only the other.
Are these different measures?
Solved – Are family-wise error and experiment-wise error completely interchangeable terms?
Tags: hypothesis-testing, multiple-comparisons, terminology
Related Solutions
My PhD thesis was on precisely the topic of how to best test for significant differences in EEG and I faced the same questions.
I found the optimal method is to use a mass-univariate test for each electrode and time/frequency point independently. This may be a t-test, or ANOVA (as in your case a repeated measures ANOVA), or even simply the mean differences, as long as you can justify that the test is valid for your hypothesis.
Then, precisely because neighbouring electrodes and neighbouring time points are highly correlated, the 'trick' is to use the neighbourhood to enhance your original univariate measure (e.g. t-values) with threshold-free cluster enhancement (TFCE). Essentially, this looks at the intensity of the test statistic at each point and scales it according to the strength of the neighbouring values.
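To make the idea concrete, here is a minimal 1-D sketch of TFCE in Python. This is an illustration of the general technique, not the toolbox's implementation; the parameter defaults E = 0.5 and H = 2 and the step size dh are assumptions chosen for demonstration.

```python
import numpy as np

def tfce_1d(stat, dh=0.1, E=0.5, H=2.0):
    """Threshold-free cluster enhancement for a 1-D map of test statistics.

    For each point, sums (cluster extent)**E * (height)**H * dh over all
    thresholds below the point's value, where "cluster" means the contiguous
    run of supra-threshold points containing it.
    """
    stat = np.asarray(stat, dtype=float)
    out = np.zeros_like(stat)
    n = len(stat)
    for h in np.arange(dh, stat.max() + dh, dh):
        above = stat >= h
        i = 0
        while i < n:
            if above[i]:
                j = i
                while j < n and above[j]:  # walk to end of this cluster
                    j += 1
                extent = j - i
                out[i:j] += (extent ** E) * (h ** H) * dh
                i = j
            else:
                i += 1
    return out
```

A broad, moderately tall bump in the t-values gets boosted relative to an isolated spike of the same height, which is exactly the neighbourhood-support behaviour described above. Significance is then typically assessed by permutation on the maximum enhanced statistic.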
See my NeuroImage paper for details. There I show how the method is logically superior to existing methods, but also use simulations to show precisely its sensitivity and specificity against the methods you already mentioned as well as Statistical Parametric Mapping and Global Field Power.
I've created a user-friendly MATLAB toolbox to apply the method quickly and easily. All you need is your data and the spatial locations of your electrodes and you're ready to go.
You can find the toolbox on Github here.
Feel free to contact me with any questions regarding the method.
I disagree strongly with @fcoppens' leap from recognizing the importance of multiple-hypothesis correction within a single investigation to claiming that "By the same reasoning, the same holds if several teams perform these tests."
There is no question that the more studies are performed and the more hypotheses are tested, the more Type I errors will occur. But I think there's a confusion here over the meaning of "family-wise error" rates and how they apply in actual scientific work.
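The uncontroversial half of that claim is easy to quantify. Under the simplifying assumption of m independent tests, each at level alpha, with all null hypotheses true, the probability of at least one Type I error is 1 - (1 - alpha)^m:

```python
# Family-wise Type I error probability for m independent tests at level alpha,
# assuming every null hypothesis is true: P(at least one rejection) = 1-(1-alpha)^m
alpha = 0.05
fwer = {m: 1 - (1 - alpha) ** m for m in (1, 5, 20, 100)}
for m, rate in fwer.items():
    print(f"m = {m:3d}   family-wise error rate = {rate:.3f}")
```

At alpha = 0.05 the rate climbs from 0.05 for a single test to roughly 0.64 at 20 tests and above 0.99 at 100. The disagreement below is not about this arithmetic, but about which collection of tests it should be applied to.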
First, remember that multiple-testing corrections typically arose in post-hoc comparisons for which there were no pre-formulated hypotheses. It is not at all clear that the same corrections are required when there is a small pre-defined set of hypotheses.
Second, the "scientific truth" of an individual publication does not depend on the truth of each individual statement within the publication. A well-designed study approaches an overall scientific (as opposed to statistical) hypothesis from many different perspectives, and puts together different types of results to evaluate the scientific hypothesis. Each individual result may be evaluated by a statistical test.
By the argument from @fcoppens, however, if even one of those individual statistical tests makes a Type I error, that leads to a "false belief of 'scientific truth'". This is simply wrong.
The "scientific truth" of the scientific hypothesis in a publication, as opposed to the validity of an individual statistical test, generally comes from a combination of different types of evidence. Insistence on multiple types of evidence makes the validity of a scientific hypothesis robust to the individual mistakes that inevitably occur. As I look back on my 50 or so scientific publications, I would be hard pressed to find any that remains so flawless in every detail as @fcoppens seems to insist upon. Yet I am similarly hard pressed to find any where the scientific hypothesis was outright wrong. Incomplete, perhaps; made irrelevant by later developments in the field, certainly. But not "wrong" in the context of the state of scientific knowledge at the time.
Third, the argument ignores the costs of making Type II errors. A Type II error might close off entire fields of promising scientific inquiry. If the recommendations of @fcoppens were to be followed, Type II error rates would escalate massively, to the detriment of the scientific enterprise.
Finally, the recommendation is impossible to follow in practice. If I analyze a set of publicly available data, I may have no way of knowing whether anyone else has used it, or for what purpose. I have no way of correcting for anyone else's hypothesis tests. And as I argue above, I shouldn't have to.
Best Answer
I think they should not be regarded as identical. The family-wise error rate refers to the overall Type I error rate for some specified collection of hypothesis tests: that collection might be a subset of the tests in an experiment, might (for some reason) span several experiments, or might not relate to an experiment at all. The experiment-wise error rate, by contrast, can only reasonably refer to testing within an experiment, and specifically to the family-wise error rate for that entire experiment.
Which is to say, to my mind the concept of experiment-wise error is a specific example (and perhaps the most common one) of family-wise error.
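Whichever family you pick, the rate itself behaves the same way, and controlling it is the same exercise. A quick simulation (hypothetical numbers: a family of 10 tests at alpha = 0.05, all nulls true) shows the family-wise rate with and without a Bonferroni correction:

```python
import numpy as np

rng = np.random.default_rng(0)
m, alpha, n_sim = 10, 0.05, 20000

# All m null hypotheses true, so the p-values are Uniform(0, 1)
p = rng.uniform(size=(n_sim, m))

# Family-wise error: at least one rejection anywhere in the family
fwer_uncorrected = (p < alpha).any(axis=1).mean()
fwer_bonferroni = (p < alpha / m).any(axis=1).mean()

print(f"uncorrected: {fwer_uncorrected:.3f}   Bonferroni: {fwer_bonferroni:.3f}")
```

The uncorrected family-wise rate comes out near 1 - 0.95^10 ≈ 0.40, while Bonferroni holds it at or below 0.05. Nothing in this calculation cares whether the family is "an experiment" or some other specified collection, which is why experiment-wise error reads naturally as a special case of family-wise error.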
[What I find interesting is that nobody seems to concern themselves much with the Type II error rate on a family-wise basis -- at least not that I recall.]