What are some examples of anachronistic practices in statistics?


I am referring to practices that persist even though the problems (usually computational) they were designed to cope with have largely been solved.

For example, Yates' continuity correction was invented so that the $\chi^2$ test would better approximate Fisher's exact test, but it is no longer needed now that software can compute Fisher's test directly, even with large samples (I know this may not be a good example of "maintaining its presence", since textbooks, like Agresti's Categorical Data Analysis, often acknowledge that Yates' correction "is no longer needed").
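To make the point concrete, here is a minimal sketch in Python using scipy (the $2\times2$ counts are made up for illustration): it compares the $\chi^2$ test with and without Yates' correction against Fisher's exact test, which modern software computes directly.

```python
# Compare chi-squared p-values (with/without Yates' correction) against
# Fisher's exact test on a hypothetical 2x2 contingency table.
from scipy.stats import chi2_contingency, fisher_exact

table = [[12, 5],   # hypothetical counts
         [7, 15]]

_, p_corrected, _, _ = chi2_contingency(table, correction=True)
_, p_plain, _, _ = chi2_contingency(table, correction=False)
_, p_exact = fisher_exact(table)

print(f"chi-squared with Yates' correction: p = {p_corrected:.4f}")
print(f"chi-squared without correction:     p = {p_plain:.4f}")
print(f"Fisher's exact test:                p = {p_exact:.4f}")
```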

What are some other examples of such practices?

Best Answer

It's strongly arguable that the use of threshold significance levels such as $P = 0.05$ or $P = 0.01$ is a historical hangover from a period when most researchers depended on previously calculated tables of critical values. Now good software will give $P$-values directly. Indeed, good software lets you customise your analysis and not depend on textbook tests.
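As a sketch of the contrast (in Python with scipy, on hypothetical data), the old workflow compares a test statistic against a tabled critical value and reports only a dichotomy, while the modern workflow reports the $P$-value itself:

```python
# Two-sample t-test: tabled critical value vs. directly computed p-value.
from scipy.stats import ttest_ind, t

group_a = [5.1, 4.9, 6.2, 5.8, 5.5]   # hypothetical measurements
group_b = [4.2, 4.8, 4.5, 5.0, 4.4]

stat, p_value = ttest_ind(group_a, group_b)

# Old workflow: look up a critical value in a table, report only
# "significant" or "not significant" at the fixed 0.05 threshold.
critical = t.ppf(0.975, df=len(group_a) + len(group_b) - 2)
print(f"t = {stat:.3f}, tabled 5% critical value = {critical:.3f}")

# Modern workflow: report the p-value as a quantitative indication.
print(f"exact p-value = {p_value:.4f}")
```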

This is contentious if only because some significance testing problems do require decisions, as in quality control, where accepting or rejecting a batch is the decision needed, followed by an action either way. But even there the thresholds to be used should grow out of a risk analysis, not depend on tradition (see the sketch below). And often in the sciences, analysis of quantitative indications is more appropriate than decisions: thinking quantitatively implies attention to sizes of $P$-values and not just to a crude dichotomy, significant versus not significant.
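Here is a minimal sketch of what "grow out of a risk analysis" might mean (Python with scipy; all costs, rates, and priors are hypothetical): for a batch acceptance test, choose the rejection cutoff that minimizes expected cost rather than defaulting to $\alpha = 0.05$.

```python
# Pick a batch-rejection cutoff by minimizing expected cost, so the
# implied significance level falls out of the risk analysis.
from scipy.stats import binom

n = 100                       # items sampled per batch
p_good, p_bad = 0.02, 0.10    # assumed defect rates: good vs bad batch
prior_bad = 0.20              # assumed prior probability a batch is bad
cost_reject_good = 1_000.0    # cost of rejecting a good batch
cost_accept_bad = 5_000.0     # cost of accepting a bad batch

best_cutoff, best_cost = None, float("inf")
for k in range(n + 1):
    alpha = binom.sf(k - 1, n, p_good)   # P(reject | batch is good)
    beta = binom.cdf(k - 1, n, p_bad)    # P(accept | batch is bad)
    expected_cost = ((1 - prior_bad) * cost_reject_good * alpha
                     + prior_bad * cost_accept_bad * beta)
    if expected_cost < best_cost:
        best_cutoff, best_cost = k, expected_cost

implied_alpha = binom.sf(best_cutoff - 1, n, p_good)
print(f"reject when defect count >= {best_cutoff}; "
      f"expected cost = {best_cost:.0f}, "
      f"implied alpha = {implied_alpha:.4f}")
```

The point of the sketch is that the threshold is an output of the costs and error probabilities, not an input fixed by convention.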

I will flag that I am touching here on an intricate and controversial issue that is the focus of entire books and probably thousands of papers, but it seems a fair example for this thread.
