Solved – Why are the acceptable probabilities of type I and type II errors usually different?

effect-size, statistical-power

This question was raised by my supervisor, and I don't know how to explain it.

Usually the accepted confidence level is 0.95, which means the probability of a type I error is 5%. But the usually accepted power is 0.8 (Cohen, 1988), which means the probability of a type II error is 20%. Why can we accept a higher probability of a type II error than of a type I error? Is there any statistical reason behind that?
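For concreteness, here is the kind of calculation I have in mind (a minimal sketch using a normal approximation for a two-sample test of means; the effect size and per-group sample size are hypothetical placeholders). It shows that alpha and beta (= 1 − power) are two separate dials set by the analyst rather than quantities tied to each other by any statistical law:

```python
# Normal-approximation sketch for a two-sample test of means with equal
# group sizes n and standardized effect size d (both hypothetical).
from scipy.stats import norm

alpha = 0.05          # conventional type I error rate
d = 0.5               # assumed standardized effect size (hypothetical)
n = 64                # per-group sample size (hypothetical)

z_crit = norm.ppf(1 - alpha / 2)            # two-sided critical value, ~1.96
shift = d * (n / 2) ** 0.5                  # approximate shift of the test statistic under H1
power = norm.cdf(shift - z_crit)            # P(reject H0 | H1 true), approximately

print(f"alpha = {alpha:.2f}, beta = {1 - power:.2f}, power = {power:.2f}")
# With d = 0.5 and n = 64 per group this lands near the conventional
# alpha = 0.05 / power = 0.80 pairing, but nothing forces beta to be 4x alpha.
```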

He also asked about the physical meaning of power = 0.8 (why it is selected as a criterion), which I also have no idea how to explain.

And when we use power analysis to design an experiment, we may select an effect size of 0.3, 0.5, or 0.8 to represent small, medium, and large effects. My supervisor asked why these particular numbers are chosen. My understanding is that they are suggested based on experience, and he immediately asked what that experience is. I am really frustrated by such questions. My major is not statistics, and I have to spend a lot of time on questions that I suspect may not be meaningful. Can anyone tell me whether such questions are really meaningful or not? If they are, how should I go about finding the answers?
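For reference, here is a quick look at how much the choice among these conventional effect-size labels matters in practice (a sketch assuming a two-sample t-test and Cohen's d benchmarks of 0.2, 0.5, and 0.8 for small, medium, and large; the exact benchmark values depend on which effect-size measure is used, and statsmodels is just one of many power calculators):

```python
# Required per-group sample size at the conventional alpha = 0.05 and
# power = 0.80, for the small / medium / large benchmarks of Cohen's d.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                             alternative="two-sided")
    print(f"{label:6s} (d = {d}): about {n:.0f} subjects per group")
# Roughly: small ~393, medium ~64, large ~26 subjects per group.
```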

Best Answer

Neither the 5% type I error rate nor the 80% figure for power is universal. For example, particle physicists tend to use a "5 sigma" criterion, which corresponds to a notional type I error rate roughly on the order of one in a million. Indeed, I doubt your average physicist has even heard of Cohen.
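For concreteness, here is a one-line check of what the "5 sigma" threshold amounts to as a one-sided tail probability (a quick sketch using scipy):

```python
# Upper-tail probability of a standard normal beyond 5 standard deviations.
from scipy.stats import norm

p_5sigma = norm.sf(5)   # survival function = one-sided tail probability
print(p_5sigma)         # ~2.9e-07, i.e. roughly 1 in 3.5 million
```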

But one reason why the two error rates you quote might reasonably differ is that the costs of the two types of error are not the same.
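As a toy illustration with purely hypothetical costs: if a type I error is judged about four times as costly as a type II error, then the conventional pairing of alpha = 0.05 with beta = 0.20 roughly balances the two expected penalties (crudely treating the two hypotheses as equally likely a priori):

```python
# Hypothetical relative costs: a false positive (type I) taken to be four
# times as costly as a false negative (type II).
cost_type1, cost_type2 = 4.0, 1.0
alpha, beta = 0.05, 0.20

print(cost_type1 * alpha)   # expected penalty from type I errors: 0.2
print(cost_type2 * beta)    # expected penalty from type II errors: 0.2
```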

As to why the type I error rate is often taken to be 5%, part of the reason (some of the historical background for the convention) is discussed here.