[Math] Two types of errors, the type-$1$ error and the type-$2$ error, cannot be minimized simultaneously when the sample size $n$ is already fixed. How

hypothesis-testing, statistical-inference, statistics

I have read in some books that the two types of errors, the type-$1$ error and the type-$2$ error, cannot be minimized simultaneously in the Neyman–Pearson theory of hypothesis testing when the sample size $n$ is already fixed. I am clear up to this point: if one tries to reduce the type-$2$ error, then committing the type-$2$ error fewer times means rejecting the null hypothesis more often when it is actually false; that is, it increases the number of correct rejections. But how this is connected to an increase in the type-$1$ error is my problem.

Best Answer

In the most basic definition of hypothesis testing (for example, the Neyman-Pearson Fundamental Lemma), there is an Acceptance region and a Rejection region for the data (or test statistic). Together the Acceptance and Rejection regions account for all possible experimental outcomes.

If the data fall in the Rejection region, $H_0$ is rejected; and if data fall in the Acceptance region, $H_0$ is accepted. (If you have philosophical difficulties with the word 'accepted', define it to mean 'not rejected' just to avoid double or triple negatives.)

A Type I error is rejecting $H_0$ when it is true; its probability is usually denoted $\alpha$. A Type II error is accepting $H_0$ when it is false; its probability is usually denoted $\beta$. Both $\alpha$ and $\beta$ depend on the definition of the Rejection region.
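To make the definitions concrete, here is a small numerical sketch (my own illustrative setup, not from the answer): testing $H_0\colon \mu = 0$ against $H_1\colon \mu = 1$ with a single observation $X \sim N(\mu, 1)$ and Rejection region $\{X > c\}$. Both error probabilities follow directly from the normal CDF.

```python
from statistics import NormalDist

# Hypothetical setup: H0: mu = 0 vs H1: mu = 1, one observation
# X ~ Normal(mu, 1). Reject H0 when X > c.
c = 1.645  # cutoff defining the Rejection region

alpha = 1 - NormalDist(0, 1).cdf(c)  # P(reject H0 | H0 true)  -- Type I
beta = NormalDist(1, 1).cdf(c)       # P(accept H0 | H1 true)  -- Type II

print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```

With $c = 1.645$, $\alpha$ comes out to about $0.05$, the conventional significance level; $\beta$ is whatever the chosen cutoff leaves it to be.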

Within this framework it is easy to see the answer to your question. If the Rejection region is made larger, then $\alpha$ tends to increase. But this means that the Acceptance region must get smaller, so $\beta$ tends to decrease; and conversely, shrinking the Rejection region to lower $\alpha$ enlarges the Acceptance region and pushes $\beta$ up.
