I have read in some books that in the Neyman–Pearson theory of hypothesis testing, the two types of errors, type-$1$ and type-$2$, cannot be minimized simultaneously when the sample size $n$ is fixed. I am clear on this much: if one tries to reduce the type-$2$ error, then committing the type-$2$ error fewer times means rejecting the null hypothesis more often when it is actually false; that is, the number of correct rejections increases. But how this is connected to an increase in the type-$1$ error is my problem.
[Math] Two types of errors, type-$1$ error and type-$2$ error, cannot be minimized simultaneously when the sample size $n$ is fixed. How?
hypothesis-testing · statistical-inference · statistics
Related Solutions
Joint density of the sample $(X_1,X_2,\ldots,X_n)$ is
$$f_{\theta}(x_1,\ldots,x_n)=\exp\left(-\sum_{i=1}^n(x_i-\theta)\right)\mathbf1_{x_{(1)}>\theta}\quad,\,\theta>0$$
By N-P lemma, a most powerful test of size $\alpha$ for testing $H_0:\theta=\theta_0$ against $H_1:\theta=\theta_1(>\theta_0)$ is given by $$\varphi(x_1,\ldots,x_n)=\begin{cases}1&,\text{ if }\lambda(x_1,\ldots,x_n)>k\\0&,\text{ if }\lambda(x_1,\ldots,x_n)<k\end{cases}$$
where $$\lambda(x_1,\ldots,x_n)=\frac{f_{\theta_1}(x_1,\ldots,x_n)}{f_{\theta_0}(x_1,\ldots,x_n)}$$
and $k(>0)$ is such that $$E_{\theta_0}\varphi(X_1,\ldots,X_n)=\alpha$$
Now,
\begin{align} \lambda(x_1,\ldots,x_n)&=\frac{\exp\left(-\sum_{i=1}^n(x_i-\theta_1)\right)\mathbf1_{x_{(1)}>\theta_1}}{\exp\left(-\sum_{i=1}^n(x_i-\theta_0)\right)\mathbf1_{x_{(1)}>\theta_0}} \\\\&=e^{n(\theta_1-\theta_0)}\frac{\mathbf1_{x_{(1)}>\theta_1}}{\mathbf1_{x_{(1)}>\theta_0}} \\\\&=\begin{cases}e^{n(\theta_1-\theta_0)}&,\text{ if }x_{(1)}>\theta_1\\0&,\text{ if }\theta_0<x_{(1)}\le \theta_1\end{cases} \end{align}
So $\lambda(x_1,\ldots,x_n)$ is a monotone non-decreasing function of $x_{(1)}$, which means
$$\lambda(x_1,\ldots,x_n)\gtrless k \iff x_{(1)}\gtrless c$$, for some $c$ such that $$E_{\theta_0}\varphi(X_1,\ldots,X_n)=\alpha$$
We thus have
$$\varphi(x_1,\ldots,x_n)=\begin{cases}1&,\text{ if }x_{(1)}>c\\0&,\text{ if }x_{(1)}<c\end{cases}$$
Again,
\begin{align} E_{\theta_0}\varphi(X_1,\ldots,X_n)&=P_{\theta_0}(X_{(1)}>c) \\&=\left(P_{\theta_0}(X_1>c)\right)^n \\&=e^{n(\theta_0-c)}\quad,\,c>\theta_0 \end{align}
So from the size condition we get $$c=\theta_0-\frac{\ln\alpha}{n}$$
Finally, the test function is
$$\varphi(x_1,\ldots,x_n)=\begin{cases}1&,\text{ if }x_{(1)}>\theta_0-\frac{\ln\alpha}{n}\\0&,\text{ if }x_{(1)}<\theta_0-\frac{\ln\alpha}{n}\end{cases}$$
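As a sanity check on the derivation, the test can be simulated: draw samples from the shifted exponential $f_\theta$ (i.e. $X = \theta + \mathrm{Exp}(1)$), reject when $x_{(1)} > \theta_0 - \frac{\ln\alpha}{n}$, and verify that the rejection rate under $H_0$ is close to $\alpha$. This is a minimal sketch; the function names and the particular values $\theta_0 = 1$, $\alpha = 0.05$, $n = 10$, $\theta_1 = 1.2$ are illustrative choices, not from the original answer.

```python
import math
import random

def reject(sample, theta0, alpha):
    """MP test derived above: reject H0 when the sample minimum exceeds c."""
    n = len(sample)
    c = theta0 - math.log(alpha) / n  # c = theta0 - ln(alpha)/n
    return min(sample) > c

def rejection_rate(theta, theta0, alpha, n, reps=100_000, seed=1):
    """Monte Carlo estimate of P_theta(reject H0)."""
    rng = random.Random(seed)
    count = 0
    for _ in range(reps):
        # Shifted exponential: density exp(-(x - theta)) for x > theta
        sample = [theta + rng.expovariate(1.0) for _ in range(n)]
        count += reject(sample, theta0, alpha)
    return count / reps

theta0, alpha, n = 1.0, 0.05, 10
print(rejection_rate(theta0, theta0, alpha, n))        # size: ≈ 0.05
print(rejection_rate(theta0 + 0.2, theta0, alpha, n))  # power at theta1 = 1.2
```

The power at $\theta_1$ can also be computed in closed form as $e^{n(\theta_1-c)}$ when $c > \theta_1$ (and $1$ otherwise), which the simulation should match.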
In all hypothesis tests, we calculate a $p$-value based on a test statistic computed from our sample. Usually, we fix a threshold (significance level) $\alpha$ such that if $p \leq \alpha$, we reject the null hypothesis $H_0$. Note that our $p$-value is calculated under the assumption that $H_0$ is true, and represents the probability, under $H_0$, of observing a test statistic (e.g. a sample mean) at least as extreme as the one we actually observed, so $$P(\text{type I error}) = P(\text{reject }H_0 \text{ when } H_0 \text{ is true}) = \alpha$$ as we only reject $H_0$ when $p \leq \alpha$. The point to note here is that the probability of a type I error depends only on the significance level.
The probability of a type II error will depend on the population parameters and the sample size, since we make such an error when our observed sample fails to lie in what you have called $R_0$ - the $p$-values which will force us to reject $H_0$. In most hypothesis tests, the distribution of the test statistic calculated for a sample becomes more and more concentrated as we increase the sample size. For example, in hypothesis tests for the population mean, increasing the sample size decreases the standard deviation of the sample mean. Hence, the test is more sensitive to extreme values of the calculated test statistic and is more likely to give extreme $p$-values (upon which we would reject the null hypothesis). If you have a specific hypothesis testing procedure in mind, I would be happy to flesh out the details for that procedure for you.
The point to note here is that the probability of a type II error does not depend only on the significance level, and in nearly all cases it decreases with sample size. So one can reduce both errors simultaneously by decreasing the significance level while increasing the sample size.
Best Answer
In the most basic definition of hypothesis testing (for example, the Neyman-Pearson Fundamental Lemma), there is an Acceptance region and a Rejection region for the data (or test statistic). Together the Acceptance and Rejection regions account for all possible experimental outcomes.
If the data fall in the Rejection region, $H_0$ is rejected; and if data fall in the Acceptance region, $H_0$ is accepted. (If you have philosophical difficulties with the word 'accepted', define it to mean 'not rejected' just to avoid double or triple negatives.)
A Type I error is Rejecting $H_0$ when it is true; usually we say its probability is $\alpha$. A Type II error is Accepting $H_0$ when it is false; usually we say its probability is $\beta$. Both $\alpha$ and $\beta$ depend on the definition of the Rejection region.
Within this framework it is easy to see the answer to your question. If the Rejection region is made more extensive, then $\alpha$ tends to increase. But this means that the Acceptance region must get smaller and $\beta$ tends to decrease.
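The tradeoff can be seen numerically for a simple case: a standardized test statistic $Z$ that is $N(0,1)$ under $H_0$ and $N(\delta,1)$ under $H_1$, with Rejection region $\{Z > c\}$. Moving the cutoff $c$ down enlarges the Rejection region, so $\alpha = 1 - \Phi(c)$ rises while $\beta = \Phi(c - \delta)$ falls. The value $\delta = 1$ and the cutoffs below are illustrative assumptions, not from the original answer.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

delta = 1.0  # standardized effect size under H1 (assumed for illustration)
for c in (2.0, 1.5, 1.0):     # shrinking Acceptance region (-inf, c]
    alpha = 1 - phi(c)        # P(reject | H0 true): grows as c decreases
    beta = phi(c - delta)     # P(accept | H1 true): shrinks as c decreases
    print(f"c = {c}: alpha = {alpha:.3f}, beta = {beta:.3f}")
```

With $n$ (here, the distribution of $Z$) fixed, every choice of $c$ trades one error off against the other; only more data can push both down at once.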