[Math] Understanding the c.d.f of $Y_n=\max\{X_1,..,X_n\}$ and $Y_1=\min\{X_1,..,X_n\}$

Tags: probability, probability-distributions

Suppose that $X_1, \dots, X_n$ form a random sample of size $n$ from the uniform distribution on the interval $[0, 1]$. Define a new random variable $Y_n=\max\{X_1,\dots,X_n\}$.

I know that I can compute the c.d.f. of $Y_n$ as follows, using the independence of the $X_j$ (and, for the last step, the fact that they are identically distributed):
$$P(Y_n<y)=P(\max\{X_1,\dots,X_n\}<y)=\prod_{j=1}^n P(X_j<y)= [P(X_j<y)]^n.$$
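For the specific uniform sample above, $P(X_j<y)=y$ for $y \in [0,1]$, so this specializes to:

```latex
P(Y_n < y) = [P(X_j < y)]^n = y^n, \qquad 0 \le y \le 1.
```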

I saw in my book that the c.d.f. of $Y_1$ can be written as $P(Y_1<y)=1-[1-P(X_j<y)]^n$. But why? I know that $\max\{\}$ and $\min\{\}$ are in some sense inverses of each other, but in my mind $Y_n$ and $Y_1$ should have the same distribution.

Why isn't $P(Y_1<y)$ computed the same way, as $P(Y_1<y)=P(\min\{X_1,\dots,X_n\}<y)=\prod_{j=1}^n P(X_j<y)$?

I don't see what difference taking the $\max\{\}$ versus the $\min\{\}$ makes.

Best Answer

Having $\max\{X_1, \dots, X_n\} \leq y$ is equivalent to having $X_1 \leq y$ and $X_2 \leq y$ and $\dots$ and $X_n \leq y$. This is what makes the maximum convenient to work with: the event factors into independent events joined by the word "and," which lets us multiply their probabilities. The same cannot be said for the minimum.

What we could say for the minimum is: having $\min\{X_1, \dots, X_n\} \leq y$ is equivalent to having $X_1 \leq y$ or $X_2 \leq y$ or $\dots$ or $X_n \leq y$. However, the word "or" is not nearly as convenient as the word "and" when it comes to probabilities. So, we use a trick instead; we observe that having $\min\{X_1, \dots, X_n\} \geq y$ is equivalent to having $X_1 \geq y$ and $X_2 \geq y$ and $\dots$ and $X_n \geq y$. Multiplying as before gives $P(Y_1 \geq y) = [1 - P(X_j \leq y)]^n$ (for continuous $X_j$, $P(X_j \geq y) = 1 - P(X_j \leq y)$), and taking the complement yields $P(Y_1 \leq y) = 1 - [1 - P(X_j \leq y)]^n$. This is why there are $(1-)$ terms both inside and outside the CDF expression; we're dealing with the opposite of the event typically encoded by a CDF.
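A quick Monte Carlo sketch (with NumPy, using arbitrary illustrative values $n=5$ and $y=0.7$) can confirm both closed forms for the uniform case, $P(Y_n \le y) = y^n$ and $P(Y_1 \le y) = 1-(1-y)^n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000
x = rng.random((trials, n))  # trials rows of n uniform[0,1] samples

y = 0.7
# Empirical probabilities that the row max / row min falls below y
emp_max = np.mean(x.max(axis=1) <= y)
emp_min = np.mean(x.min(axis=1) <= y)

# Closed forms for the uniform case
theory_max = y**n            # 0.7^5 = 0.16807
theory_min = 1 - (1 - y)**n  # 1 - 0.3^5 = 0.99757

print(emp_max, theory_max)
print(emp_min, theory_min)
```

The max rarely stays below $0.7$ (all five samples must), while the min almost always does (only one sample needs to), which is exactly the asymmetry between "and" and "or" above.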