For a), use the law of total probability:
$$
{\rm P}(X_1 < X_2 ) = \int_0^\infty {{\rm P}(X_1 < X_2 |X_2 = t)f_{X_2 } (t)\,{\rm d}t} ,
$$
where $f_{X_2}$ is the PDF of $X_2$.
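Once you carry out the integral, the value should come out to $\lambda_1/(\lambda_1+\lambda_2)$ (the complement of the factor appearing in part c)). A minimal Monte Carlo sketch to sanity-check your answer; the rates $\lambda_1 = 1$, $\lambda_2 = 2$ are arbitrary choices for illustration:

```python
import random

random.seed(0)

# Arbitrary rates chosen for the check; any positive values work.
lam1, lam2 = 1.0, 2.0
n = 200_000

# Estimate P(X1 < X2) by simulating two independent exponentials.
hits = sum(
    random.expovariate(lam1) < random.expovariate(lam2)
    for _ in range(n)
)
est = hits / n
exact = lam1 / (lam1 + lam2)  # value the integral should produce
print(est, exact)
```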
For b), notice that $\min \{ X_1 ,X_2 \} > t$ if and only if $X_1 > t$ and $X_2 > t$ (and use the fact that $X_1$ and $X_2$ are independent).
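Spelling the hint out, independence gives, for $t \ge 0$,

```latex
{\rm P}(\min \{ X_1 ,X_2 \} > t) = {\rm P}(X_1 > t)\,{\rm P}(X_2 > t)
= e^{-\lambda_1 t}\, e^{-\lambda_2 t} = e^{-(\lambda_1 + \lambda_2) t},
```

so $\min\{X_1,X_2\}$ is exponential$(\lambda_1+\lambda_2)$, consistent with the survival function appearing in part c).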
For c), calculate ${\rm P}(\min \{X_1,X_2 \}>t , X_1 > X_2 )$ using the law of total probability, conditioning on $X_2$. You should easily find that
$$
{\rm P}(\min \{X_1,X_2 \}>t , X_1 > X_2 ) = {\rm P}(\min \{X_1,X_2 \}>t ){\rm P}(X_1 > X_2 ) = \frac{{\lambda _2 }}{{\lambda _1 + \lambda _2 }}e^{ - (\lambda _1 + \lambda _2 )t}.
$$
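The factorization in c) (which shows that the minimum and the identity of the larger variable are independent) can also be sanity-checked by simulation; the parameter values below are arbitrary choices for illustration:

```python
import math
import random

random.seed(1)

# Arbitrary parameters for the check.
lam1, lam2, t = 1.0, 2.0, 0.3
n = 200_000

# Estimate P(min{X1, X2} > t and X1 > X2) by direct simulation.
hits = 0
for _ in range(n):
    x1 = random.expovariate(lam1)
    x2 = random.expovariate(lam2)
    if min(x1, x2) > t and x1 > x2:
        hits += 1
est = hits / n
exact = lam2 / (lam1 + lam2) * math.exp(-(lam1 + lam2) * t)
print(est, exact)
```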
For d), note that
$$
{\rm P}(|X_1 - X_2 | > t|N = 1) = \frac{{{\rm P}(X_2 - X_1 > t,X_1 < X_2 )}}{{{\rm P}(X_1 < X_2 )}} = \frac{{{\rm P}(X_2 > X_1 + t)}}{{{\rm P}(X_1 < X_2 )}},
$$
and you should easily show using the law of total probability, conditioning on $X_1$, that
$$
{\rm P}(X_2 > X_1 + t) = \frac{{\lambda _1 }}{{\lambda _1 + \lambda _2 }}e^{ - \lambda _2 t}.
$$
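In more detail, the computation the hint describes is (a sketch; $f_{X_1}$ denotes the PDF of $X_1$):

```latex
{\rm P}(X_2 > X_1 + t)
= \int_0^\infty {\rm P}(X_2 > u + t)\, f_{X_1}(u)\,{\rm d}u
= \int_0^\infty e^{-\lambda_2 (u + t)}\, \lambda_1 e^{-\lambda_1 u}\,{\rm d}u
= \frac{\lambda_1}{\lambda_1 + \lambda_2}\, e^{-\lambda_2 t}.
```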
Note: The calculation for ${\rm P}(|X_1 - X_2 | > t|N = 2)$ is completely analogous.
NOTE: Since question e) is not so easy, I give more than hints. However, try solving a significant part of it by yourself.
For e), it is straightforward to show, using that $N$ and $U$ are independent, that
$$
{\rm P}(W > t | U=u) = {\rm P}(W > t | N=1, U=u){\rm P}(N=1) + {\rm P}(W > t | N=2, U=u){\rm P}(N=2).
$$
For this purpose, you may replace $U=u$ by $U \in [u,u+{\rm d}u]$, where ${\rm d}u \to 0$, in order to condition on events with positive probability. Now, given $U=u$ and $N=1$, we have that $X_1 = u$ and that $X_2 - X_1$, by the memoryless property of the exponential distribution, is exponential$(\lambda_2)$. Analogously, given $U=u$ and $N=2$, we have that $X_2 = u$ and that $X_1 - X_2$ is exponential$(\lambda_1)$. From this you should find that
$$
{\rm P}(W > t | U=u) = \frac{{\lambda _1 }}{{\lambda _1 + \lambda _2 }}e^{ - \lambda _2 t} + \frac{{\lambda _2 }}{{\lambda _1 + \lambda _2 }}e^{ - \lambda _1 t} .
$$
Now we are done by
$$
{\rm P}(W > t ) = {\rm P}(W > t ,N = 1) + {\rm P}(W > t ,N = 2),
$$
as it gives us, by virtue of a) and d),
$$
{\rm P}(W > t) = \frac{{\lambda _1 }}{{\lambda _1 + \lambda _2 }}e^{ - \lambda _2 t} + \frac{{\lambda _2 }}{{\lambda _1 + \lambda _2 }}e^{ - \lambda _1 t} .
$$
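With $W = |X_1 - X_2|$ (as in d)), the final formula can be sanity-checked by simulation; the parameter values below are arbitrary choices for illustration:

```python
import math
import random

random.seed(2)

# Arbitrary parameters for the check.
lam1, lam2, t = 1.0, 2.0, 0.5
n = 200_000

# Estimate P(|X1 - X2| > t) by direct simulation.
hits = sum(
    abs(random.expovariate(lam1) - random.expovariate(lam2)) > t
    for _ in range(n)
)
est = hits / n
exact = (lam1 / (lam1 + lam2)) * math.exp(-lam2 * t) \
      + (lam2 / (lam1 + lam2)) * math.exp(-lam1 * t)
print(est, exact)
```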
For Equation-1
$$
\lim_{n\to \infty}P(|Y_n|< \epsilon)=\lim_{n\to \infty}P(-\epsilon <Y_n< \epsilon)=\lim_{n\to \infty}P(0<Y_n< \epsilon)
$$
Since the CDF of $Y_n$ is $F_{Y_n}(x) = 1-(1-x)^n$ for $x \in (0,1)$, where the $X_i$ are i.i.d. uniform on $(0,1)$, we have
$$
\lim_{n\to \infty}P(0<Y_n< \epsilon)=\lim_{n\to \infty}(1-(1-\epsilon)^n)=1
$$
For Equation-2
$$
\lim_{n\to \infty}P(|Z_n-1|< \epsilon)=\lim_{n\to \infty}P(1-\epsilon <Z_n<1+ \epsilon)=\lim_{n\to \infty}P(1-\epsilon<Z_n< 1)
$$
Since the CDF of $Z_n$ is $F_{Z_n}(x) = x^n$ for $x \in (0,1)$, where the $X_i$ are i.i.d. uniform on $(0,1)$, we have
$$
\lim_{n\to \infty}P(1-\epsilon<Z_n< 1)=\lim_{n\to \infty}(1-(1-\epsilon)^n)=1
$$
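Both limits can be checked numerically; a minimal simulation sketch, with the sample size $n$, tolerance $\epsilon$, and trial count chosen arbitrarily for illustration:

```python
import random

random.seed(3)

# Arbitrary sizes for the check.
n, eps, trials = 100, 0.05, 20_000
exact = 1 - (1 - eps) ** n  # shared limit expression before n -> infinity

# Estimate P(Y_n < eps) and P(Z_n > 1 - eps) for the minimum and
# maximum of n i.i.d. uniform(0,1) variables.
y_hits = z_hits = 0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    if min(xs) < eps:
        y_hits += 1
    if max(xs) > 1 - eps:
        z_hits += 1
est_y, est_z = y_hits / trials, z_hits / trials
print(est_y, est_z, exact)
```

Already at $n = 100$ both probabilities are very close to $1$, as the limit predicts.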
Hint: $$P(\max_{i\leq n} X_i \leq u) = \prod_{i=1}^nP(X_i\leq u).$$ This holds because the maximum is at most $u$ exactly when every $X_i$ is at most $u$; the product form comes from the independence of the $X_i$.
Can you work out the rest? If you need more help, let me know in comments.
Update: So basically you have to evaluate $(1-e^{-u})^n$, which is what the product above equals when the $X_i$ are i.i.d. exponential$(1)$.
Put $u=a\log n$. You get $$\left(1-\frac{1}{n^a}\right)^n = \left(1-\frac{1}{n^a}\right)^{n^a(n/n^a)}$$
Now we know $(1-\frac{1}{n})^n\to e^{-1}$. Hence as $n^a\to \infty$, we get $(1-\frac{1}{n^a})^{n^a}\to e^{-1}$. Now $e^{-1} < 1$, so we have that for any $\delta>0$, for $n$ large enough $$e^{-1}-\delta\le \left(1-\frac{1}{n^a}\right)^{n^a} \leq e^{-1}+\delta$$ Here pick any $\delta$ such that $e^{-1}+\delta < 1$. Now $$0\leq \left(1-\frac{1}{n^a}\right)^{n^a(n/n^a)} \leq (e^{-1}+\delta)^{n^{1-a}}\to 0 $$
For the $b$ case, use a similar argument, except you need the lower bound now. Just pick $\delta$ such that $e^{-1}-\delta>0$, so that the lower bound $(e^{-1}-\delta)$ raised to an exponent tending to $0$ goes to $1$, and note that probabilities cannot exceed $1$.
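The two regimes can be checked numerically. A small sketch evaluating $(1-n^{-a})^n$ for a large $n$, using `log1p` for numerical stability; the values $a = 0.5$, $a = 2$, and $n = 10^6$ are arbitrary choices for illustration:

```python
import math

def prob_max_le(a: float, n: int) -> float:
    # (1 - n**(-a))**n, i.e. P(max of n i.i.d. exponential(1) <= a*log n),
    # computed via logs to avoid loss of precision when n**(-a) is tiny.
    return math.exp(n * math.log1p(-n ** (-a)))

small = prob_max_le(0.5, 10**6)  # a < 1: should be (numerically) near 0
large = prob_max_le(2.0, 10**6)  # a > 1: should be near 1
print(small, large)
```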