Your claimed result is not true, which probably explains why you're having trouble seeing it.
For simplicity I'll let $a = 0, b = 1$. Results for general $a$ and $b$ can be obtained by a linear transformation.
Let $X_1, \ldots, X_n$ be independent uniform $(0,1)$; let $Y$ be their minimum and let $X$ be their maximum. Then the probability that $X \in [x, x+\delta x]$ and $Y \in [y, y+\delta y]$, for some small $\delta x$ and $\delta y$, is
$$ n(n-1) (\delta x) (\delta y) (x-y)^{n-2} $$
since we have to choose which of $X_1, \ldots, X_n$ is the smallest and which is the largest; then we need the minimum and maximum to fall in the correct intervals; then finally we need everything else to fall in the interval of size $x-y$ in between. The joint density is therefore $f_{X,Y}(x,y) = n(n-1) (x-y)^{n-2}$.
The density of $Y$ can then be obtained by integrating the joint density over $x$. Alternatively, $P(Y \ge y) = (1-y)^n$, since all $n$ points must exceed $y$, and so $f_Y(y) = n(1-y)^{n-1}$.
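For concreteness, here is the integration carried out explicitly, using the joint density from above:
$$ f_Y(y) = \int_y^1 n(n-1)(x-y)^{n-2} \, dx = n(n-1) \left[ \frac{(x-y)^{n-1}}{n-1} \right]_{x=y}^{x=1} = n(1-y)^{n-1}, $$
which agrees with the survival-function computation.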
The conditional density you seek is then
$$ f_{X|Y}(x|y) = {n(n-1) (x-y)^{n-2} \over n(1-y)^{n-1}} = {(n-1) (x-y)^{n-2} \over (1-y)^{n-1}}, $$
where of course we restrict to $x > y$.
For a numerical example, let $n = 5, y = 2/3$. Then we get $f_{X|Y}(x|y) = 4 (x-2/3)^3 / (1/3)^4 = 324 (x-2/3)^3$ on $2/3 \le x \le 1$. This is larger near $1$ than near $2/3$, which makes sense -- it's hard to squeeze a lot of points into a small interval!
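If you want to convince yourself numerically, here is a quick Monte Carlo sketch of that example (my own check, not part of the derivation): condition on the minimum landing in a narrow window around $2/3$ and compare the conditional mean of the maximum against the theoretical value $E[X \mid Y = 2/3] = \int_{2/3}^1 x \cdot 324(x-2/3)^3 \, dx = 14/15$.

```python
import random

# Simulate n = 5 iid Uniform(0,1) samples; keep the maximum whenever the
# minimum falls in a narrow window around y0 = 2/3.  The conditional mean
# of the maximum should be close to 14/15 ~ 0.9333.
random.seed(0)
n, y0, half_width = 5, 2 / 3, 0.005
maxima = []
for _ in range(2_000_000):
    sample = [random.random() for _ in range(n)]
    if abs(min(sample) - y0) < half_width:
        maxima.append(max(sample))

est = sum(maxima) / len(maxima)
print(f"simulated E[X | Y ~ 2/3] = {est:.4f}, theory = {14/15:.4f}")
```

The window width and sample count here are arbitrary choices; a narrower window reduces bias but leaves fewer conditional samples.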
The result you quote holds only when $n = 2$ -- if I have two IID uniform(0,1) random variables, then conditional on a choice of the minimum, the maximum is uniform on the interval between the minimum and 1. This is because we don't have to worry about fitting points between the minimum and the maximum, because there are $n - 2 = 0$ of them.
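Plugging $n = 2$ into the conditional density above makes this explicit:
$$ f_{X|Y}(x|y) = \frac{(2-1)(x-y)^0}{(1-y)^1} = \frac{1}{1-y}, \qquad y < x < 1, $$
which is exactly the uniform density on $(y, 1)$.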
What is the definition of the independence of two random variables? In other words, how can you mathematically determine whether two random variables are independent?
Your intuition for part (i) is correct, but you need to figure out why. The first step is to answer the question above.
For part (ii), suppose I generated a realization of $(X_1, X_2)$ according to the distribution specified in this part of the question. Without telling you $X_1$, I tell you that $X_2 = 0.95$. What information does this convey about the possible value of $X_1$? For example, could $X_1 = 0.5$ if $X_2 = 0.95$? Why or why not? What does this suggest about whether $X_1$ and $X_2$ are independent?
Best Answer
Short answer: Yes, you are correct.
But you asked for rigor... I'll try to make the rationale rigorous for your edification :P Hopefully I don't end up confusing you, but rather send you on an adventure to learn more formal probability theory.
Let $(\Omega, \mathscr{F}, P)$ be our underlying probability space (meaning all random variables we discuss here are assumed to be $\mathscr{F}$-measurable functions of $\omega \in \Omega$).
Consider the following random variable $X: \Omega \to \mathbb{R}^2$, $$X = \begin{bmatrix}X_1 \\ X_2\end{bmatrix}$$
Notice that the components of $X$ are also random variables, $X_1: \Omega \to \mathbb{R}$ and $X_2: \Omega \to \mathbb{R}$.
Let the probability density function (PDF) of $X$ be called $f(x) = f(x_1,x_2)$, and let the PDFs of its components $X_1$ and $X_2$ be called $f_1(x_1)$ and $f_2(x_2)$ respectively. We define a conditional PDF as, $$f_1(x_1 | x_2) := \frac{f(x_1, x_2)}{f_2(x_2)}$$ When we say "$X_1$ and $X_2$ are independent" we strictly mean, $$f_1(x_1 | x_2) \equiv f_1(x_1)$$
So like you said, $$f(x_1, x_2) = f_1(x_1)f_2(x_2)$$
If $f_1(x_1)$ is uniform on $[0,2]$ then, $$f_1(x_1) := \begin{cases} \frac{1}{2}, & x_1 \in [0,2] \\ 0, & \text{else} \end{cases}$$ and similarly, $f_2(x_2)$ uniform on $[1,2]$ means, $$f_2(x_2) := \begin{cases} 1, & x_2 \in [1,2] \\ 0, & \text{else} \end{cases}$$
So we must have, $$f(x_1,x_2) = f_1(x_1)f_2(x_2) = \begin{cases} (\frac{1}{2})(1), & x_1 \in [0,2]\ \text{and}\ x_2 \in [1,2] \\ (\frac{1}{2})(0), & x_1 \in [0,2]\ \text{and}\ x_2 \not\in [1,2] \\ (0)(1), & x_1 \not\in [0,2]\ \text{and}\ x_2 \in [1,2] \\ (0)(0), & x_1 \not\in [0,2]\ \text{and}\ x_2 \not\in [1,2] \\ \end{cases}$$
which can be expressed more simply as, $$f(x_1,x_2) = \begin{cases} \frac{1}{2}, & (x_1, x_2) \in [0,2]\times[1,2] \\ 0, & \text{else} \end{cases}$$
It is good to verify that your result is indeed a valid PDF, $$\iint_{\mathbb{R}^2}f(x)dx = \int_1^2 \int_0^2\frac{1}{2}dx_1dx_2 = 1$$
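As a further sanity check (my own sketch, with an arbitrarily chosen rectangle), one can simulate the two components independently and verify the product rule on a test event: $P(X_1 \le 1,\ X_2 \le 1.5) = P(X_1 \le 1)\,P(X_2 \le 1.5) = (\tfrac{1}{2})(\tfrac{1}{2}) = \tfrac{1}{4}$, consistent with the joint PDF being $\tfrac{1}{2}$ on $[0,2]\times[1,2]$.

```python
import random

# Draw X1 ~ Uniform[0,2] and X2 ~ Uniform[1,2] independently and estimate
# the probability of the joint event {X1 <= 1, X2 <= 1.5}.  Under
# independence this should be (1/2)(1/2) = 0.25.
random.seed(1)
trials = 1_000_000
hits = 0
for _ in range(trials):
    x1 = random.uniform(0, 2)
    x2 = random.uniform(1, 2)
    if x1 <= 1 and x2 <= 1.5:
        hits += 1

p_joint = hits / trials
print(f"estimated joint probability = {p_joint:.4f} (theory 0.25)")
```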