Your claimed result is not true, which probably explains why you're having trouble seeing it.
For simplicity I'll let $a = 0, b = 1$. Results for general $a$ and $b$ can be obtained by a linear transformation.
Let $X_1, \ldots, X_n$ be independent uniform $(0,1)$; let $Y$ be their minimum and let $X$ be their maximum. Then the probability that $X \in [x, x+\delta x]$ and $Y \in [y, y+\delta y]$, for some small $\delta x$ and $\delta y$, is
$$ n(n-1) (\delta x) (\delta y) (x-y)^{n-2} $$
since we have to choose which of $X_1, \ldots, X_n$ is the smallest and which is the largest; then we need the minimum and maximum to fall in the correct intervals; then finally we need everything else to fall in the interval of size $x-y$ in between. The joint density is therefore $f_{X,Y}(x,y) = n(n-1) (x-y)^{n-2}$.
Then the density of $Y$ can be obtained by integrating. Alternatively, $P(Y \ge y) = (1-y)^n$ and so $f_Y(y) = n(1-y)^{n-1}$.
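Both identities are easy to check numerically. Here is a Monte Carlo sketch (the choices $n=4$, $y_0=0.2$, $x_0=0.8$ are arbitrary); it uses the fact that $\{Y \ge y,\ X \le x\}$ is exactly the event that all $n$ points land in $[y,x]$, which has probability $(x-y)^n$:

```python
import random

random.seed(0)
n, trials = 4, 200_000
y0, x0 = 0.2, 0.8

joint_hits = marginal_hits = 0
for _ in range(trials):
    sample = [random.random() for _ in range(n)]
    lo, hi = min(sample), max(sample)
    if lo >= y0:
        marginal_hits += 1          # event {Y >= y0}
        if hi <= x0:
            joint_hits += 1         # event {Y >= y0, X <= x0}

# All n points land in [y0, x0] iff Y >= y0 and X <= x0.
est_joint = joint_hits / trials     # should be close to (x0 - y0)**n
est_marg = marginal_hits / trials   # should be close to (1 - y0)**n
```

With 200,000 trials both estimates typically land within a few thousandths of $(0.6)^4 = 0.1296$ and $(0.8)^4 = 0.4096$.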
The conditional density you seek is then
$$ f_{X|Y}(x|y) = {n(n-1) (x-y)^{n-2} \over n(1-y)^{n-1}} = {(n-1) (x-y)^{n-2} \over (1-y)^{n-1}}, $$
where of course we restrict to $x > y$.
For a numerical example, let $n = 5, y = 2/3$. Then we get $f_{X|Y}(x|y) = 4 (x-2/3)^3 / (1/3)^4 = 324 (x-2/3)^3$ on $2/3 \le x \le 1$. This is larger near $1$ than near $2/3$, which makes sense -- it's hard to squeeze a lot of points into a small interval!
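This example can be spot-checked by simulation (a sketch, not exact: conditioning on $Y = 2/3$ is approximated by a thin band above $2/3$). Integrating $x$ against the density gives $E(X \mid Y = y) = y + (n-1)(1-y)/n$, which is $14/15 \approx 0.9333$ here:

```python
import random

random.seed(1)
n, trials = 5, 400_000
y0, band = 2/3, 0.01   # condition on Y falling in [y0, y0 + band]

maxima = []
for _ in range(trials):
    sample = [random.random() for _ in range(n)]
    if y0 <= min(sample) <= y0 + band:
        maxima.append(max(sample))

cond_mean = sum(maxima) / len(maxima)   # estimates E(X | Y ~= 2/3)
target = y0 + (n - 1) * (1 - y0) / n    # = 14/15, from the density above
```

The band introduces a small bias, so only rough agreement should be expected.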
The result you quote holds only when $n = 2$ -- if I have two IID uniform(0,1) random variables, then conditional on a choice of the minimum, the maximum is uniform on the interval between the minimum and 1. This is because we don't have to worry about fitting points between the minimum and the maximum, because there are $n - 2 = 0$ of them.
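The $n = 2$ case is also easy to see in simulation (again a sketch, with exact conditioning approximated by a thin band around a fixed minimum $m = 0.5$): given the minimum, the rescaled maximum $(X - m)/(1 - m)$ should look uniform on $(0,1)$.

```python
import random

random.seed(2)
m, band, trials = 0.5, 0.01, 200_000

rescaled = []
for _ in range(trials):
    a, b = random.random(), random.random()
    lo, hi = min(a, b), max(a, b)
    if m <= lo <= m + band:
        # Map the conditional maximum onto (0, 1); should be ~uniform.
        rescaled.append((hi - lo) / (1 - lo))

mean_r = sum(rescaled) / len(rescaled)                      # uniform(0,1) has mean 1/2
frac_low = sum(r < 0.25 for r in rescaled) / len(rescaled)  # should be ~1/4
```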
For any $i=1,\ldots,n$ we have, by the law of total expectation, conditioning on whether $U_i$ attains the minimum,
\begin{eqnarray*}
E(U_i\mid \min\{U_1,\ldots,U_n\}=t) &=& E(U_i\mid U_i=t,\ \min\{U_1,\ldots,U_n\}=t)\,P(U_i=t\mid \min\{U_1,\ldots,U_n\}=t) \\
&& +\, E(U_i\mid U_i\neq t,\ \min\{U_1,\ldots,U_n\}=t)\,P(U_i\neq t\mid \min\{U_1,\ldots,U_n\}=t) \\
&=& t\cdot \dfrac{1}{n}\; + \;\dfrac{t+1}{2}\cdot \dfrac{n-1}{n},
\end{eqnarray*}
since, conditionally on not being the minimum, $U_i$ is uniform on $(t,1)$ with mean $(t+1)/2$.
By linearity of expectation,
\begin{eqnarray*}
E(U_1+\cdots+U_n\mid \min\{U_1,\ldots,U_n\}=t) &=& \sum_{i=1}^n E(U_i\mid \min\{U_1,\ldots,U_n\}=t) \\
&=& \sum_{i=1}^n \left(t\cdot \dfrac{1}{n}\; + \;\dfrac{t+1}{2}\cdot \dfrac{n-1}{n}\right) \\
&=& t + (n-1)\dfrac{t+1}{2}.
\end{eqnarray*}
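This closed form is easy to spot-check with a quick simulation (a sketch, not a proof: exact conditioning on the minimum equal to $t$ is approximated by a thin band, and $n=4$, $t=0.3$ are arbitrary choices):

```python
import random

random.seed(3)
n, trials = 4, 300_000
t, band = 0.3, 0.01

sums = []
for _ in range(trials):
    sample = [random.random() for _ in range(n)]
    if t <= min(sample) <= t + band:
        sums.append(sum(sample))

cond_sum = sum(sums) / len(sums)     # estimates E(U_1+...+U_n | min ~= t)
target = t + (n - 1) * (t + 1) / 2   # the closed form: 2.25 here
```

The thin band introduces a small upward bias, so only rough agreement should be expected.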
Note:
This result is intuitive: one of the values equals $t$, and the remaining $n-1$ values each average the midpoint of the interval $(t,1)$.
As indicated in the comments, a useful idea when maxima and minima are involved is to consider well adapted events. Here, introducing $Z=\min\{X,Y\}$ and $W=\max\{X,Y\}$, one sees that $[z\leqslant Z,W\leqslant w]$ is $[z\leqslant X\leqslant w]\cap[z\leqslant Y\leqslant w]$ for every nonnegative $z$ and $w$ such that $z\leqslant w$.

Here is the computation: since the probability that a standard exponential random variable is $\geqslant x$ is $\mathrm e^{-x}$ for every nonnegative $x$, the events $[z\leqslant X\leqslant w]$ and $[z\leqslant Y\leqslant w]$ each have probability $\mathrm e^{-z}-\mathrm e^{-w}$. Hence, $$ \mathrm P(z\leqslant Z,W\leqslant w)=(\mathrm e^{-z}-\mathrm e^{-w})^2. $$ Differentiating this with respect to $z$ and $w$ yields the density of $(Z,W)$ as $$ 2\mathrm e^{-z-w}\cdot[0\leqslant z\leqslant w]. $$

This formula is all right but, because of the indicator function in it, it is easy to make mistakes when using it, so let us simplify. Let $V=W-Z$; then $Z\geqslant0$, $V\geqslant 0$, and, using $v=w-z$, the density becomes $$ 2\mathrm e^{-z-(v+z)}\cdot[0\leqslant z\leqslant v+z]=2\mathrm e^{-2z}\cdot[z\geqslant 0]\cdot\mathrm e^{-v}\cdot[v\geqslant0]. $$ This proves that $Z$ and $V$ are independent, with $Z$ exponential of parameter $2$ and $V$ exponential of parameter $1$, and yields at last the answer to the initial question: $$ \mathrm E(W\mid Z)=\mathrm E(V+Z\mid Z)=\mathrm E(V)+Z=1+Z. $$

The same technique yields that the order statistic $(X^{(k)})_{1\leqslant k\leqslant n}$ of an i.i.d. sample $(X_k)_{1\leqslant k\leqslant n}$ of standard exponential random variables, defined by the conditions that $\{X^{(1)},X^{(2)},\ldots,X^{(n)}\}=\{X_1,X_2,\ldots,X_n\}$ and that $X^{(1)}<X^{(2)}<\cdots <X^{(n)}$, is distributed like $(Z_1,Z_1+Z_2,\ldots,Z_1+Z_2+\cdots+Z_n)$ for independent exponential random variables $(Z_k)_{1\leqslant k\leqslant n}$ such that the distribution of $Z_k$ is exponential with parameter $n-k+1$.
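A quick simulation sketch of the two-variable case (the sample size is an arbitrary choice): it checks that $Z$ has mean $1/2$, that $V=W-Z$ has mean $1$, and that $Z$ and $V$ are uncorrelated, consistent with the independence claim.

```python
import random

random.seed(4)
trials = 300_000

zs, vs = [], []
for _ in range(trials):
    x = random.expovariate(1.0)   # standard exponential X
    y = random.expovariate(1.0)   # standard exponential Y
    z, w = min(x, y), max(x, y)
    zs.append(z)
    vs.append(w - z)              # V = W - Z

mean_z = sum(zs) / trials         # Exp(2) has mean 1/2
mean_v = sum(vs) / trials         # Exp(1) has mean 1
# Sample covariance; should be near 0 since Z and V are independent.
cov_zv = sum(z * v for z, v in zip(zs, vs)) / trials - mean_z * mean_v
```

Uncorrelatedness is of course weaker than independence; this is only a consistency check, not a verification of the full claim.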
A consequence is that, for every $1\leqslant k\leqslant\ell\leqslant n$, $$ \mathrm E(X^{(\ell)}\mid X^{(k)})=X^{(k)}+\sum\limits_{i=n-\ell+1}^{n-k}\frac1i. $$
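As a final sanity check (a Monte Carlo sketch with $n=3$, $k=1$, $\ell=3$ chosen for concreteness): the representation above says $X^{(3)}-X^{(1)}=Z_2+Z_3$ is independent of $X^{(1)}$, with mean $\frac12+\frac11=\frac32$, matching the displayed sum, while $X^{(1)}$, the minimum of three standard exponentials, has mean $1/3$.

```python
import random

random.seed(5)
n, trials = 3, 300_000

firsts, gaps = [], []
for _ in range(trials):
    xs = sorted(random.expovariate(1.0) for _ in range(n))
    firsts.append(xs[0])
    gaps.append(xs[-1] - xs[0])    # X^(3) - X^(1)

mean_first = sum(firsts) / trials  # min of 3 exponentials: Exp(3), mean 1/3
mean_gap = sum(gaps) / trials      # should be 1/2 + 1/1 = 3/2
```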