An explanation for the result that $X$, $1-Y$, and $Y-X$ have the same distribution in this case is as follows.
First, consider a plane with coordinate axes $u$ and $v$ and let
$(U,V)$ be a random point in the plane chosen according to some
joint density function $f_{U,V}(u,v)$. $U$ and $V$ need not be
independent random variables. Then,
$$\begin{align*}
P\{X > \alpha\} &= P\{\min(U,V) > \alpha\} = P\{U > \alpha, V > \alpha\},\\
P\{1-Y > \alpha\} &= P\{1-\max(U,V) > \alpha\} = P\{\max(U,V) < 1 - \alpha\}\\
&= P\{U < 1- \alpha, V < 1- \alpha\},\\
P\{Y-X > \alpha\} &= P\{\max(U,V)-\min(U,V) > \alpha\}\\
&= P\{U-V > \alpha\} + P\{V-U > \alpha\}.
\end{align*}$$
These three probabilities can be found in the general case by integrating $f_{U,V}(u,v)$ over the appropriate region, which in the three cases respectively is:

- the northeast quadrant of the plane with southwest corner $(\alpha, \alpha)$;
- the southwest quadrant of the plane with northeast corner $(1-\alpha, 1-\alpha)$;
- the union of the half-plane $v < u - \alpha$ below the line $v = u - \alpha$ and the half-plane $v > u + \alpha$ above the line $v = u + \alpha$.
So much for generalities. If the random point $(U,V)$ is uniformly
distributed on a region $A$ of the plane (that is,
$f_{U,V}(u,v)$ is nonzero and constant for $(u,v) \in A$,
$f_{U,V}(u,v) = 0$ for $(u,v) \notin A$) and $B$ is any
region of the plane, then
$$P\{(U,V) \in B\} = P\{(U,V) \in A\cap B\}
= \frac{\mathrm{Area}(A\cap B)}{\mathrm{Area}(A)}.$$
In particular, if we can compute areas via mensuration
formulas learned in school, we do not need to integrate
formally.
Finally, in the special case when $A$ is the unit-area square with
opposite corners $(0,0)$ and $(1,1)$, and $\alpha$ is a number between
$0$ and $1$,
$$\begin{align*}
P\{X > \alpha\} &= P\{U > \alpha, V > \alpha\}\\
&= P\{(U,V) \in ~\mathrm{square~with~opposite~corners}~ (\alpha,\alpha)
~ \mathrm{and}~ (1,1)\}\\
&= (1-\alpha)^2,\\
P\{1-Y > \alpha\} &= P\{U < 1- \alpha, V < 1- \alpha\}\\
&= P\{(U,V) \in ~\mathrm{square~with~opposite~corners}~ (0,0)
~ \mathrm{and}~ (1-\alpha,1-\alpha)\}\\
&= (1-\alpha)^2,\\
P\{Y-X > \alpha\} &= P\{U-V > \alpha\} + P\{V-U > \alpha\}\\
&= P\{(U,V) \in ~\mathrm{triangle~with~corners}~ (\alpha,0),
(1,1-\alpha) ~\mathrm{and}~(1,0)\}\\
&\quad \quad
+ P\{(U,V) \in ~\mathrm{triangle~with~corners}~ (0,\alpha),
(1-\alpha,1) ~\mathrm{and}~(0,1)\}\\
&= \frac{1}{2}(1-\alpha)^2 + \frac{1}{2}(1-\alpha)^2 = (1-\alpha)^2.\\
\end{align*}$$
So the complementary cumulative distribution function of each of the three random variables $X$, $1-Y$, and $Y-X$ is the same function $(1-\alpha)^2$ in this case. Differentiating the common CDF $1-(1-\alpha)^2$ shows that the three random variables also share the density function $2(1-\alpha)$, $0 \leq \alpha \leq 1$.
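If you want a numerical sanity check, here is a minimal Monte Carlo sketch, assuming the special case just treated: $U$ and $V$ independent and uniform on $(0,1)$, so that $(U,V)$ is uniform on the unit square.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = 1_000_000
u = rng.random(samples)  # U ~ Uniform(0, 1)
v = rng.random(samples)  # V ~ Uniform(0, 1)

x = np.minimum(u, v)     # X = min(U, V)
y = np.maximum(u, v)     # Y = max(U, V)

alpha = 0.3
print(np.mean(x > alpha))       # P{X > alpha},     ~ 0.49
print(np.mean(1 - y > alpha))   # P{1 - Y > alpha}, ~ 0.49
print(np.mean(y - x > alpha))   # P{Y - X > alpha}, ~ 0.49
print((1 - alpha) ** 2)         # predicted (1 - alpha)^2 = 0.49
```

All three empirical tail probabilities should be close to $(1-0.3)^2 = 0.49$, matching the computation above.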
You need to pull $P(U>s)$ from the original distribution, not from the limited range of $(t,1)$. Therefore $P(U>s) = \frac{1-s}{1-0}$: it is the length of the segment $[s,1]$ divided by the length of the whole segment. Then divide by the probability of the condition, $P(U>a) = \frac{1-a}{1-0}$, and you have your final answer.
The key thing in conditional probability is that the probabilities on the right-hand side come from the original distribution, not from the distribution induced by the condition; that conditional distribution is the thing we are computing, so it cannot also serve as the input.
A concrete example using the same range:
What's the probability that U is larger than $\frac 34$ given that U is larger than $\frac 12$?
We can tell, relatively intuitively, that this will be equal to $\frac 12$, since $\frac 34$ is halfway between $\frac 12$ and $1$. But to calculate directly using the formula:
$$P(U>\frac34|U>\frac12) = \frac {P(U>\frac34)}{P(U>\frac12)}$$
From the original range, $P(U>\frac34) = \frac14$ and $P(U>\frac12) = \frac12$. So:
$$P(U>\frac34|U>\frac12) = \frac{\frac14}{\frac12} = \frac12$$
Again, just make sure you're pulling the top probability from the original distribution.
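To see this numerically, here is a minimal simulation sketch, assuming $U$ is uniform on $(0,1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(1_000_000)  # U ~ Uniform(0, 1)

# Direct estimate: among samples with U > 1/2, how often is U > 3/4?
conditioned = u[u > 0.5]
print(np.mean(conditioned > 0.75))           # ~ 0.5

# Same quantity via the formula P(U > 3/4) / P(U > 1/2),
# with both probabilities pulled from the original distribution
print(np.mean(u > 0.75) / np.mean(u > 0.5))  # ~ 0.5
```

Both printed values should be close to $\frac12$, agreeing with the formula.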
Best Answer
$[t]$ is the floor function, and $t$ just represents a generic argument. So, for example, $[0.5]=0$, $[0.9]=0$, $[1.01]=1$, $[1]=1$, $[23.567]=23$, and so on. You simply ignore what is written after the decimal point (note that this is not the same thing as rounding: $[0.9]=0$, whereas rounding would give $1$).
With non-smooth functions such as the floor function, the safest way to go is to use the cumulative distribution function, or CDF. For the uniform distribution on $(0,1)$ this is given by:
$$F_{U}(y)=\Pr(U<y)=\int_{0}^{y}f_{U}(t)\,dt=\int_{0}^{y}dt=y, \qquad 0 \leq y \leq 1.$$
Now the good thing about CDFs is that you can simply substitute the functional relation in, but only once you have inverted the floor function. This inversion is not one-to-one, so a standard change of variables using Jacobians does not apply. For example, suppose $X=0$. Then we know that $[nU]=0$, which means that $nU<1$, which implies that $U<n^{-1}$. We can work out this probability directly from the CDF:
$$\Pr(X=0)=\Pr(U<n^{-1})=F_{U}(n^{-1})=n^{-1}$$
The reason we can do this is that the two propositions " $X=0$ " and " $U<n^{-1}$ " are equivalent - one occurs if and only if the other occurs. So they must have the same "truth value" and hence also the same probability.
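As a quick numerical check of this probability, here is a minimal simulation sketch; the value $n=4$ is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                        # arbitrary illustration value
u = rng.random(1_000_000)    # U ~ Uniform(0, 1)
x = np.floor(n * u)          # X = [nU], the floor of nU

print(np.mean(x == 0))  # empirical Pr(X = 0), ~ 0.25
print(1 / n)            # predicted n^{-1} = 0.25
```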
It is not too hard to continue from here. Suppose $X=1$; then we must have $nU<2$ (or else $X>1$), and we must also have $nU>1$ (or else $X=0$, as we have just seen). So the condition equivalent to $X=1$ in terms of $U$ is $1<nU<2$. I'll stop my answer here so you can work out the general form of the probability mass function for $X$, i.e. $\Pr(X=z)$ for a general argument $z$.
One small hint is to note that $\Pr(a<U<b)=\Pr(U<b)-\Pr(U<a)=b-a$ for a uniform distribution.
I can post the full answer if you wish, but you may not learn as much as you would by working it out yourself.