A joint distribution has domain $(-\infty, \infty) \times (-\infty, \infty)$. If we split each component of the Cartesian product in two by selecting some value $x$ and some value $y$, then we get four subsets,
$$(-\infty, x] \times (-\infty, y],\;\;(-\infty, x] \times [y,\infty),\\
[x, \infty) \times (-\infty, y],\;\;[x, \infty) \times [y,\infty)$$
made up of intersections of the two events
$$A = \{X\le x\}, \qquad B = \{Y\le y\}$$
and their corresponding complements.
Then (as the OP noted in a comment),
$$\Pr(X\ge x, Y\ge y) = P(A^c\cap B^c) = 1 - P(A\cup B)$$
$$=1-\big[P(A) + P(B) - P(A\cap B)\big]$$
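Before differentiating, the identity itself is easy to check by simulation. The following sketch uses independent standard uniforms purely as an illustrative choice of joint distribution (an assumption, not part of the question); the indicator version of inclusion–exclusion holds sample by sample:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Illustrative assumption: X, Y independent Uniform(0, 1)
X = rng.uniform(size=n)
Y = rng.uniform(size=n)
x, y = 0.3, 0.6

# Left-hand side: empirical Pr(X >= x, Y >= y)
lhs = np.mean((X >= x) & (Y >= y))

# Right-hand side: 1 - [P(A) + P(B) - P(A n B)] with A = {X <= x}, B = {Y <= y}
rhs = 1 - (np.mean(X <= x) + np.mean(Y <= y) - np.mean((X <= x) & (Y <= y)))

print(lhs, rhs)  # the two estimates coincide (ties at x or y have probability zero)
```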
So it appears that taking the cross-partial derivative of $\Pr(X\ge x, Y\ge y)$ should again yield the joint density. Let's verify that:
$$\Pr(X\ge x, Y\ge y) = \int_x^{\infty}\int_y^{\infty}f(s,t)dtds$$
$$\frac {\partial \Pr(X\ge x, Y\ge y)}{\partial y} = \int_x^{\infty} \left(\frac{\partial}{\partial y}\int_y^{\infty}f(s,t)dt\right)ds $$
$$=\int_x^{\infty}-f(s,y) ds$$
$$\frac {\partial^2 \Pr(X\ge x, Y\ge y)}{\partial y\partial x} = \frac {\partial }{\partial x} \int_x^{\infty}-f(s,y) ds = -\left(-f(x,y)\right) = f(x,y)$$
The above also means that we can obtain the joint pdf from any of the four joint events induced by the partition of the support, but in the two mixed cases we must multiply by $-1$:
$$\begin{align} f(x,y) =& \frac {\partial^2 \Pr(X\le x, Y\le y)}{\partial y\partial x}\\
=&\frac {\partial^2 \Pr(X\ge x, Y\ge y)}{\partial y\partial x}\\
=&-\frac {\partial^2 \Pr(X\le x, Y\ge y)}{\partial y\partial x}\\
=&-\frac {\partial^2 \Pr(X\ge x, Y\le y)}{\partial y\partial x}
\end{align}$$
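These four sign relations can be confirmed symbolically. A minimal sketch with SymPy, assuming (for illustration only) a joint density of independent $\mathrm{Exp}(1)$ variables, $f(s,t)=e^{-s-t}$ on the positive quadrant:

```python
import sympy as sp

x, y, s, t = sp.symbols('x y s t', positive=True)
# Illustrative assumption: joint density of two independent Exp(1) variables
f = sp.exp(-s - t)

# Pr(X >= x, Y >= y): integrate the density over the upper-right quadrant
survival = sp.integrate(f, (t, y, sp.oo), (s, x, sp.oo))

# Cross-partial of the survival function recovers the density (plus sign)
recovered = sp.diff(survival, x, y)
assert sp.simplify(recovered - f.subs({s: x, t: y})) == 0

# Mixed orthant Pr(X <= x, Y >= y) needs the extra factor of -1
mixed = sp.integrate(f, (t, y, sp.oo), (s, 0, x))
assert sp.simplify(-sp.diff(mixed, x, y) - f.subs({s: x, t: y})) == 0
```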
As indicated in the earlier comments, once you get a sample from the joint distribution of $(X_1,X_2,X_3)$,
$$(x_1^1,x_2^1,x_3^1),\ldots,(x_1^t,x_2^t,x_3^t)$$
the marginal sample
$$(x_1^1,x_2^1),\ldots,(x_1^t,x_2^t)$$is indeed a sample from the marginal joint distribution of $(X_1,X_2)$, and you can ignore the simulated $x_3^j$'s. They can, however, be useful in Monte Carlo evaluations through a technique called Rao–Blackwellisation, since the average$$\frac{1}{t}\sum_{i=1}^t h(x_1^i,x_2^i)$$is improved upon by the average$$\frac{1}{t}\sum_{i=1}^t \mathbb{E}\big[h(X_1,X_2)\,\big|\,x_3^i\big],$$as a conditional expectation has the same expectation as the original quantity but a reduced variance. See for instance this discussion on Cross Validated.
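The variance reduction is easy to see numerically. Here is a sketch under a hypothetical hierarchical model (my own choice for illustration, not from the question): $X_3 \sim N(0,1)$ and, given $X_3$, $X_1, X_2 \sim \text{iid } N(X_3, 1)$, with $h(x_1,x_2)=x_1 x_2$, so that $\mathbb{E}[h(X_1,X_2)\mid x_3] = x_3^2$ is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
t = 100_000

# Hypothetical hierarchical model (illustration only):
#   X3 ~ N(0, 1), and given X3, X1 and X2 are iid N(X3, 1)
x3 = rng.normal(size=t)
x1 = rng.normal(loc=x3)
x2 = rng.normal(loc=x3)

# Target: E[h(X1, X2)] with h(x1, x2) = x1 * x2; here E[X1 X2] = E[X3^2] = 1
raw = x1 * x2              # plain Monte Carlo terms h(x1^i, x2^i)
rao_blackwell = x3**2      # E[h(X1, X2) | X3 = x3^i] = x3^2 by cond. independence

print(raw.mean(), rao_blackwell.mean())  # both estimates ~ 1
print(raw.var(), rao_blackwell.var())    # RB variance is smaller (~2 vs ~5 here)
```

Both averages are unbiased for the same quantity, but the Rao–Blackwellised terms have smaller variance, which is exactly the point of the technique.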
Best Answer
If $X$ and $Y$ are independent random variables, then $$F_{X,Y}(x,y)=\int_{-\infty}^x \int_{-\infty}^y f_{X,Y}(w,v)\,dv\,dw = \int_{-\infty}^x \int_{- \infty}^{y} f_X(w)f_Y(v)\,dv\,dw$$ $$=\int_{-\infty}^x f_X(w)\,dw\int_{-\infty}^{y}f_Y(v)\,dv = F_X(x)F_Y(y).$$
Method 1 (joint pdf approach) gives: $$f_{X,Y}(x,y)=f_X(x)f_Y(y)=xy,$$ if $0\leq x \leq 2, 0\leq y \leq 1$ and zero otherwise. Then $$F_{X,Y}(x,y)=\int_0^x \int_0^yf_{X,Y}(w,v)\,dv\,dw=\int_0^x \int_0^y wv \, dv \, dw=\frac{x^2y^2}{4}.$$
Method 2 (marginal cdf approach) gives: $$F_X(x) = \int_0^x \frac{w}{2} \, dw =\frac{x^2}{4}$$ $$F_Y(y) = \int_0^y 2v \, dv =y^2.$$ It follows $$F_{X,Y}(x,y)=F_X(x)F_Y(y)=\frac{x^2y^2}{4}.$$
I should note the CDFs are also piecewise functions: $F_X(x)$ and $F_Y(y)$ take a value of zero if $x \leq 0$ and $y \leq 0$, respectively, and take a value of $1$ if $2 \leq x$ and $1 \leq y$, respectively. $F_{X,Y}(x,y)$ takes a value of zero if either $x \leq 0$ or $y \leq 0$, and takes a value of $1$ if $2 \leq x$ and $1 \leq y$.
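As a sanity check on this worked example, one can sample $X$ and $Y$ by inverse-CDF sampling (a standard technique; since $F_X(x)=x^2/4$ gives $F_X^{-1}(u)=2\sqrt{u}$ and $F_Y(y)=y^2$ gives $F_Y^{-1}(u)=\sqrt{u}$) and compare the empirical joint CDF against $x^2y^2/4$ at an interior point:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Inverse-CDF sampling for the example: F_X(x) = x^2/4 on [0,2], F_Y(y) = y^2 on [0,1]
X = 2 * np.sqrt(rng.uniform(size=n))
Y = np.sqrt(rng.uniform(size=n))

x0, y0 = 1.2, 0.7
empirical = np.mean((X <= x0) & (Y <= y0))
closed_form = x0**2 * y0**2 / 4

print(empirical, closed_form)  # the two values agree up to Monte Carlo error
```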