You have the right idea to integrate against $y$ to find the $x$-marginal, and integrate against $x$ to find the $y$-marginal, but you've forgotten to pull out the other variable as a constant:
$$
f_1(x)=\int_0^2f(x,y)dy=\int_0^2\frac{3xy^2}{16}dy=\frac{3x}{16}\int_0^2y^2dy=\frac{3x}{16}\frac{8}{3}=\frac{x}{2}
$$
Similarly,
$$
f_2(y)=\int_0^2\frac{3xy^2}{16}dx=\frac{3y^2}{16}\int_0^2 xdx=\frac{3y^2}{16}\cdot 2=\frac{3y^2}{8}
$$
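The two marginal integrals above can be checked symbolically; a minimal sketch using sympy (the support $[0,2]\times[0,2]$ is taken from the limits of integration above):

```python
import sympy as sp

x, y = sp.symbols("x y", nonnegative=True)
f = 3 * x * y**2 / 16  # joint density on [0, 2] x [0, 2]

# Marginal of X: integrate the joint density over y.
f1 = sp.integrate(f, (y, 0, 2))
# Marginal of Y: integrate the joint density over x.
f2 = sp.integrate(f, (x, 0, 2))

print(f1)  # x/2
print(f2)  # 3*y**2/8

# Sanity check: each marginal integrates to 1.
print(sp.integrate(f1, (x, 0, 2)))  # 1
print(sp.integrate(f2, (y, 0, 2)))  # 1
```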
Firstly, when working with absolutely continuous distributions (those with densities), the density is called the pdf (probability density function), not the pmf (probability mass function, which is used when there are "masses" at singletons).
The change of variables formula tells us that
$$
f_{(U,V)}(u,v)=f_{(X,Y)}(x,y)\left|\det \frac{d(x,y)}{d(u,v)}\right| = f_X(x)f_Y(y)\cdot|-u|,
$$
where we have used the fact that $f_{(X,Y)}=f_Xf_Y$ by independence of $X,Y$.
Write $x$ and $y$ in terms of $u,v$ and substitute them for $x,y$ in the formula above. You will find that $f_{(U,V)}$ ends up being the product of two familiar-looking densities $h_1(u),h_2(v)$. It then immediately follows that $f_U = h_1$, $f_V = h_2$, and so $f_{(U,V)} = f_U f_V$, which tells us that $U,V$ are independent.
Can you run through the calculation and see what $h_1,h_2$ appear?
Edit: Note that $\frac{d(x,y)}{d(u,v)}=J(u,v)$ is meant to be evaluated at $(u,v)$.
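With the substitution $x=uv$, $y=u(1-v)$ (as in the hint below), the Jacobian determinant can be verified symbolically; a minimal sketch with sympy:

```python
import sympy as sp

u, v = sp.symbols("u v")
x = u * v          # X = U*V
y = u * (1 - v)    # Y = U*(1 - V)

# Jacobian matrix d(x, y)/d(u, v) and its determinant.
J = sp.Matrix([[sp.diff(x, u), sp.diff(x, v)],
               [sp.diff(y, u), sp.diff(y, v)]])
det = sp.simplify(J.det())
print(det)  # -u
```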
Hint 1: $x=uv$, $y=u(1-v)$, and you correctly calculated $J(u,v)=-u$. Plugging into the densities, I get
$$
f_{(U,V)}(u,v) = \begin{cases}
\dfrac{e^{-u} (u v)^{a-1} \left(u(1-v)\right)^{b}}{(1-v)\,\Gamma(a)\,\Gamma(b)} & 0\leq uv,\ 0\leq u(1-v), \\
0 & \text{else.}
\end{cases}
$$
Do you see what $h_1,h_2$ are now?
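As a numerical sanity check (not part of the original derivation), one can verify pointwise that this joint density factors into a $\mathrm{Gamma}(a+b,1)$ density in $u$ and a $\mathrm{Beta}(a,b)$ density in $v$; the parameter values and evaluation points below are arbitrary:

```python
import math

def f_uv(u, v, a, b):
    """Joint density of (U, V) from the change of variables (u > 0, 0 < v < 1)."""
    return (math.exp(-u) * (u * v)**(a - 1) * (u * (1 - v))**b
            / ((1 - v) * math.gamma(a) * math.gamma(b)))

def gamma_pdf(u, shape):
    """Gamma(shape, rate=1) density: candidate h1 with shape a + b."""
    return u**(shape - 1) * math.exp(-u) / math.gamma(shape)

def beta_pdf(v, a, b):
    """Beta(a, b) density: candidate h2."""
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return v**(a - 1) * (1 - v)**(b - 1) / B

# Spot-check that f_{(U,V)}(u, v) = h1(u) * h2(v) at a few points.
a, b = 2.5, 1.5
for u, v in [(0.7, 0.3), (1.9, 0.55), (3.2, 0.81)]:
    assert math.isclose(f_uv(u, v, a, b), gamma_pdf(u, a + b) * beta_pdf(v, a, b))
```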
Best Answer
There is no joint density function, since the random variable $(U,V,W,Z)$ takes values in the subset $D=\{(f_1(x),f_2(x),f_3(x),f_4(x))\mid x\in\mathbb R\}$ of $\mathbb R^4$, which has Lebesgue measure zero.
Informally, $D$ has co-dimension $3$, hence one can compare $D$ to a line in $\mathbb R^4$. Formally, for every measurable function $\varphi$ on $\mathbb R^4$, $$ \mathrm E(\varphi(U,V,W,Z))=\int\varphi(f_1(x),f_2(x),f_3(x),f_4(x))\,g(x)\mathrm dx, $$ where $g$ is the density of the distribution of $X$ hence $\mathrm E(\varphi(U,V,W,Z))$ is an integral on (a subset of) $\mathbb R$ instead of $\mathbb R^4$.
The simplest analogue is when $U=V=X$ with $X$ uniformly distributed on $[0,1]$. Then $(U,V)$ is uniformly distributed on the diagonal $\Delta=\{(x,x)\mid x\in[0,1]\}$ hence the distribution of $(U,V)$ is $$ \mathrm dP_{(U,V)}(u,v)=\mathbf 1_{u\in[0,1]}\,\delta_u(\mathrm dv)\,\mathrm du, $$ where, for every $u$, $\delta_u$ is the Dirac distribution at $u$. One sees that $\mathrm dP_{(U,V)}(u,v)$ has no density with respect to Lebesgue measure $\mathrm du\mathrm dv$.
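The simplest analogue above can also be seen by simulation; a small sketch (the sample size is arbitrary) showing that all the mass of $(U,V)=(X,X)$ sits on the diagonal and that expectations reduce to one-dimensional integrals:

```python
import random

random.seed(0)
n = 200_000

# (U, V) = (X, X) with X uniform on [0, 1]: all mass sits on the diagonal.
samples = [(x, x) for x in (random.random() for _ in range(n))]

# P(U = V) = 1, so the pair never leaves the measure-zero set Delta.
assert all(u == v for u, v in samples)

# E[UV] = E[X^2] = 1/3, not the 1/4 that a genuinely 2-dimensional
# product density du dv would give for independent uniforms.
mean_uv = sum(u * v for u, v in samples) / n
print(round(mean_uv, 2))  # close to 0.33
```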