The direct intuition (as signified by the adverb "obviously")
is that we cannot know either $X$ or $X^2$ a priori,
but once we find out the value of $X$, we know the exact value of $X^2$.
This is something that never happens with two non-trivial
independent random variables $X$ and $Y$.
(The only way we could know the value of an independent variable $Y$ immediately after learning $X$ is if we already knew the value of $Y$ beforehand,
for example if $Y$ is constant.)
If it does not quack, it is not a duck.
A more formal argument is as follows.
If $X$ and $Y$ are independent, then for any sets $A$, $B$ in the
respective ranges of $X$ and $Y$,
$$ P((X \in A) \text{ and } (Y \in B)) = P(X \in A)\, P(Y \in B). $$
This follows from the joint density function you tried to compute,
but you do not need to know the full joint density
in order to check this condition.
In particular, let $X \sim N(0, 1)$, $Y = X^2$, $A = [-1, 1]$, and $B = [0, 1]$.
Then $X \in [-1, 1]$ if and only if $Y = X^2 \in [0, 1]$,
that is, $X \in A$ occurs if and only if $Y \in B$ occurs,
so $Y \in B$ occurs if and only if both $X \in A$ and $Y \in B$ occur.
Therefore
$$P(X \in A) = P(Y \in B) = P((X \in A) \text{ and } (Y \in B)).$$
But $P(X\in A) = \int_{-1}^1 f_X(x)\, dx \approx 0.683 < 1$, and $P(Y \in B) = P(X \in A) > 0$.
Multiplying a positive number by a factor strictly less than $1$ makes it strictly smaller, so
$$ P(X \in A)\, P(Y \in B) < P(Y \in B).$$
But $P(Y \in B) = P((X \in A) \text{ and } (Y \in B))$, so
$$ P(X \in A)\, P(Y \in B) < P((X \in A) \text{ and } (Y \in B)).$$
Therefore $X$ and $Y = X^2$ are not independent.
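The argument above can be sanity-checked numerically. The following sketch (not part of the proof) simulates $X \sim N(0,1)$, sets $Y = X^2$, and compares $P(X \in A \text{ and } Y \in B)$ with the product $P(X \in A)\,P(Y \in B)$ for $A = [-1,1]$, $B = [0,1]$; independence would force these to be equal.

```python
# Monte Carlo check: X ~ N(0, 1), Y = X^2, A = [-1, 1], B = [0, 1].
# Since Y = X^2, the events {X in A} and {Y in B} are literally the same
# event, so the joint probability equals the marginal, not the product.
import random

random.seed(0)
n = 200_000
in_A = in_B = in_both = 0
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    y = x * x
    a = -1.0 <= x <= 1.0
    b = 0.0 <= y <= 1.0
    in_A += a
    in_B += b
    in_both += a and b

p_A, p_B, p_AB = in_A / n, in_B / n, in_both / n
print(p_A, p_B, p_AB)   # all three are close to 0.683
print(p_A * p_B)        # close to 0.683^2, far from p_AB
```

Note that the three counters agree exactly, not just approximately: $|x| \le 1$ holds if and only if $x^2 \le 1$, which is the whole point of the dependence.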
The support comprises a triangle with vertices $(0,0)$, $(1,1)$, and $(-1,1)$.
Thus the proper calculation gives $\operatorname{E}[X] = 0$, $\operatorname{E}[Y] = 2/3$, and
$$\operatorname{E}[XY] = \int_{y=0}^1 \int_{x=-y}^y xy \, dx \, dy = 0.$$
Hence
$$\operatorname{Cov}[X,Y] = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y] = 0$$
as claimed, but clearly $X$ and $Y$ are not independent.
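The triangle example can also be verified numerically. This sketch samples $(X, Y)$ uniformly over the triangle $|x| \le y \le 1$ by rejection (the triangle has area $1$, so the uniform density on it is $1$) and estimates $\operatorname{E}[X]$, $\operatorname{E}[Y]$, and $\operatorname{Cov}[X,Y]$.

```python
# Rejection-sample the triangle with vertices (0,0), (1,1), (-1,1),
# i.e. the region |x| <= y <= 1, then estimate the moments.
import random

random.seed(1)
n = 400_000
sx = sy = sxy = 0.0
count = 0
while count < n:
    x = random.uniform(-1.0, 1.0)
    y = random.uniform(0.0, 1.0)
    if abs(x) <= y:          # keep only points inside the triangle
        sx += x
        sy += y
        sxy += x * y
        count += 1

ex, ey, exy = sx / n, sy / n, sxy / n
cov = exy - ex * ey
print(ex, ey, exy, cov)      # roughly 0, 2/3, 0, 0
```

The estimated covariance is near $0$ even though knowing $Y$ clearly constrains $X$ to the interval $[-Y, Y]$.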
Best Answer
It is not necessary to find these functions.
To prove dependence, it is enough to find sets $A, B$ such that $$P(X\in A\wedge Y\in B)\neq P(X\in A)\,P(Y\in B).$$
To prove that the covariance is $0$, it is enough to show that $$\mathbb E[XY]=\mathbb E[X]\,\mathbb E[Y],$$
and for that you do not need the PDFs either.
E.g. note that: $$\mathbb E[XY]=\int_0^1\sin2\pi z\,\cos2\pi z~\mathrm dz=\frac12\int_0^1\sin4\pi z~\mathrm dz=0.$$
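Assuming, as the integral suggests, that $X = \sin 2\pi Z$ and $Y = \cos 2\pi Z$ with $Z \sim \mathrm{Uniform}(0,1)$, the integral can be checked numerically; since $\sin t \cos t = \tfrac12 \sin 2t$, the integrand is $\tfrac12 \sin 4\pi z$, which integrates to $0$ over two full periods.

```python
# Midpoint-rule evaluation of E[XY] = integral_0^1 sin(2*pi*z) cos(2*pi*z) dz,
# assuming X = sin(2*pi*Z), Y = cos(2*pi*Z), Z ~ Uniform(0, 1).
import math

n = 100_000
exy = sum(math.sin(2 * math.pi * (k + 0.5) / n) *
          math.cos(2 * math.pi * (k + 0.5) / n) for k in range(n)) / n
print(exy)   # ~ 0, up to numerical error
```

So $\mathbb E[XY] = 0 = \mathbb E[X]\,\mathbb E[Y]$, yet $X$ and $Y$ are dependent, since $X^2 + Y^2 = 1$ always.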