Every random variable $X$ has a distribution function $F(x)=\mathbb P(X\leq x)$, by definition. If $X$ is a continuous random variable, we also have, by definition:
$$
F(x)=\mathbb P(X\leq x)=\int_{-\infty}^xf(t)dt,
$$
where $f$ (non-negative, continuous) is our density function. Therefore,
$$
f(x)=\frac{d}{dx}F(x),
$$
by the Fundamental Theorem of Calculus (at the points where $f$ is continuous).
Since your distribution function is defined piecewise, you have to compute the derivative piecewise too. I will show one case: $F(x)=0$ for $x<-1$, so $\frac{d}{dx}F(x)=0$ for $x<-1$. You should finish the other two cases, and then write out $f(x)$ in the same way that $F(x)$ has been written out, i.e. by cases.
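Since the question's actual $F$ isn't reproduced here, the following is a sketch of the by-cases computation in SymPy for a hypothetical three-case CDF (the uniform distribution on $[-1,1]$); only its first case, $F(x)=0$ for $x<-1$, is taken from the answer above.

```python
# Case-by-case differentiation in SymPy. The CDF below is hypothetical
# (uniform on [-1, 1]); only its first case, F(x) = 0 for x < -1,
# matches the case treated in the answer.
import sympy as sp

x = sp.symbols('x', real=True)

# Three-case CDF, written out with Piecewise
F = sp.Piecewise((0, x < -1), ((x + 1) / 2, x <= 1), (1, True))

# Differentiating handles each case separately, as described above
f = sp.diff(F, x)

print(f.subs(x, -2))  # 0   (case x < -1)
print(f.subs(x, 0))   # 1/2 (middle case)
print(f.subs(x, 2))   # 0   (case x > 1)
```

`sp.diff` differentiates each branch of the `Piecewise` separately, which is exactly the by-cases computation described above.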
In two dimensions, even if $f$ is continuous, the function $F$ can be continuous in each variable without being totally differentiable. So you can't simply differentiate as in the one-dimensional case.
But you can do this: think of $F(A)$ as the "measure" of the set $A$ with respect to $F$. Then, $F(x,y)$ is the "measure" of $(-\infty,x] \times (-\infty,y]$ with respect to $F$.
Now, what does the density $f$ at a point $(x_0,y_0)$ hint at, or mean? It means that if I take a very small region $V$ containing $(x_0,y_0)$, the "measure" of $V$ with respect to $F$ should be approximately $f(x_0,y_0)$ times the area of $V$ as a geometrical region of the plane.
In particular, if I take rectangles around $(x_0,y_0)$, then $f(x_0,y_0)$ times the area of these rectangles should approximate the "measure" of these rectangles with respect to $F$.
Suppose I have a rectangle $[x_0-\epsilon,x_0 + \epsilon] \times [y_0- \epsilon,y_0 + \epsilon]$ around $(x_0,y_0)$. Its area is clearly $4 \epsilon^2$ (the usual formula: product of sides).
What is its "measure" under $F$? For this, draw a diagram, and convince yourself that the "measure" of $[x_0-\epsilon,x_0 + \epsilon] \times [y_0- \epsilon,y_0 + \epsilon]$ is equal to:
$$
F(x_0+\epsilon, y_0+\epsilon) - F(x_0+\epsilon,y_0-\epsilon) - F(x_0-\epsilon,y_0+\epsilon) + F(x_0-\epsilon,y_0-\epsilon)
$$
To see this, interpret each term in the signed sum above as the $F$-measure of a region. Add and subtract the overlapping regions according to their signs, and you will see that only the rectangle around $(x_0,y_0)$ remains.
Therefore, the result is, or at least should be:
$$
f(x_0,y_0) = \lim_{\epsilon \to 0}\frac{F(x_0+\epsilon, y_0+\epsilon) - F(x_0+\epsilon,y_0-\epsilon) - F(x_0-\epsilon,y_0+\epsilon) + F(x_0-\epsilon,y_0-\epsilon)}{4 \epsilon^2}
$$
Use everything you know about derivatives (the FTC, etc.) to see this. Note that we don't require total differentiability of $F$, only something weaker that is implied by its form: the existence of the relevant partial derivatives.
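As a sanity check, the limit formula can be tested numerically on a joint CDF whose density is known. Here I assume, purely for illustration, two independent $\mathrm{Exp}(1)$ variables, so $F(x,y)=(1-e^{-x})(1-e^{-y})$ and the true density is $f(x,y)=e^{-(x+y)}$:

```python
# Numerical check of the limit formula. The joint CDF is an illustrative
# assumption: two independent Exp(1) variables, so
# F(x, y) = (1 - e^-x)(1 - e^-y) and the true density is e^-(x+y).
import math

def F(x, y):
    return (1 - math.exp(-x)) * (1 - math.exp(-y))

def density_estimate(x0, y0, eps):
    # The signed four-corner sum from above, divided by the rectangle area
    num = (F(x0 + eps, y0 + eps) - F(x0 + eps, y0 - eps)
           - F(x0 - eps, y0 + eps) + F(x0 - eps, y0 - eps))
    return num / (4 * eps**2)

x0, y0 = 1.0, 0.5
print(density_estimate(x0, y0, 1e-4))  # ≈ 0.22313
print(math.exp(-(x0 + y0)))            # ≈ 0.22313
```

Note that $\epsilon$ should not be taken too small in floating point, or the four-corner sum loses its significant digits to cancellation.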
However, IF the second-order mixed partial derivative of $F$ exists and is continuous, then you can show that $$
f(x_0,y_0) = \frac{\partial^2 F}{\partial x \partial y} (x_0,y_0)
$$
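A quick symbolic check of this identity, again using the illustrative joint CDF of two independent $\mathrm{Exp}(1)$ variables:

```python
# Symbolic check that the mixed partial recovers the density, using the
# illustrative joint CDF F(x, y) = (1 - e^-x)(1 - e^-y).
import sympy as sp

x, y = sp.symbols('x y', positive=True)
F = (1 - sp.exp(-x)) * (1 - sp.exp(-y))

f = sp.diff(F, x, y)   # mixed partial d^2 F / (dx dy)
print(sp.simplify(f))  # equals exp(-x - y), the product of the marginal densities
```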
In any case, the right-hand side of the limit formula depends only on $F$: so you can find $f$ assuming the limit exists at each point (and it does exist almost everywhere, by the monotonicity of one-dimensional CDFs).
Best Answer
The cdf $F_X$ always exists for every random variable, which is not necessarily true of the density. So, yes, technically it is always possible to obtain the density from the cdf, if the density exists.
In general, there are many different ways to characterize a probability distribution, and some of them are more tractable than others depending on the situation. In certain situations, it may be easiest to characterize the density by, e.g., deriving the moment-generating function of the random variable. For example, if you find that the mgf corresponds to the mgf of a normal distribution, then you immediately know that this random variable must have a normal density, without having to explicitly derive the pdf from the mgf.
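As a sketch of the mgf route (using SymPy's `sympy.stats` module), the mgf of $N(\mu,\sigma^2)$ is $e^{\mu t + \sigma^2 t^2/2}$, and matching a derived mgf against this closed form identifies the distribution:

```python
# Sketch: identifying a distribution through its mgf with SymPy.
# The mgf of N(mu, sigma^2) is exp(mu*t + sigma^2*t^2/2); any mgf that
# matches this closed form must belong to a normal distribution.
import sympy as sp
from sympy.stats import Normal, moment_generating_function

t, mu = sp.symbols('t mu', real=True)
sigma = sp.symbols('sigma', positive=True)

X = Normal('X', mu, sigma)
mgf = moment_generating_function(X)(t)
print(sp.simplify(mgf))  # exp(mu*t + sigma**2*t**2/2), possibly in an equivalent form
```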
As an example, if $\left(N_t\right)$ is a Poisson process, then I can derive the fact that the time $T_1$ of the first arrival is exponentially distributed in the following way:
$$P\left(T_1 > t\right) = P\left(N_t = 0\right) = e^{-\lambda t}.$$
To get the final equality I am only using the fact that $N_t\sim \mathrm{Poisson}\left(\lambda t\right)$. But through the first equality I have found that $T_1$ has the same cdf as an exponential distribution with rate $\lambda$. Since the cdf completely characterizes the distribution, I now automatically know the density of $T_1$.
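This identity is easy to check by simulation: since $\{T_1 > t\}$ is exactly the event $\{N_t = 0\}$, we can sample $N_t \sim \mathrm{Poisson}(\lambda t)$ directly. A sketch with hypothetical parameters $\lambda = 2$, $t = 0.7$:

```python
# Simulation sketch with hypothetical parameters lam = 2, t = 0.7:
# {T1 > t} is exactly {N_t = 0}, and N_t ~ Poisson(lam * t).
import math
import random

def poisson_sample(mean, rng):
    # Knuth's method: count uniform factors until their product drops below e^-mean
    L = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(0)
lam, t, trials = 2.0, 0.7, 100_000

hits = sum(poisson_sample(lam * t, rng) == 0 for _ in range(trials))
empirical = hits / trials

print(empirical)           # should be close to...
print(math.exp(-lam * t))  # ...0.2466
```

The empirical frequency of $\{N_t = 0\}$ should agree with $e^{-\lambda t}$ up to Monte Carlo error of order $1/\sqrt{\text{trials}}$.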