Not precisely.
About histograms, KDEs and ECDFs.
(1) Roughly speaking, a histogram (on a density scale, so that the areas of the bars sum to unity) can be viewed as an estimate of the density function. A KDE is a more sophisticated method of density estimation. Generally speaking, one cannot reconstruct the exact values of the data from either a histogram or a KDE.
(2) By contrast an empirical CDF (ECDF) retains exact information about all of the data. An ECDF is made as
follows: (a) sort the data from smallest to largest, (b) make a stair-step function that begins at 0 below the
minimum and increases by $1/n$ at each data value, where $n$ is the sample size. If $k$ values are tied then the increase is $k/n$ at the tied value.
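As a small sketch of that construction (the object names are mine), here is R code that builds the stair-step function by hand and checks it against R's built-in `ecdf`:

```r
set.seed(1)
x <- rgamma(10, 5, 1/6)   # small sample, so the steps are visible
n <- length(x)
xs <- sort(x)             # step (a): sort the data
# step (b): the ECDF jumps by 1/n at each sorted value,
# so its height just after the k-th sorted value is k/n
heights <- (1:n) / n
plot(xs, heights, type = "s", ylab = "ECDF")
# agrees with R's built-in ECDF at every data value
all(abs(ecdf(x)(xs) - heights) < 1e-12)
```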
Thus the ECDF approximates the CDF of the distribution,
with increasingly accurate approximations for samples of increasing size. Generally speaking an ECDF gives a better approximation to the population CDF than a histogram gives for the density function. (Information
is lost in binning data to make a histogram.)
[By suitable manipulation (a kind of numerical integration), information in a KDE could be used to make a function that imitates
the population CDF, but it does not use the actual data values. In my experience, this is rarely done.]
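As a sketch of that manipulation (my own code, using simple Riemann summation over the equally spaced grid that R's `density` returns):

```r
set.seed(930)
x <- rgamma(100, 5, 1/6)
d <- density(x)                        # KDE on a grid: d$x (points), d$y (heights)
# numerically integrate the KDE to imitate a CDF
cdf.hat <- cumsum(d$y) * diff(d$x)[1]  # grid spacing is constant
plot(d$x, cdf.hat, type = "l", ylab = "approximate CDF")
curve(pgamma(x, 5, 1/6), add = TRUE, col = "red")  # population CDF
```

Note that the resulting curve uses only the KDE grid, not the actual data values, and its total mass is only approximately 1.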
Graphical illustrations.
(1) A sample of size $n = 100$ from $$\mathsf{Gamma}(\text{shape} = \alpha = 5,\,\text{rate} = \lambda = 1/6)$$ is simulated. The figure shows a density histogram (blue bars), the default KDE from R statistical software (red curve), and the population density function (black).
set.seed(930)
x = rgamma(100, 5, 1/6)
summary(x)
hist(x, prob=TRUE, ylim=c(0,.035),
     col="skyblue2", main="n = 100")
rug(x)                                            # tick marks below x-axis
lines(density(x), lwd=2, lty="dotted", col="red") # default KDE
curve(dgamma(x, 5, 1/6), add=TRUE)                # population density
(2) Sampling from the same distribution, we show the ECDF for a sample of size $n = 20,$ so that the
steps are easy to see.
set.seed(2019)
x = rgamma(20, 5, 1/6)
plot(ecdf(x), main="n = 20", col="blue")
rug(x)
curve(pgamma(x, 5, 1/6), add=TRUE, lwd=2)  # population CDF
Even if $f$ is continuous, the function $F$ can be continuous in each variable without being totally differentiable, so you cannot simply differentiate $F$ to recover $f$.
But you can do this: think of each region $A$ of the plane as having a "measure" with respect to $F$. Then $F(x,y)$ is the "measure" of $(-\infty,x] \times (-\infty,y]$ with respect to $F$.
Now, what does the density $f(x_0,y_0)$ at a point $(x_0,y_0)$ hint at, or mean? It means that if I take a very small region $V$ containing $(x_0,y_0)$, the "measure" of $V$ with respect to $F$ should be approximately $f(x_0,y_0)$ times the area of $V$ as a geometrical region of the plane.
In particular, if I take small rectangles around $(x_0,y_0)$, then $f(x_0,y_0)$ times the area of these rectangles should be approximately the "measure" of these rectangles with respect to $F$.
Suppose I have a rectangle $[x_0-\epsilon,x_0 + \epsilon] \times [y_0- \epsilon,y_0 + \epsilon]$ around $(x_0,y_0)$. Its area is clearly $4 \epsilon^2$ (the usual formula: product of the side lengths).
What is its "measure" under $F$? For this, draw a diagram and convince yourself that the "measure" of $[x_0-\epsilon,x_0 + \epsilon] \times [y_0- \epsilon,y_0 + \epsilon]$ is equal to:
$$
F(x_0+\epsilon, y_0+\epsilon) - F(x_0+\epsilon,y_0-\epsilon) - F(x_0-\epsilon,y_0+\epsilon) + F(x_0-\epsilon,y_0-\epsilon)
$$
To see this, interpret each term in the sum above as the $F$-measure of a region. Adding and subtracting the overlapping regions according to their signs, you will see that only the rectangle around $(x_0,y_0)$ remains.
Therefore, the result is, or at least should be:
$$
f(x_0,y_0) = \lim_{\epsilon \to 0}\frac{F(x_0+\epsilon, y_0+\epsilon) - F(x_0+\epsilon,y_0-\epsilon) - F(x_0-\epsilon,y_0+\epsilon) + F(x_0-\epsilon,y_0-\epsilon)}{4 \epsilon^2}
$$
Use everything you know about derivatives (the fundamental theorem of calculus, etc.) to see this. Note that we do not require total differentiability of $F$, only something weaker: the existence of the relevant partial derivatives.
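As a quick numerical illustration of the limit (my own check, using independent standard normals, where $F(x,y)=\Phi(x)\Phi(y)$ and $f(x,y)=\phi(x)\phi(y)$):

```r
F <- function(x, y) pnorm(x) * pnorm(y)   # joint CDF of independent N(0,1)'s
x0 <- 0.5; y0 <- -0.3; eps <- 1e-4
# the four-term difference quotient from the formula above
approx.f <- (F(x0+eps, y0+eps) - F(x0+eps, y0-eps) -
             F(x0-eps, y0+eps) + F(x0-eps, y0-eps)) / (4*eps^2)
true.f <- dnorm(x0) * dnorm(y0)           # f(x0,y0) = phi(x0)*phi(y0)
c(approx.f, true.f)                       # nearly equal
```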
However, if the mixed partial derivative $\frac{\partial^2 F}{\partial x \partial y}$ exists and is continuous, then you can show that $$
f(x_0,y_0) = \frac{\partial^2 F}{\partial x \partial y} (x_0,y_0)
$$
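For a concrete check (my example, not from the original answer): take independent standard exponentials, so $F(x,y) = (1-e^{-x})(1-e^{-y})$ for $x,y > 0$. Then
$$
\frac{\partial^2 F}{\partial x \partial y}(x,y) = \frac{\partial}{\partial y}\left[ e^{-x}\left(1-e^{-y}\right) \right] = e^{-x} e^{-y},
$$
which is indeed the joint density of two independent $\mathsf{Exp}(1)$ variables.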
In any case, the RHS depends only on $F$: so you can find $f$ from $F$, assuming the RHS limit exists at each point (and it does exist almost everywhere, because a one-dimensional CDF is monotone).
Every probability distribution on $\mathbb R$ is associated to a cumulative distribution function (and every non-decreasing function $F$ with $\inf_x F(x)=0$ and $\sup_x F(x)=1$ is associated to a distribution!). The quoted text is probably using "with" to refer to the whole phrase "a cumulative distribution function that is absolutely continuous."
For the fact that every probability distribution does define a cumulative distribution function, just note that for a probability measure $\mu$ on $\mathbb R$, defining the CDF as $$F(x)=\mu((-\infty,x))$$ gives a perfectly good CDF. There is a little ambiguity about what to do at point masses, where $(-\infty,x)$ and $(-\infty,x]$ have different measure, but this depends on what you want from a CDF and is a very manageable caveat.
Conversely, given a non-decreasing function on $\mathbb R$, you can define a measure with the property that $$\mu((a,b))=\left(\lim_{x\rightarrow b^-}F(x)\right) - \left(\lim_{x\rightarrow a^+}F(x)\right).$$ I won't give all the technical details of this construction, but this measure is called the Lebesgue-Stieltjes measure associated to $F$.
There's a slight issue in this association having to do with point-masses, but you can fix that by imposing conditions like right-continuity on $F$ which basically decide whether $F(x)$ includes a point mass at $x$ or not.
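To illustrate the point-mass issue (my own example): let $F$ be the CDF of a 50/50 mixture of a point mass at $0$ and an $\mathsf{Exp}(1)$ distribution. Right-continuity makes $F(0)$ include the atom, and the jump height recovers its mass:

```r
# CDF of: point mass at 0 with prob 1/2, Exp(1) with prob 1/2
F <- function(x) ifelse(x < 0, 0, 0.5 + 0.5 * pexp(x))
# right-continuous at 0: F(0) includes the point mass
F(0)              # 0.5
# mass of the atom at 0 = jump height F(0) - F(0-)
F(0) - F(-1e-12)  # 0.5
```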