You have to look at how a floating point number is stored. In IEEE 754 a 64-bit float has a sign bit $S$, an $11$-bit exponent field $E$, and a $53$-bit mantissa $M$. For normal numbers the first bit of the mantissa is always $1$ and is not stored, so only $52$ bits are stored. The value is $(-1)^S \cdot 2^{E-1023}\cdot M$, where $M$ is read in binary with the radix point after the first (unstored) bit. There are therefore $2^{52}$ values of $M$, so each value of the exponent gives $2^{52}$ floating point numbers.
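As a quick illustration (a sketch, assuming Python on a platform with IEEE 754 binary64 doubles, using the standard `struct` module), the three fields can be peeled out of the bit pattern:

```python
import struct

def decompose(x: float):
    """Split a double into its IEEE 754 sign bit, biased exponent, and stored fraction bits."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF        # 11-bit biased exponent (bias 1023)
    fraction = bits & ((1 << 52) - 1)      # 52 stored mantissa bits (leading 1 implicit)
    return sign, exponent, fraction

# 0.5 = +1.0 * 2^(-1), so the biased exponent is -1 + 1023 = 1022 and the fraction is 0
print(decompose(0.5))   # (0, 1022, 0)
# 1.5 = +1.1(binary) * 2^0: biased exponent 1023, only the top fraction bit set
print(decompose(1.5))   # (0, 1023, 2251799813685248)
```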
To count the numbers between $n_1=0.009999999999999999999999999999$ and $n_2=0.099999999999999999999999999999$ we first count the full steps of the exponent, then handle the partial steps at the ends. Since $2^{-7} \lt n_1 \lt 2^{-6}=0.015625$ and $2^{-4}=0.0625 \lt n_2 \lt 2^{-3}$, the two full binades $[2^{-6},2^{-5})$ and $[2^{-5},2^{-4})$ contribute $2 \cdot 2^{52}=9007199254740992$ numbers. In the binade where the exponent part is $2^{-7}$ the spacing $\epsilon$ is $2^{-7-52}=2^{-59}$, so from $n_1$ up to $2^{-6}$ there are $2^{59}(0.015625-n_1)\approx 3242591731706757$ steps. In the binade where the exponent part is $2^{-4}$ the spacing is $2^{-4-52}=2^{-56}$, so from $0.0625$ up to $n_2$ there are $2^{56}(n_2-0.0625)\approx 2702159776422298$ steps. If my arithmetic is right, the total number of steps is $14951950762870047$, so $14951950762870046$ numbers lie strictly between $n_1$ and $n_2$. I don't care to try to list them all. It is definitely not the same as from $0$ to $0.09$, because that range crosses over $1000$ steps of a factor of two, so it holds over $1000 \cdot 2^{52}$ numbers. In floating point the numbers are very dense near zero. You are adding a span near zero, so you add many numbers and delete far fewer, since the deleted span is "far from zero".
If you have the right tool available, you could just get the binary representation of $n_1$ and $n_2$ and subtract them. As the floating point representations are ordered the same way the corresponding values are, that will give you the count. Remember to subtract one because you shouldn't count either end.
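That subtraction trick can be sketched in Python (assuming the platform's doubles are IEEE 754 binary64, which `struct` lets us reinterpret as 64-bit integers; for positive doubles the bit patterns sort the same way the values do, though negative numbers would need extra care because of the sign-magnitude layout):

```python
import struct

def float_to_ordinal(x: float) -> int:
    """Reinterpret the bits of a positive double as a 64-bit integer."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

# These short literals round to the same doubles as the long decimals above,
# since the difference (1e-30) is far below one ulp at this magnitude.
n1, n2 = 0.01, 0.1
count = float_to_ordinal(n2) - float_to_ordinal(n1) - 1  # subtract 1: exclude the ends
print(count)  # 14951950762870046
```

As a sanity check, consecutive powers of two differ by exactly one full binade of $2^{52}$ representations: `float_to_ordinal(2.0) - float_to_ordinal(1.0)` is `2**52`.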
Modern Computer Arithmetic suggests using an arithmetic-geometric mean algorithm. I'm not sure whether this approach is meant for the low precision one typically works in or whether it's meant for calculation at very high precision.
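For what it's worth, here is a rough double-precision sketch of the classical large-argument AGM formula $\ln s \approx \pi/(2\,\mathrm{AGM}(1, 4/s))$: scale $x$ up by $2^m$ so that $s = x\cdot 2^m$ is large enough for the approximation, then subtract $m\ln 2$ at the end (the function name and the choices of $m$ and iteration count are mine, not the book's):

```python
import math

def ln_agm(x: float, m: int = 40) -> float:
    """Sketch: ln(s) ~ pi / (2 * AGM(1, 4/s)) for large s, with s = x * 2^m."""
    s = x * 2.0 ** m
    a, b = 1.0, 4.0 / s
    for _ in range(30):                 # the AGM converges quadratically
        a, b = (a + b) / 2, math.sqrt(a * b)
    return math.pi / (2 * a) - m * math.log(2)

print(ln_agm(5.0))   # close to math.log(5) = 1.6094...
```

In a real high-precision setting one would carry the AGM iteration in big-float arithmetic; in plain doubles this is mostly a curiosity, since the final subtraction costs a few bits.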
Another approach is to observe that the Taylor series for $\ln(x)$ is efficient if $x$ is very close to $1$. We can use algebraic identities to reduce the general case to this special case.
One method is to use the identity
$$ \ln(x) = 2 \ln(\sqrt{x})$$
to reduce the calculation of $\ln(x)$ to that of an argument closer to 1. We could use a similar identity for more general radicals if we can compute those efficiently.
By iteratively taking square roots until we get an argument very close to $1$ (so that $k$ square roots give $m = 2^k$), we can reduce to
$$ \ln(x) = m \ln(\sqrt[m]{x})$$
and the remaining logarithm can be computed by the Taylor series.
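A sketch of this root-then-series plan (the iteration counts here are arbitrary choices, and `math.sqrt` stands in for whatever root routine is available):

```python
import math

def ln_via_roots(x: float, k: int = 20, terms: int = 10) -> float:
    """Take k square roots so the argument approaches 1, apply the Taylor
    series ln(1+t) = t - t^2/2 + t^3/3 - ..., then scale back up by 2^k."""
    assert x > 0
    r = x
    for _ in range(k):               # r = x^(1/2^k), very close to 1
        r = math.sqrt(r)
    t = r - 1                        # |t| is tiny, so the series converges fast
    total, power = 0.0, t
    for n in range(1, terms + 1):
        total += (-1) ** (n + 1) * power / n
        power *= t
    return (2 ** k) * total

print(ln_via_roots(5.0))   # close to math.log(5) = 1.6094...
```

Note the trade-off: each extra square root shrinks $t$ (so fewer series terms are needed) but also amplifies rounding error by the final factor of $2^k$.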
If you store numbers in mantissa-exponent form in base 10, an easy identity to exploit is
$$ \ln(m \cdot 10^e) = e \ln(10) + \ln(m)$$
so the plan is to precompute the value of $\ln(10)$, and then use another method to obtain $\ln(m)$, where $m$ is not large or small.
A similar identity holds in base 2, which a computer is likely to use.
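In Python the base-2 split is essentially what `math.frexp` provides; a sketch (using the library's own `math.log` as a stand-in for the "other method" applied to the medium-sized part):

```python
import math

def ln_with_exponent_split(x: float) -> float:
    """Split x = m * 2^e with m in [0.5, 1), so ln(x) = e*ln(2) + ln(m);
    the remaining ln(m) only ever sees a 'medium' argument."""
    m, e = math.frexp(x)              # x == m * 2**e, 0.5 <= m < 1
    LN2 = 0.6931471805599453          # precomputed ln(2)
    return e * LN2 + math.log(m)      # stand-in for the method applied to ln(m)

print(ln_with_exponent_split(1e300))   # close to math.log(1e300) = 690.77...
```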
A way to use lookup tables to accelerate the calculation of $\ln(x)$ when $x$ is not large or small is to observe that
$$ \ln(x) = \ln(k) + \ln(x/k) $$
The idea here is that you store a table of $\ln(k)$ for enough values of $k$ so that you can choose the $k$ nearest $x$ to make $x/k$ very near $1$, and then all that's left is to compute $\ln(x/k)$.
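A toy version of the table method (the grid spacing, the range $[0.5, 2]$, and the number of series terms are arbitrary choices made for illustration):

```python
import math

# Precompute ln(k) on a grid of k values spaced 1/16 apart over [0.5, 2].
TABLE = {k / 16: math.log(k / 16) for k in range(8, 33)}

def ln_table(x: float) -> float:
    assert 0.5 <= x <= 2.0, "reduce x to this range first (e.g. via the exponent split)"
    k = min(TABLE, key=lambda c: abs(c - x))   # nearest tabulated point
    t = x / k - 1                              # x/k is very near 1, so |t| is small
    # ln(1+t) via a few Taylor terms
    series = t - t**2 / 2 + t**3 / 3 - t**4 / 4 + t**5 / 5
    return TABLE[k] + series

print(ln_table(1.37))   # close to math.log(1.37)
```

A denser table makes $t$ smaller and lets you get away with fewer series terms; that memory-versus-arithmetic trade is the whole point of the method.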
Best Answer
Calculators are computers, too; they're just smaller. Surely if we knew how to represent arbitrary real numbers inside calculators, we could do the same thing with desktop computers.
That said, it's possible—both on a calculator and on a computer—to represent some real numbers exactly. No computer I know of would represent $\frac12$ inexactly, since its binary expansion (0.1) is short enough to put inside a floating point register. More interestingly, you can also represent numbers like $\pi$ exactly, simply by storing them in symbolic form. In a nutshell, instead of trying to represent $\pi$ as a decimal (or binary) expansion, you just write down the symbol "$\pi$" (or, rather, whatever symbol the computer program uses for $\pi$).
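A toy illustration of both points, using exact rationals from the standard library and a made-up tuple encoding for symbolic constants (the `("mul", ...)` scheme is invented here purely for illustration; real systems like computer algebra packages do this far more thoroughly):

```python
from fractions import Fraction
import math

# Exact rational arithmetic: no rounding, unlike binary floating point.
half = Fraction(1, 2)
print(half + Fraction(1, 3))   # prints 5/6, exactly

# A toy "symbolic" representation: keep pi as a symbol and substitute
# a numeric value only at the very end.
expr = ("mul", 2, "pi")        # represents 2*pi without evaluating it

def evaluate(e):
    if e == "pi":
        return math.pi
    if isinstance(e, tuple) and e[0] == "mul":
        return evaluate(e[1]) * evaluate(e[2])
    return e                   # plain numbers evaluate to themselves

print(evaluate(expr))          # 6.283185307179586
```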