I wouldn't rule out the lognormal as an approximation for discrete positive variables. Last time I checked the populations of the countries of the world fit a lognormal distribution quite well and population is naturally discrete.
But judging from a glance at the example paper you cite, those researchers are not fitting to discrete variables at all. The applications are to inundation depth, current velocity and hydrodynamic force, which are all continuous, so that looks uncontroversial in principle. How well the lognormal fits in practice is a different question. In the broad field you mention, environmental hazards, I have a paper under review that shows that the lognormal fits some continuous data far better than the power law distribution suggested by some previous workers. I suspect that is common.
How best to fit the lognormal is a different and key question. Fitting by binning and least-squares is a fairly lousy method and there are better methods. For example, the easiest way is just to take logarithms of the raw data and calculate the mean and standard deviation of a normal and then exponentiate. There are many other ways to do it, but binning just discards detail in the data and raises small and indeed large questions about how far the fit depends on arbitrary decisions made about bin width (and sometimes bin origin).
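Here is a minimal sketch of that log-transform fit, assuming the observations are positive values held in a NumPy array (the array name and the simulated stand-in data are just illustrative, not taken from the paper in question):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.lognormal(mean=1.0, sigma=0.5, size=500)  # stand-in for real positive observations

logs = np.log(data)
mu_hat = logs.mean()           # mean of the logged data = lognormal location parameter
sigma_hat = logs.std(ddof=1)   # sd of the logged data = lognormal shape parameter

# Exponentiate as needed: exp(mu_hat) is the fitted median (and geometric mean);
# fitted quantiles come from exponentiating normal quantiles, e.g. the 90th percentile:
q90 = np.exp(mu_hat + sigma_hat * norm.ppf(0.9))
print(mu_hat, sigma_hat, q90)
```

No binning decisions enter anywhere, so nothing hangs on choices of bin width or bin origin.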
I've seen elsewhere the practice of ignoring points with associated cumulative probability 0 and 1, and (as you imply) omitting data just because a quantile function cannot be evaluated there is utterly indefensible. The use of plotting positions such as (rank - 0.5) / sample size has been the standard way to avoid such difficulties for more than a century. (Trimming extreme values to impart some resistance would be a quite different and possibly defensible practice.)
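A small sketch of how such plotting positions work, with simulated data standing in for anything real: (rank - 0.5) / n assigns every observation a cumulative probability strictly between 0 and 1, so the quantile function is finite everywhere and no point has to be thrown away.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = np.sort(rng.lognormal(1.0, 0.5, size=50))  # illustrative positive data, sorted

n = len(x)
ranks = np.arange(1, n + 1)
p = (ranks - 0.5) / n   # plotting positions: never exactly 0 or 1
z = norm.ppf(p)         # normal quantiles, finite for every observation

# Plotting np.log(x) against z is a normal quantile plot of the logged data,
# i.e. a lognormal probability plot; an approximately straight line indicates a good fit.
```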
I think you are confusing variables that are discrete in principle with the binning of continuous variables as a matter of supposed convenience or necessity. Binning continues to have some uses (e.g. histograms remain popular), but using and plotting all the data as they arrive was always better in principle and is now almost always straightforward in computational practice.
Theoretically, a normal distribution has a nonzero probability of producing negative numbers, so strictly speaking that rules it out. A normal is also a fully continuous distribution, whereas fumble rates would be discrete or rational.
It could still be very close, and good enough: for example, the sum of many binomials (had a fumble or didn't, with x% chance, summed across 100 games) approaches what looks like a normal bell curve.
People go to the Poisson because it is a discrete counting distribution, with integer results built from independent events; that is to say, if each play had a consistent fumble probability, then over 100 plays the final fumble count would be (approximately) Poisson distributed.
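A rough simulation under assumed numbers (a 2% fumble chance per play and 100 plays per game; neither figure comes from real data) shows the point: the per-game fumble count is binomial, very close to a Poisson with the same mean, and already roughly bell-shaped.

```python
import numpy as np
from scipy.stats import binom, poisson, norm

p, n_plays = 0.02, 100   # assumed per-play fumble probability and plays per game
mean = n_plays * p

print("k  binomial  poisson  normal-approx")
for k in range(0, 9):
    b = binom.pmf(k, n_plays, p)                                    # exact count model
    q = poisson.pmf(k, mean)                                        # Poisson approximation
    g = norm.pdf(k, loc=mean, scale=np.sqrt(n_plays * p * (1 - p))) # normal approximation
    print(f"{k:2d}  {b:.4f}   {q:.4f}   {g:.4f}")
```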
If there's any correlation among the plays, then it won't follow any clean theoretical distribution. If, for example, having a lot of fumbles reduces your total number of plays in that game, then it's a self-correlated score and things get messy. I do believe that if your first dozen plays all had a fumble (not likely, but possible), then you might not get any more. It's definitely not an independent sum of equal probabilities.
If the coach is allowed to remove a player who has had several fumbles, then the rate would decrease from that point on, which is another non-independence in the score.
The real observed distribution sure could look a lot like a normal in any event. Do you have any data we could play with?
EDIT: We see some data at this link:
http://www.sharpfootballanalysis.com/blog/2015/the-new-england-patriots-mysteriously-became-fumble-proof-in-2007 Thanks Affine for finding that.
And in that article the claim is made more explicitly: "Based on the assumption that plays per fumble follow a normal distribution, you’d expect to see, according to random fluctuation, the results that the Patriots have gotten since 2007 once in 5842 instances."
Which is a malformed hypothesis: you'd never care about the probability of one exact result; the question of interest is how likely any result this extreme OR HIGHER is, combined. A point result always has an extremely small probability, but if the distribution has a fat tail, then more extreme results happen more readily than a normal suggests, and the outlier event is really not so extreme. As this is an inverse quantity, Touches per Fumble, consider both variables as random Poisson counts: you get so many touches per game and you see so many fumbles per game. The ratio will have a long tail, because it's possible to have many, many touches with few fumbles.
The outlier is to be expected; even in the previous decade's results there was an outlier at 56 TpF, which didn't get any comment from the blog author.
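To make the "inverse distribution" point concrete, here is a quick simulation with made-up rates (not the Patriots' actual numbers): seasons of touches and fumbles are drawn as independent Poisson counts, and touches per fumble is their ratio. Comparing the simulated frequency of seasons beyond three standard deviations with the tail probability a fitted normal would assign shows how much heavier the ratio's right tail can be.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n_seasons = 100_000
touches = rng.poisson(950, n_seasons)   # assumed touches per season
fumbles = rng.poisson(22, n_seasons)    # assumed fumbles per season
fumbles = np.maximum(fumbles, 1)        # guard against division by zero
tpf = touches / fumbles                 # touches per fumble

mu, sd = tpf.mean(), tpf.std(ddof=1)
threshold = mu + 3 * sd
print("simulated P(TpF >= mu + 3 sd):", np.mean(tpf >= threshold))
print("normal-model P(TpF >= mu + 3 sd):", norm.sf(threshold, loc=mu, scale=sd))
```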
Best Answer
Wikipedia is right: the Gaussian distribution is the same thing as the normal distribution. Wikipedia can usually be trusted on this sort of question.