I've been estimating continuous positive outcome Poisson regressions with the Huber/White/Sandwich linearized estimator of variance fairly frequently. However, that's not a particularly good reason to do anything, so here are some actual references.
From the theory side, $y$ does not need to be an integer for the estimator based on the Poisson likelihood function to be consistent, as shown in Gourieroux, Monfort and Trognon (1984). This estimator is called the Poisson PMLE or QMLE, for Pseudo-/Quasi-Maximum Likelihood.
There's also some encouraging simulation evidence from Santos Silva and Tenreyro (2006), where the Poisson estimator comes out best in show; it also does well in a simulation with lots of zeros in the outcome. You can easily run your own simulation to convince yourself that this works in your snowflake case.
Finally, you can also use a GLM with a log link function and Poisson family. This yields identical results and placates the count-data-only knee jerk reactions.
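Here, for instance, is a minimal sketch of such a simulation in pure Python (every number in it, the coefficients, the noise scale, the sample size, is an invented illustration): generate a strictly positive, continuous outcome with a log-linear conditional mean, maximize the Poisson log-likelihood by Newton-Raphson, and check that the true coefficients come back even though $y$ is never an integer.

```python
import math
import random

random.seed(1)

# Toy data: continuous, strictly positive outcome with log-linear mean
# E[y|x] = exp(b0 + b1*x), and multiplicative lognormal noise with mean 1.
# The coefficients (0.5, 1.0) and noise scale are arbitrary choices.
n, b0, b1, noise_sd = 5000, 0.5, 1.0, 0.5
x = [random.uniform(0.0, 1.0) for _ in range(n)]
y = [
    math.exp(b0 + b1 * xi) * math.exp(random.gauss(-noise_sd**2 / 2, noise_sd))
    for xi in x
]

# Poisson (quasi-)MLE: maximize the Poisson log-likelihood by Newton-Raphson.
# Nothing below requires y to be an integer.
beta = [0.0, 0.0]
for _ in range(30):
    mu = [math.exp(beta[0] + beta[1] * xi) for xi in x]
    g0 = sum(yi - mi for yi, mi in zip(y, mu))                # score, intercept
    g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))  # score, slope
    h00 = sum(mu)                                             # negative Hessian
    h01 = sum(mi * xi for mi, xi in zip(mu, x))
    h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
    det = h00 * h11 - h01 * h01
    beta[0] += (h11 * g0 - h01 * g1) / det
    beta[1] += (h00 * g1 - h01 * g0) / det

print(beta)  # both estimates should land near the true (0.5, 1.0)
```

In practice you'd let `statsmodels` or Stata's `poisson, robust` do this for you, with the sandwich standard errors on top; the point of the hand-rolled version is only that the score equations don't care whether $y$ is a count.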
References Without Ungated Links:
Gourieroux, C., A. Monfort and A. Trognon (1984). “Pseudo Maximum Likelihood Methods: Applications to Poisson Models,” Econometrica, 52, 701-720.
As an opening comment, which you should not use to try to influence your management (especially given that you are an intern): Type I service level is almost always a terrible objective. What it measures is the probability that you suffer no stockouts over the leadtime. From a business perspective, what you'd far more often like to measure is the expected number of stockouts over the order cycle divided by the expected demand over the order cycle, i.e., the fraction of demand that goes unfilled. In some situations, e.g. stocking a helicopter for rescue missions, where a stockout could mean permanent disability or death, Type I service level is appropriate, but not in the usual business case. I won't go into more detail, since it's off-topic, but I include it here for your future information.
The Type I service level given an order point of $s$ is nothing more than the probability that demand over the leadtime does not exceed $s$. As such, if you have a probability distribution of leadtime demand $F(x)$, and a target service level $\alpha$, the corresponding order point $s_{\alpha}$ is:
$$s_{\alpha} = F^{-1}_{LTD}(\alpha)$$
where $LTD$ refers to "leadtime demand". In the case of the Normal distribution, it so happens that this is the same as:
$$s_{\alpha} = \mu_{LTD} + \sigma_{LTD} \, z_{\alpha}$$
where $\mu_{LTD}$ is the mean leadtime demand and $\sigma_{LTD}$ is its standard deviation. Where does $z_{\alpha}$ come from? It's the inverse of the standard Normal cumulative distribution function at $\alpha$:
$$z_{\alpha} = F^{-1}(\alpha | \mu=0, \sigma=1)$$
and
$$F^{-1}(\alpha | \mu_{LTD}, \sigma_{LTD}) = \mu_{LTD} + \sigma_{LTD} F^{-1}(\alpha|\mu=0,\sigma=1) $$
So using $z_{\alpha}$ with the Normal distribution is the same as calculating the inverse of the cumulative distribution of leadtime demand, where leadtime demand is Normally distributed.
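Since the Normal inverse CDF is available in Python's standard library as `statistics.NormalDist.inv_cdf`, the equivalence is easy to verify (the $\mu_{LTD}$ and $\sigma_{LTD}$ values here are just illustrative):

```python
from statistics import NormalDist

mu_ltd, sigma_ltd, alpha = 1.4, 1.59, 0.95  # illustrative parameters

# Order point as the inverse CDF of (Normal) leadtime demand...
s_direct = NormalDist(mu=mu_ltd, sigma=sigma_ltd).inv_cdf(alpha)

# ...and via the safety-factor form mu + sigma * z_alpha.
z_alpha = NormalDist().inv_cdf(alpha)  # standard-Normal quantile, ~1.645
s_safety = mu_ltd + sigma_ltd * z_alpha

print(s_direct, s_safety)  # identical up to floating-point error
```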
When you aren't working with the Normal distribution, you won't typically have any equivalent formula to make life appear simpler. This is true of both the Poisson and Negative Binomial distributions. In the case of these two distributions, it's most straightforward just to calculate order point from the initial equation, using the appropriate parameters for the leadtime demand distribution in question.
ETA:
For example, assume your daily demand has mean $0.2$ units and standard deviation $0.6$ units, and you have a leadtime of one week. Then $\mu_{LTD} = 7 \times 0.2 = 1.4$ and $\sigma^2_{LTD} = 7 \times 0.6^2 = 2.52$, so $\sigma_{LTD} \approx 1.59$. Matching these two moments, the parameters of the Negative Binomial distribution are $r = \mu^2_{LTD}/(\sigma^2_{LTD} - \mu_{LTD}) = 1.75$ and $p = \mu_{LTD}/\sigma^2_{LTD} \approx 0.556$.
If you want to stock to a Type I service level of $95\%$, you'd solve for:
$$s_{\alpha} = F^{-1}_{LTD}(\alpha) = F^{-1}(\alpha=0.95;r=1.75,p=0.556) = 5$$
and your order point would be $5$.
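A short sketch of that calculation in pure Python: the pmf recurrence below assumes the parameterization with pmf $\binom{k+r-1}{k} p^r (1-p)^k$ and mean $r(1-p)/p$, matching the moment-matched parameters above.

```python
def negbin_order_point(alpha, r, p):
    """Smallest integer s with P(LTD <= s) >= alpha for a Negative Binomial
    with pmf(k) = C(k+r-1, k) * p**r * (1-p)**k, i.e. mean r*(1-p)/p."""
    pmf = p ** r  # pmf at k = 0
    cdf = pmf
    k = 0
    while cdf < alpha:
        k += 1
        pmf *= (k + r - 1) / k * (1 - p)  # ratio pmf(k) / pmf(k-1)
        cdf += pmf
    return k

# Moment-matched parameters from the example: mean 1.4, variance 2.52
mu, var = 1.4, 2.52
p = mu / var               # ~0.556
r = mu * mu / (var - mu)   # 1.75

print(negbin_order_point(0.95, r, p))  # -> 5
```

The same search works for the Poisson case by swapping in the Poisson pmf recurrence.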
Theoretically, a normal distribution puts nonzero probability on negative numbers, so that's right out. A normal is also fully continuous, whereas fumble counts or rates would be discrete or rational.
It could be very close, and good enough: for example, the sum of many Bernoulli trials (had a fumble or didn't, each with some chance $x\%$, summed across 100 games) approaches what looks like a normal bell curve.
People go to the Poisson because it is a discrete counting variable, with integer results built from independent events; that is to say, if each play had a consistent fumble probability, then over 100 plays the final fumble count would be (approximately) Poisson distributed.
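To put a number on that approximation (the 100 plays and the $2\%$ per-play fumble chance are made up for illustration): with a small, constant probability per play, the Binomial fumble count is nearly indistinguishable from a Poisson with the same mean.

```python
import math

n, p = 100, 0.02   # hypothetical: 100 plays, 2% fumble chance each
lam = n * p        # matching Poisson mean

def binom_pmf(k):
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k):
    return math.exp(-lam) * lam**k / math.factorial(k)

max_diff = max(abs(binom_pmf(k) - poisson_pmf(k)) for k in range(n + 1))
print(max_diff)  # the two pmfs agree to within a few thousandths
```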
If there's any correlation among the plays, then it won't follow any clean theoretical distribution. If, for example, having a lot of fumbles reduces your total number of plays in that game, then the count is self-correlated and things get messy. I do believe that if your first dozen plays all had a fumble (not likely, but possible), then you might not get any more. It's definitely not an independent sum of equal probabilities.
If the coach is allowed to remove a player who has had several fumbles, then the rate would decrease from that point on, which is another way the count fails to be independent.
The real observed distribution sure could look a lot like a normal in any event. Do you have any data we could play with?
EDIT: We see some data at this link: http://www.sharpfootballanalysis.com/blog/2015/the-new-england-patriots-mysteriously-became-fumble-proof-in-2007 Thanks Affine for finding that.
And in that article the claim is made more explicitly: "Based on the assumption that plays per fumble follow a normal distribution, you’d expect to see, according to random fluctuation, the results that the Patriots have gotten since 2007 once in 5842 instances."
Which is a malformed hypothesis: you'd never care about the probability of one exact answer; the question of interest is how likely any result this extreme OR HIGHER is, combined. A point result has an extremely small probability, but if there's a fat tail to the distribution, then more extreme results can happen, and the outlier event is really not so extreme. As this is an inverse ratio, touches per fumble, consider both variables as random Poisson counts: you get so many touches per game and you see so many fumbles per game. The ratio will have a long right tail, because it's possible to have many, many touches with few fumbles. The outlier is to be expected; even looking at the previous decade's results, there was an outlier at 56 TpF which didn't get any comment from the blog author.
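As a rough illustration of that long tail, here's a toy simulation (all the rates are invented, and touches per season are drawn from a normal approximation to a Poisson): each touch is an independent fumble chance, and the resulting touches-per-fumble ratio comes out strongly right-skewed, so its mean sits well above its median.

```python
import random
import statistics

random.seed(42)

def season(mean_touches=500.0, p_fumble=0.01):
    # Touches drawn from a normal approximation to Poisson(mean_touches);
    # each touch then carries an independent fumble probability.
    touches = max(1, round(random.gauss(mean_touches, mean_touches ** 0.5)))
    fumbles = sum(1 for _ in range(touches) if random.random() < p_fumble)
    return touches, fumbles

ratios = []
for _ in range(5000):
    t, f = season()
    if f > 0:  # touches-per-fumble is undefined in a fumble-free season
        ratios.append(t / f)

# A long right tail drags the mean well above the median.
print(statistics.mean(ratios), statistics.median(ratios))
```

Note also the selection effect baked in: seasons with zero fumbles have no defined ratio at all, which is exactly the kind of thing that makes "assume it's normal" dangerous here.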