You can sort of do it, but the discreteness of the negative binomial adds an extra complication: we can't get the exact quantile we ask for, so we'd also need to know how the quoted quantiles were calculated. In the case of sample quantiles, we'd need the exact definition of the sample quantile that was used.

Let's start by assuming that the exact distribution (and hence the quantiles) was known, rather than estimated, by whoever first calculated them.

With the beta, if I ask for the 97.5th percentile and the 2.5th percentile of a distribution (e.g. in R using `qbeta`), then that interval actually contains 95% of the probability.
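For example (the beta shape parameters here are arbitrary, just for illustration):

```
# For a continuous distribution, quantile coverage is exact:
lo <- qbeta(0.025, shape1 = 2, shape2 = 5)
hi <- qbeta(0.975, shape1 = 2, shape2 = 5)
pbeta(hi, 2, 5) - pbeta(lo, 2, 5)   # exactly 0.95
```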

With discrete distributions, that is not the case. Often, but not always, people will quote conservative intervals that contain at least $(1-\alpha)$ of the probability (at least 95% for a 95% interval) -- but even then there remains a question of whether the bounds are individually conservative or only conservative together. Usually it's the first, but not always.

So for example, consider $n=26, p=\frac{26}{76}\approx 0.3421053$. This has mean 50.

The 0.025 quantile is given by R as 29:

```
> qnbinom(.025,26,26/76)
[1] 29
```

The 0.975 quantile is given by R as 76:

```
> qnbinom(.975,26,26/76)
[1] 76
```

So if we have a negative binomial random variable $X$ with $n=26$ and $p=26/76$, it must have $P(X\leq 29) = 0.025$, right? Well, no:

```
> pnbinom(29,26,26/76)
[1] 0.03072454
> pnbinom(76,26,26/76)
[1] 0.9772272
```

The interval [29,76] (note that we now include 29, so this won't be 0.9772-0.0307) contains not 95% but a bit more:

```
> pnbinom(76,26,26/76)-pnbinom(28,26,26/76)
[1] 0.9534341
```

Now the interval [29,75] contains 94.97%; many people would quite happily quote that as a 95% interval.

So if someone tells you that "31" is a 2.5 percentile, you can't simply take that number at face value -- it doesn't necessarily have exactly 2.5% of the distribution at or below it. It might have more than 2.5%, and sometimes a lot more. Or it might have a bit less, if the bounds aren't conservative.

The other main issue is that the quoted interval may not be an actual population interval but a sample interval, in which case (unless the samples were very large) we need to know the specific definition of sample quantile used (R, for example, offers *nine* definitions), as well as the sampling behavior of those quantiles and of the mean in negative binomial samples.
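To see how much the definition can matter, here's a quick illustration (the simulated sample and the seed are arbitrary):

```
set.seed(1)  # arbitrary seed, for reproducibility only
x <- rnbinom(1000, size = 26, prob = 26/76)
# R's nine sample-quantile definitions can give different answers
# at the same probability level:
sapply(1:9, function(type) quantile(x, probs = 0.025, type = type))
```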

For cases where the mean is relatively large and the standard deviation relatively small, you can get it roughly via a normal approximation. If the interval is symmetric about the mean, that's a good sign that this might work quite well. More accurately, there are several gamma approximations to the negative binomial cdf that can be used to back out estimates.

This gives a starting value, but a range of $(n,p)$ values would then need to be considered by referring back to the negative binomial itself.

So consider the given example, and assume those are population quantities rather than sample quantities: we're told that $\mu=50$ and that the 2.5% and 97.5% points are 31 and 69, both 19 units away from 50. If this were a normal distribution, that would imply 1.96 standard deviations is 19 (or 19.5 with a continuity correction). This is pretty rough, and the continuity correction may not actually help in this case, so I'll leave it aside.

That gives $\sigma\approx 9.7$ (the continuity corrected value would be nearer to 10) and $\sigma^2\approx 94$.

Consequently $\mu/\sigma^2\approx 0.532$ should be an approximation for $p$, and $n$ should then be about $\mu/(1/p-1)\approx 56.8$.

Those values aren't quite right -- if we compute the quantiles using the negative binomial cdf we get 32 and 70 instead of 31 and 69, but we're in the right region.
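In R, that back-calculation (treating the mean of 50 and half-width of 19 as given) might look like:

```
mu    <- 50
sigma <- 19 / qnorm(0.975)      # about 9.69
p     <- mu / sigma^2           # about 0.532
n     <- mu / (1/p - 1)         # about 56.8
qnbinom(c(0.025, 0.975), size = n, prob = p)   # 32 and 70, not 31 and 69
```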

If we assume that $\mu=50$ is known rather than estimated, and try a variety of values for $p$, we see that we can't actually achieve both of those quantiles at the same time.
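For instance, holding $\mu=50$ fixed (so that $n = \mu p/(1-p)$) and scanning a grid of $p$ values:

```
mu <- 50
p  <- seq(0.40, 0.65, by = 0.01)   # arbitrary grid around the rough estimate
n  <- mu * p / (1 - p)             # keeps the mean fixed at 50
lo <- qnbinom(0.025, size = n, prob = p)
hi <- qnbinom(0.975, size = n, prob = p)
cbind(p, n, lo, hi)   # no row has lo = 31 and hi = 69 together
```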

Even so, the $(n,p) = (56.8, 0.532)$ combination isn't a bad compromise. Of course, if $\mu=50$ isn't actually known but is itself an estimate, then a different value of $\mu$ might get you closer to those points -- though under sampling, the quantiles will typically have larger uncertainty than the mean.

To say much more would require more explicit details in the question about what, exactly, we're dealing with.

As an opening comment (which you should not use to try to influence your management, especially given that you are an intern): Type I service level is almost always a terrible objective. It measures the probability that you suffer no stockouts over the leadtime, but from a business perspective what you'd far more often like to measure is the expected number of stockouts over the order cycle divided by the expected demand over the order cycle, i.e., the fraction of demand that goes unfilled. In some situations, e.g. stocking a helicopter for rescue missions, where a stockout could mean permanent disability or death, Type I service level is appropriate, but not in the usual business case. I won't go into more detail, since it's off-topic, but I include it here for your future information.

The Type I service level given an order point of $s$ is nothing more than the probability of seeing no more than $s$ demands over the leadtime. As such, if you have a probability distribution of leadtime demand $F(x)$ and a target service level $\alpha$, the corresponding order point $s_{\alpha}$ is:

$$s_{\alpha} = F^{-1}_{LTD}(\alpha)$$

where $LTD$ refers to "leadtime demand". In the case of the Normal distribution, it so happens that this is the same as:

$$s_{\alpha} = \mu_{LTD} + \sigma_{LTD}\, z_{\alpha}$$

where $\mu_{LTD}$ is the mean demand and $\sigma_{LTD}$ is the standard deviation of demand over the leadtime. Where does $z_{\alpha}$ come from? It's the inverse of the standard Normal distribution's cumulative distribution function at $\alpha$:

$$z_{\alpha} = F^{-1}(\alpha | \mu=0, \sigma=1)$$

and

$$F^{-1}(\alpha | \mu_{LTD}, \sigma_{LTD}) = \mu_{LTD} + \sigma_{LTD} F^{-1}(\alpha|\mu=0,\sigma=1) $$

So using $z_{\alpha}$ with the Normal distribution is the same as calculating the inverse of the cumulative distribution of leadtime demand, where leadtime demand is Normally distributed.
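For example, with hypothetical values $\mu_{LTD}=100$ and $\sigma_{LTD}=10$:

```
alpha <- 0.95
mu    <- 100   # hypothetical leadtime-demand mean
sigma <- 10    # hypothetical leadtime-demand standard deviation
s1 <- qnorm(alpha, mean = mu, sd = sigma)   # inverse cdf of LTD directly
s2 <- mu + sigma * qnorm(alpha)             # mu + sigma * z_alpha
all.equal(s1, s2)   # TRUE -- the two formulas agree
```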

When you aren't working with the Normal distribution, you won't typically have any equivalent formula to make life appear simpler. This is true of both the Poisson and Negative Binomial distributions. In the case of these two distributions, it's most straightforward just to calculate order point from the initial equation, using the appropriate parameters for the leadtime demand distribution in question.

ETA:
For example, assume your daily demand has mean $0.2$ units and standard deviation $0.6$ units, and you have a leadtime of one week (seven days). Then $\mu_{LTD} = 7\times 0.2 = 1.4$ and, assuming independent daily demands, $\sigma_{LTD} = \sqrt{7\times 0.6^2} \approx 1.59$. Matching moments ($p = \mu/\sigma^2$, $r = \mu p/(1-p)$) gives negative binomial parameters $r = 1.75$ and $p \approx 0.556$.

If you want to stock to a Type I service level of $95\%$, you'd solve for:

$$s_{\alpha} = F^{-1}_{LTD}(\alpha) = F^{-1}(\alpha=0.95;r=1.75,p=0.556) = 5$$

and your order point would be $5$.
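In R, the whole calculation (using the unrounded $p = 1.4/2.52$) is just:

```
mu     <- 7 * 0.2            # leadtime-demand mean: 1.4
sigma2 <- 7 * 0.6^2          # leadtime-demand variance: 2.52
p <- mu / sigma2             # about 0.556
r <- mu * p / (1 - p)        # 1.75
qnbinom(0.95, size = r, prob = p)   # order point: 5
```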

## Best Answer

Since $(1+\frac{\hat{R}}{k})^{-k}$ is a decreasing function of $k$, the function $\hat{p}_0 - (1+\frac{\hat{R}}{k})^{-k}$ will have at most one zero. However, when the lower asymptote of $(1+\frac{\hat{R}}{k})^{-k}$, namely $e^{-\hat{R}}$, is larger than $\hat{p}_0$, no zero will be found. Here's some simple R code:
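A minimal sketch using `uniroot`, assuming $\hat{R}$ is the sample mean and $\hat{p}_0$ the observed proportion of zeros (the numeric values below are just for illustration):

```
# f(k) = p0.hat - (1 + R.hat/k)^(-k); increasing in k, so at most one zero
f <- function(k, R.hat, p0.hat) p0.hat - (1 + R.hat / k)^(-k)

R.hat  <- 2    # illustrative sample mean
p0.hat <- 0.3  # illustrative proportion of zeros

# A zero exists only if p0.hat exceeds the lower asymptote exp(-R.hat)
if (p0.hat > exp(-R.hat)) {
  k.hat <- uniroot(f, interval = c(1e-8, 1e8), tol = 1e-10,
                   R.hat = R.hat, p0.hat = p0.hat)$root
} else {
  k.hat <- NA   # no finite solution for k
}
k.hat
```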