Solved – How to fit a discrete distribution to count data

computational-statistics, discrete-data, negative-binomial-distribution, poisson-distribution, r

I have the following histogram of count data, and I would like to fit a discrete distribution to it. I am not sure how I should go about this. [histogram of the count data]

Should I first superimpose a discrete distribution, say the negative binomial, on the histogram to obtain the parameters of that distribution, and then run a Kolmogorov–Smirnov test and check the p-value?

I am not sure if this method is correct or not.

Is there a general method to tackle a problem like this?

This is a frequency table of the count data. In my problem, I am only focusing on non-zero counts.

   Counts:     1     2     3     4     5     6     7     9    10
Frequency:  3875  2454   921   192    37    11     1     1     2
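
For reference, the table can be expanded into a vector of individual observations in R like this (a minimal sketch; the names counts, freq, and x are my own):

counts <- c(1, 2, 3, 4, 5, 6, 7, 9, 10)
freq   <- c(3875, 2454, 921, 192, 37, 11, 1, 1, 2)
x      <- rep(counts, freq)   # 7494 individual non-zero counts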

UPDATE: I used the fitdistr function in R (from the MASS package) to obtain the parameter estimate for the fit:

fitdistr(abc[abc != 0], "Poisson")
     lambda  
  1.68147852 
 (0.01497921)

I then plotted the probability mass function of the Poisson distribution on top of the histogram. [histogram with the fitted Poisson PMF overlaid]

However, it seems like the Poisson distribution fails to model the count data. Is there anything I can do?
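
For what it's worth, one way to see the lack of fit numerically rather than visually is to compare observed and expected frequencies under the fitted Poisson (a sketch, assuming the vector x built from the table above; note it ignores that the data are zero-truncated):

lam  <- 1.68147852                      # lambda from fitdistr above
obs  <- table(x)                        # observed frequencies
k    <- as.integer(names(obs))
expd <- length(x) * dpois(k, lam)       # expected frequencies under the Poisson fit
round(rbind(observed = as.numeric(obs), expected = expd), 1)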

Best Answer

Methods of fitting discrete distributions

There are three main methods* used to fit (estimate the parameters of) discrete distributions.

1) Maximum Likelihood

This finds the parameter values that give the best chance of producing your sample (given the other assumptions, like independence, constant parameters, etc.).
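
For concreteness, here is a minimal sketch of ML done directly, by minimizing the negative log-likelihood of a negative binomial with optim (the function negll and the starting values are my own; the parameters are optimized on the log scale to keep them positive):

negll <- function(logpar, x) {
  size <- exp(logpar[1]); mu <- exp(logpar[2])
  -sum(dnbinom(x, size = size, mu = mu, log = TRUE))
}
fit <- optim(log(c(1, mean(x))), negll, x = x)
exp(fit$par)   # ML estimates of (size, mu)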

2) Method of moments

This finds the parameter values that make the first few population moments match your sample moments. It's often fairly easy to do, and in many cases it yields quite reasonable estimators. It's also sometimes used to supply starting values to ML routines.
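
For the negative binomial, for example, matching the first two moments gives closed-form estimates: mu-hat = x-bar and size-hat = x-bar^2 / (s^2 - x-bar). A sketch (it requires the sample variance to exceed the mean):

m <- mean(x); v <- var(x)
c(size = m^2 / (v - m), mu = m)   # method-of-moments estimates; needs v > m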

3) Minimum chi-square

This minimizes the chi-square goodness-of-fit statistic over the parameters of the discrete distribution, though with larger data sets the end categories are sometimes combined for convenience. It often works fairly well, and it arguably even has some advantages over ML in particular situations, but it generally must be iterated to convergence, at which point most people tend to prefer ML.
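
Here is a sketch of minimum chi-square for the negative binomial, treating the top cell as "max(x) or more" so the cell probabilities sum to one (the cell choices and starting values are my own):

K   <- max(x)
n   <- length(x)
obs <- tabulate(x + 1, nbins = K + 1)   # frequencies of 0, 1, ..., K
chisq <- function(logpar) {
  size <- exp(logpar[1]); mu <- exp(logpar[2])
  p <- dnbinom(0:K, size = size, mu = mu)
  p[K + 1] <- 1 - sum(p[1:K])           # last cell absorbs the upper tail
  sum((obs - n * p)^2 / (n * p))
}
fit <- optim(log(c(1, mean(x))), chisq)
exp(fit$par)                            # minimum chi-square estimates of (size, mu)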

The first two methods are also used for continuous distributions; the third is usually not used in that case.

These by no means comprise an exhaustive list; it would be quite possible to estimate parameters by minimizing the KS statistic, for example, and even (if you adjust for the discreteness) to get a joint consonance region from it, if you were so inclined. Since you're working in R, ML estimation is quite easy to achieve for the negative binomial. If your sample were in x, it's as simple as library(MASS); fitdistr(x, "negative binomial"):

> library(MASS) 
> x <- rnegbin(100,7,3)
> fitdistr(x, "negative binomial")
     size         mu    
  3.6200839   6.3701156 
 (0.8033929) (0.4192836)

Those are the parameter estimates and their (asymptotic) standard errors.

In the case of the Poisson distribution, MLE and MoM both estimate the Poisson parameter at the sample mean.
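
A quick check of that fact (assuming x as before):

fitdistr(x, "Poisson")$estimate   # lambda-hat
mean(x)                           # identical to lambda-hat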

If you'd like to see examples, you should post some actual counts. Note that your histogram has been drawn with bins chosen so that the 0 and 1 categories are combined, and we don't have the raw counts.

As near as I can guess, your data are roughly as follows:

    Count:  0&1   2   3   4   5   6  >6    
Frequency:  311 197  74  15   3   1   0

But the big numbers will be uncertain (it depends heavily on how accurately the low counts are represented by the pixel heights of their bars), and the true frequencies could be some multiple of those numbers, such as twice as large. The raw counts affect the standard errors, so it matters whether they're about those values or twice as big.

Combining the first two groups makes the fitting a little awkward: it's possible to do, but less straightforward than with separate categories (see the sketch below). A lot of the information is in those first two groups, so it's best not to let the default histogram lump them together.
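
To illustrate, here is a sketch of how ML could handle the combined 0-and-1 cell directly, using the guessed frequencies above (all numbers and names are illustrative):

freq <- c(311, 197, 74, 15, 3, 1)   # cells: {0,1}, 2, 3, 4, 5, 6
negll <- function(logpar) {
  size <- exp(logpar[1]); mu <- exp(logpar[2])
  p01  <- dnbinom(0, size, mu = mu) + dnbinom(1, size, mu = mu)
  -(freq[1] * log(p01) + sum(freq[-1] * dnbinom(2:6, size, mu = mu, log = TRUE)))
}
fit <- optim(log(c(1, 1.5)), negll)
exp(fit$par)   # ML estimates of (size, mu) from the grouped data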


* Other methods of fitting discrete distributions are possible, of course (one might match quantiles or minimize other goodness-of-fit statistics, for example). The ones I mention appear to be the most common.