As has been discussed elsewhere on this site, the deviance statistic does not have a $\chi^2$ distribution here. Any statistic whose d.f. increases with the sample size has a degenerate limiting distribution.
For goodness of fit, set up easy directed hypotheses such as linearity and additivity, or use the 1 d.f. test in the residuals.lrm function of the R rms package.
The deviance is a GLM concept. ZIP and ZINB models are not GLMs; they are formulated as finite mixtures of distributions that are GLMs, and can therefore be fitted easily via the EM algorithm.
These notes describe the theory of deviance concisely. If you read them you'll see the proof that the saturated model for Poisson regression has log-likelihood
$$\ell(\lambda_s)= \sum_{i:\, y_i\neq 0} \left[ y_i\log(y_i)-y_i -\log(y_i!)\right],$$
which results from the plug-in estimates $\hat{\lambda}_i = y_i$ (the terms with $y_i = 0$ vanish).
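The plug-in claim is easy to check numerically. The sketch below (an illustration, not code from any package) evaluates the Poisson log-likelihood of each observation at $\hat{\lambda}_i = y_i$ and sums over the nonzero $y_i$, exactly as in the formula above:

```python
import math

def poisson_loglik(lam, y):
    # Log-likelihood of one Poisson observation y with mean lam:
    # -lam + y*log(lam) - log(y!)
    return -lam + y * math.log(lam) - math.lgamma(y + 1)

def saturated_loglik(y):
    # Saturated model: plug in lambda_hat_i = y_i for each observation.
    # Terms with y_i = 0 contribute 0, so they can be skipped.
    return sum(poisson_loglik(yi, yi) for yi in y if yi != 0)
```

For a single observation $y_i = 2$ this returns $2\log 2 - 2 - \log(2!)$, matching the formula term by term.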
I'll proceed now with the ZIP likelihood because the math is simpler; similar results hold for the ZINB. Unfortunately for the ZIP there is no simple relationship as in the Poisson case. The $i$th observation's log-likelihood is
$$\ell_i(\phi, \lambda)=Z_i\log\!\left(\phi+(1-\phi)e^{-\lambda}\right)+ (1-Z_i)\left[-\lambda +y_i\log(\lambda) -\log(y_i!)\right].$$
The $Z_i$ are not observed, so to maximize this you would take partial derivatives w.r.t. both $\lambda$ and $\phi$, set the equations to 0, and solve for $\lambda$ and $\phi$. The difficulty is the $y_i=0$ observations: each zero can be attributed either to $\hat{\lambda}$ (a sampling zero) or to $\hat{\phi}$ (a structural zero), and without observing $Z_i$ it isn't possible to say which. If we knew the $Z_i$ values we would have no missing data and could maximize the likelihood directly: the data with the $Z_i$ observed correspond to the "complete-data" likelihood in the EM formalism.
One approach that might be reasonable is to work with the expectation w.r.t. $Z_i$ of the complete-data log-likelihood, $\mathbb{E}(\ell_i(\phi, \lambda))$, which removes the $Z_i$ and replaces them with their conditional expectations; this is part of what the EM algorithm computes (the E step) using the most recent parameter updates. I'm unaware of any literature that has studied this *expected* deviance, though.
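To make the E and M steps concrete, here is a minimal sketch of EM for an intercept-only ZIP (no covariates); the function name, starting values, and tolerances are my own choices for illustration. Real analyses would fit a ZIP regression with covariates, e.g. pscl::zeroinfl in R. The E step replaces each unobserved $Z_i$ with its conditional expectation $\tau_i = \mathbb{E}(Z_i \mid y_i)$, and the M step maximizes the expected complete-data log-likelihood, which here gives closed-form weighted updates:

```python
import math

def zip_em(y, tol=1e-8, max_iter=500):
    """EM for an intercept-only zero-inflated Poisson.
    phi = P(structural zero), lam = Poisson mean.
    Illustrative sketch only; not from any published package."""
    n = len(y)
    phi, lam = 0.5, max(sum(y) / n, 1e-8)  # crude starting values
    for _ in range(max_iter):
        # E step: tau_i = E(Z_i | y_i) under current (phi, lam).
        # Only y_i = 0 can be a structural zero, so tau_i = 0 otherwise.
        tau = [phi / (phi + (1 - phi) * math.exp(-lam)) if yi == 0 else 0.0
               for yi in y]
        # M step: closed-form weighted updates.
        new_phi = sum(tau) / n
        w = [1 - t for t in tau]  # posterior weight on the Poisson component
        new_lam = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        converged = abs(new_phi - phi) + abs(new_lam - lam) < tol
        phi, lam = new_phi, new_lam
        if converged:
            break
    return phi, lam
```

On data with many excess zeros (say 50 zeros plus 50 counts averaging 3.5), the iterates settle near $\hat\phi \approx 0.48$ and $\hat\lambda \approx 3.4$: the zeros are split between the point mass and the Poisson component exactly as the E step's $\tau_i$ dictates.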
Also, this question was asked first, so I answered this post. However, there is another question on the same topic with a nice comment by Gordon Smyth here:
deviance for zero-inflated compound poisson model, continuous data (R)
where he makes the same point (this answer is an elaboration of that comment, I'd say). The comments on that post also mention a paper which you may want to read. (Disclaimer: I have not read the paper referenced.)
Best Answer
The goodness-of-fit test based on deviance is a likelihood-ratio test between the fitted model & the saturated one (one in which each observation gets its own parameter). Pearson's test is a score test; the expected value of the score (the first derivative of the log-likelihood function) is zero if the fitted model is correct, & you're taking a greater difference from zero as stronger evidence of lack of fit. The theory is discussed in Smyth (2003), "Pearson's goodness of fit statistic as a score test statistic", Statistics and science: a Festschrift for Terry Speed.
In practice people usually rely on the asymptotic approximation of both to the chi-squared distribution - for a negative binomial model this means the expected counts shouldn't be too small. Smyth notes that the Pearson test is more robust against model mis-specification, as you're only considering the fitted model as a null without having to assume a particular form for a saturated model. I've never noticed much difference between them.
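For intuition about how the two statistics differ, here is a small sketch computing both for an intercept-only Poisson model, where the fitted mean is just the sample mean (the function name is mine; with covariates you would take fitted values from a GLM routine instead):

```python
import math

def poisson_gof(y):
    """Deviance and Pearson chi-square for an intercept-only
    Poisson model (fitted mean mu = sample mean).
    Illustrative sketch, not from any package."""
    mu = sum(y) / len(y)
    # Deviance: 2 * sum[ y*log(y/mu) - (y - mu) ], with 0*log(0) = 0
    dev = 2 * sum((yi * math.log(yi / mu) if yi > 0 else 0.0) - (yi - mu)
                  for yi in y)
    # Pearson: sum of squared residuals scaled by the variance (= mu)
    pearson = sum((yi - mu) ** 2 / mu for yi in y)
    return dev, pearson
```

Both statistics are then compared to a chi-squared distribution on $n - 1$ d.f.; as noted above, that approximation is only trustworthy when the expected counts aren't too small.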
You may want to reflect that a significant lack of fit with either tells you what you probably already know: that your model isn't a perfect representation of reality. You're more likely to be told this the larger your sample size. Perhaps a more germane question is whether or not you can improve your model, & what diagnostic methods can help you.