Suppose that X is Poisson distributed with a known rate and Y is normally distributed with a known mean and variance. My goal is to approximate the distribution of Z, where P(Z) = P(X) * P(Y) and Z is a non-negative integer. I could get a good approximation by sampling, but I'd really like to have a fast solution, ideally closed-form.
Solved – Distribution of Product of Normal and Poisson
approximation, normal distribution
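A minimal Mathematica sketch of one literal reading of P(Z) = P(X) * P(Y): take the Poisson pmf times the normal density at each non-negative integer and renormalize. The parameter values and the truncation point nmax below are placeholders, not taken from the question.

lambda = 4; mu = 3; sigma = 2; nmax = 100;   (* placeholder values *)
w[n_] := PDF[PoissonDistribution[lambda], n] PDF[NormalDistribution[mu, sigma], n];
z = Table[w[n], {n, 0, nmax}];               (* unnormalized weights on 0, 1, ..., nmax *)
pz = z/Total[z];                             (* approximate pmf of Z *)

If Z is instead meant to be the product X Y itself, it is not integer-valued; that case is what the product-of-random-variables references in the Best Answer below address.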
Related Solutions
Suppose $X$ is Poisson with parameter $\lambda$, and $Y$ is normal with mean and variance $\lambda$. It seems to me that the appropriate comparison is between $\Pr(X = n)$ and $\Pr(Y \in [n-\frac12,n+\frac12])$. Here for simplicity I write $n = \lambda + \alpha \sqrt\lambda$; that is, we are interested in the case where $n$ lies $\alpha$ standard deviations from the mean.
So I cheated: I used Mathematica. Both $\Pr(X = n)$ and $\Pr(Y \in [n-\frac12,n+\frac12])$ are asymptotic to $$ \frac 1{\sqrt{2\pi \lambda}} e^{-\alpha^2/2} $$ as $\lambda \to \infty$. But their difference is asymptotic to $$ \frac{\alpha \left(\alpha ^2-3\right) e^{-\alpha ^2/{2}}}{6 \sqrt{2 \pi } \lambda } $$ If you plot this as a function of $\alpha$, you will get the same curve as is shown in the second-to-last figure at http://www.johndcook.com/blog/normal_approx_to_poisson/.
Here are the commands I used:
n = lambda + alpha Sqrt[lambda];   (* point alpha standard deviations above the mean *)
p1 = Exp[-lambda] lambda^n/n!;     (* Poisson probability Pr(X = n) *)
p2 = Integrate[1/Sqrt[2 Pi]/Sqrt[lambda] Exp[-(x-lambda)^2/2/lambda], {x, n-1/2, n+1/2}];   (* Pr(Y in [n - 1/2, n + 1/2]) *)
Series[p1, {lambda, Infinity, 1}]  (* asymptotic expansion as lambda -> Infinity *)
Series[p2, {lambda, Infinity, 1}]
Also, with a bit of experimentation, it seems to me that a better asymptotic approximation to $\Pr(X = n)$ is $\Pr(Y \in [n-\alpha^2/6,n+1-\alpha^2/6])$. Then the error is $$ -\frac{\left(5 \alpha ^4-9 \alpha ^2-6\right) e^{-{\alpha ^2}/{2}} }{72 \sqrt{2 \pi } \lambda ^{3/2} } $$ which is about $\sqrt\lambda$ times smaller.
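A quick numerical comparison of the two interval approximations against the exact Poisson probability, using the arbitrary illustrative values lambda = 100 and alpha = 3/2 (so n = 115):

lambda = 100.; alpha = 3/2; n = lambda + alpha Sqrt[lambda];     (* n = 115 *)
nd = NormalDistribution[lambda, Sqrt[lambda]];
exact = Exp[-lambda] lambda^n/n!;                                (* Pr(X = n) *)
half = CDF[nd, n + 1/2] - CDF[nd, n - 1/2];                      (* Pr(Y in [n - 1/2, n + 1/2]) *)
shifted = CDF[nd, n + 1 - alpha^2/6] - CDF[nd, n - alpha^2/6];   (* shifted interval from above *)
{exact, half, shifted}

The shifted interval should come out noticeably closer to the exact value, consistent with the error terms above.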
The binomial distribution is the distribution of the number of successes in a fixed (i.e. not random) number of independent trials with the same probability of success on each trial. Its support is the set $\{0,1,2,\ldots,n\}$, which is finite, where $n$ is the number of trials.
The negative binomial distribution is the distribution of the number of failures before a fixed (i.e. not random) number of successes, again with independent trials and the same probability of success on each trial. Its support is the set $\{0,1,2,3,\ldots\}$, which is infinite.
The Poisson distribution can be loosely characterized as the number of successes in an infinite number of independent trials with an infinitely small probability of success on each trial, in which the expected number of successes is some fixed positive number. It is a limit of the binomial distribution in which the number of trials approaches $\infty$ and the probability of success on each trial approaches $0$ in such a way that the expected number of successes remains constant or at least approaches some positive number.
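A quick numerical illustration of that limit (the values lambda = 3 and k = 2 are arbitrary): the Binomial(n, lambda/n) probability of k successes approaches the Poisson(lambda) probability as n grows.

lambda = 3; k = 2;
Table[N[PDF[BinomialDistribution[n, lambda/n], k]], {n, {10, 100, 1000, 10000}}]   (* tends toward the Poisson value *)
N[PDF[PoissonDistribution[lambda], k]]                                             (* = E^-3 3^2/2! *)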
It is true that for the binomial distribution the mean is larger than the variance, for the negative binomial distribution the mean is smaller than the variance, and for the Poisson distribution they are equal.
But it is not true that for every distribution whose support is some set of cardinal numbers, if the mean equals the variance then it is a Poisson distribution, nor that if the mean is greater than the variance it is a binomial distribution, nor that if the mean is less than the variance it is a negative binomial distribution. For example, the mean of the hypergeometric distribution that arises from sampling without replacement is greater than the variance, as with the binomial distribution, but the distribution is not the same. For the uniform distribution on the set $\{0,1,2,\ldots,n\}$, if $n>4$ then the variance is greater than the mean, as with the negative binomial distribution, but the distribution is not the same. For the uniform distribution on the set $\{0,2\}$, the variance is equal to the mean, as with the Poisson distribution, but the distribution is not the same.
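The three counterexamples are easy to check; the parameter choices below are arbitrary illustrations.

{Mean[#], Variance[#]} & /@ {
   HypergeometricDistribution[10, 20, 50],   (* without replacement: mean 4 > variance 96/49 *)
   DiscreteUniformDistribution[{0, 10}],     (* uniform on {0,...,10}, n > 4: mean 5 < variance 10 *)
   EmpiricalDistribution[{0, 2}]}            (* uniform on {0,2}: mean 1 = variance 1 *)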
If $X\sim\mathrm{Poisson}(\lambda)$ then $$ \frac{X-\lambda}{\sqrt\lambda} \overset{\text{D.}} \longrightarrow N(0,1) \text{ as } \lambda\to\infty $$ because when $\lambda$ is large, the distribution of $X$ is the same as the distribution of the sum of a large number of independent Poisson-distributed random variables, each with expected value close to $1$. That is because the sum of independent Poisson-distributed random variables is Poisson distributed, so the central limit theorem can be applied.
If $X\sim\mathrm{Binomial}(n,p)$ then $$ \frac{X-np}{\sqrt{np(1-p)}} \overset{\text{D.}}\longrightarrow N(0,1) \text{ as } n \to \infty $$ because $X$ has the same distribution as the sum of $n$ independent random variables distributed as $\mathrm{Binomial}(1,p)$, so again the central limit theorem applies.
The negative binomial distribution with parameters $r,p$ is the distribution of the number of failures before the $r$th success, with probability $p$ of success on each trial. If $X$ is so distributed then we have $$ \frac{X- r(1-p)/p }{\sqrt{r(1-p)}/p} \overset{\text{D.}} \longrightarrow N(0,1) \text{ as } r\to\infty $$ because $X$ has the same distribution as the sum of $r$ independent random variables distributed as negative binomial with parameters $1,p$, so again the central limit theorem applies.
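As an illustrative check (the parameter values are arbitrary), each standardized variable evaluated one standard deviation above its mean should give a CDF value close to $\Pr(N(0,1)\le 1)\approx 0.8413$:

lambda = 400.; CDF[PoissonDistribution[lambda], lambda + Sqrt[lambda]]                (* Poisson *)
n = 1000; p = 0.3; CDF[BinomialDistribution[n, p], n p + Sqrt[n p (1 - p)]]           (* binomial *)
r = 400; CDF[NegativeBinomialDistribution[r, p], r (1 - p)/p + Sqrt[r (1 - p)]/p]     (* negative binomial *)
CDF[NormalDistribution[0, 1], 1.]                                                     (* reference value *)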
When approximating any of these kinds of distributions with a normal distribution, note that the event $[X\le n]$ is the same as the event $[X<n+1]$, so use the continuity correction, in which you find the probability that $[X\le n+\frac 1 2]$ according to the normal distribution.
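For example (an arbitrary illustrative choice of numbers), for $X\sim\mathrm{Poisson}(100)$ the normal approximation to $\Pr(X\le 110)$ is much closer to the exact value with the correction than without it:

lambda = 100.; n = 110;
exact = CDF[PoissonDistribution[lambda], n];                          (* Pr(X <= 110) *)
uncorrected = CDF[NormalDistribution[lambda, Sqrt[lambda]], n];       (* no continuity correction *)
corrected = CDF[NormalDistribution[lambda, Sqrt[lambda]], n + 1/2];   (* with the n + 1/2 correction *)
{exact, uncorrected, corrected}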
Best Answer
There is one book dedicated to the problem of products of random variables: http://www.amazon.com/Products-Random-Variables-Applications-Arithmetical/dp/0824754026
Maybe you can find it in a library. (Or search Google Scholar for the authors' names.)
There is a connection between products of independent random variables and the Mellin transform; see the paper "Some Applications of the Mellin Transform in Statistics" by Benjamin Epstein, which is on JSTOR. There is a Wikipedia article on the Mellin transform, and searching Google Scholar for "Mellin transform product of random variables" turns up some relevant papers.
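To give a flavor of the Mellin machinery (my own illustrative example, not taken from the references): for independent positive random variables, the density of the product is the Mellin convolution of the two densities. For two Exponential(1) variables the integral comes out in closed form:

fx[x_] := PDF[ExponentialDistribution[1], x];
(* Mellin convolution: density of Z = X Y for independent positive X and Y *)
fz[z_] := Integrate[fx[x] fx[z/x]/x, {x, 0, Infinity}, Assumptions -> z > 0];
fz[z]     (* closed form: 2 BesselK[0, 2 Sqrt[z]] *)

A factor that can take negative values, like the normal variable in the question, needs extra care beyond this positive-variable sketch.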