I want to know the relationship between the binomial and the geometric distribution.

I know that both distributions involve trials with two outcomes, and that the probability of success is the same on every trial.

# Solved – Relationship between the binomial and the geometric distribution


#### Related Solutions

In a Binomial $\mathcal{B}(n,p)$ model, if $X\sim\mathcal{B}(n,p)$ is the number of successes, then $Y=n-X$ is the number of failures. Since $Y$ is a decreasing linear function of $X$, $$\text{Corr}(X,Y)=-\text{Corr}(X,X)=-1$$

Then, since we know $$\text{Cov}(X,Y)=\text{Corr}(X, Y)\text{Stdev}(X)\text{Stdev}(Y)$$ and $$\text{Stdev}(X)=\text{Stdev}(Y),$$ we can calculate the covariance as $$\text{Cov}(X, Y)=-\text{Var}(X)=-np(1-p)$$
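This is easy to check by simulation; the sketch below (my illustration, with arbitrary $n$ and $p$) estimates the correlation and covariance of $X$ and $Y=n-X$ from a large binomial sample.

```python
# Empirical check that for X ~ Binomial(n, p) and Y = n - X,
# Corr(X, Y) = -1 and Cov(X, Y) = -np(1-p).
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 0.3
X = rng.binomial(n, p, size=1_000_000)
Y = n - X

corr = np.corrcoef(X, Y)[0, 1]
cov = np.cov(X, Y)[0, 1]

print(corr)                 # -1 (exactly, up to float rounding, since Y is linear in X)
print(cov, -n * p * (1 - p))  # sample covariance close to -4.2
```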

The binomial distribution is the distribution of the number of successes in a fixed (i.e. not random) number of independent trials with the same probability of success on each trial. Its support is the set $\{0,1,2,\ldots,n\}$, which is finite, where $n$ is the number of trials.

The negative binomial distribution is the distribution of the number of failures before a fixed (i.e. not random) number of successes, again with independent trials and the same probability of success on each trial. Its support is the set $\{0,1,2,3,\ldots\}$, which is infinite.
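The finite-versus-infinite support distinction can be seen numerically. This sketch (my illustration; scipy's `nbinom(r, p)` counts failures before the $r$th success, matching the convention above) shows that the binomial pmf sums to $1$ over $\{0,\ldots,n\}$, while truncating the negative binomial at any finite point leaves probability mass out.

```python
# binom(n, p) lives on {0, ..., n}; nbinom(r, p) lives on {0, 1, 2, ...}.
import numpy as np
from scipy.stats import binom, nbinom

n, r, p = 10, 5, 0.4
ks = np.arange(0, n + 1)

# The binomial pmf sums to 1 over its finite support.
print(binom.pmf(ks, n, p).sum())                  # 1.0 (up to rounding)

# The negative binomial needs arbitrarily large k: stopping at n misses mass.
print(nbinom.pmf(ks, r, p).sum())                 # noticeably less than 1
print(nbinom.pmf(np.arange(0, 200), r, p).sum())  # essentially 1
```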

The Poisson distribution can be loosely characterized as the number of successes in an infinite number of independent trials with an infinitely small probability of success on each trial, in which the expected number of successes is some fixed positive number. It is a limit of the binomial distribution in which the number of trials approaches $\infty$ and the probability of success on each trial approaches $0$ in such a way that the expected number of successes remains constant or at least approaches some positive number.
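The limit described above can be checked directly: holding $\lambda$ fixed while $n$ grows, the $\mathrm{Binomial}(n,\lambda/n)$ pmf approaches the $\mathrm{Poisson}(\lambda)$ pmf. A small numerical sketch (my illustration, arbitrary $\lambda$):

```python
# Binomial(n, lam/n) pmf approaches Poisson(lam) as n grows, lam fixed.
import numpy as np
from scipy.stats import binom, poisson

lam = 3.0
ks = np.arange(0, 15)
for n in (10, 100, 10_000):
    gap = np.max(np.abs(binom.pmf(ks, n, lam / n) - poisson.pmf(ks, lam)))
    print(n, gap)  # the maximum pmf gap shrinks toward 0
```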

It is true that for the binomial distribution the mean is larger than the variance, for the negative binomial distribution the mean is smaller than the variance, and for the Poisson distribution they are equal.
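These three mean/variance orderings are easy to verify with `scipy.stats` (parameters below are my arbitrary choices for illustration):

```python
# Mean vs. variance for the three distributions discussed above.
from scipy.stats import binom, nbinom, poisson

n, p, r, lam = 12, 0.3, 4, 2.5

bm, bv = binom.stats(n, p)    # mean np, variance np(1-p)
nm, nv = nbinom.stats(r, p)   # failures before the r-th success
pm, pv = poisson.stats(lam)

print(bm > bv)   # True: binomial mean exceeds its variance
print(nm < nv)   # True: negative binomial mean is below its variance
print(pm == pv)  # True: Poisson mean equals its variance
```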

**But** it is **not** true that for every distribution whose support is some set of cardinal numbers, if the mean equals the variance then it is a Poisson distribution, **nor** that if the mean is greater than the variance it is a binomial distribution, **nor** that if the mean is less than the variance it is a negative binomial distribution. For example, the mean of the hypergeometric distribution that arises from sampling without replacement is greater than the variance, as with the binomial distribution, but the distribution is not the same. For the uniform distribution on the set $\{0,1,2,\ldots,n\}$, if $n>4$ then the variance is greater than the mean, as with the negative binomial distribution, but the distribution is not the same. For the uniform distribution on the set $\{0,2\}$, the variance is equal to the mean, as with the Poisson distribution, but the distribution is not the same.
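Two of these counterexamples can be confirmed with a few lines of arithmetic (my illustration): the uniform distribution on $\{0,2\}$ has mean equal to variance without being Poisson, and the discrete uniform on $\{0,\ldots,n\}$ with $n>4$ has variance exceeding its mean.

```python
# Counterexamples: mean/variance relations do not pin down the family.
import numpy as np

# Uniform on {0, 2}: mean 1 and variance 1, yet not Poisson.
vals = np.array([0, 2])
mean = vals.mean()
var = ((vals - mean) ** 2).mean()
print(mean, var)  # 1.0 1.0

# Discrete uniform on {0, ..., 5}: variance exceeds the mean.
support = np.arange(0, 6)
print(support.mean(), support.var())  # 2.5 vs about 2.92
```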

If $X\sim\mathrm{Poisson}(\lambda)$ then $$ \frac{X-\lambda}{\sqrt\lambda} \overset{\text{D.}} \longrightarrow N(0,1) \text{ as } \lambda\to\infty $$ because when $\lambda$ is large, the distribution of $X$ is the same as the distribution of the sum of a large number of independent Poisson-distributed random variables, each with mean near $1$. That is because the sum of independent Poisson-distributed random variables is Poisson distributed, so the central limit theorem can be applied.
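One way to see this normal limit numerically (my illustration, arbitrary large $\lambda$) is to compare the exact Poisson cdf with the normal approximation over a range of values:

```python
# Normal approximation to Poisson(lam) for large lam.
import numpy as np
from scipy.stats import norm, poisson

lam = 400.0
ks = np.arange(300, 501)
exact = poisson.cdf(ks, lam)
approx = norm.cdf((ks + 0.5 - lam) / np.sqrt(lam))  # with continuity correction
print(np.max(np.abs(exact - approx)))  # small: the cdfs nearly coincide
```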

If $X\sim\mathrm{Binomial}(n,p)$ then $$ \frac{X-np}{\sqrt{np(1-p)}} \overset{\text{D.}}\longrightarrow N(0,1) \text{ as } n \to \infty $$ because $X$ has the same distribution as the sum of $n$ independent random variables distributed as $\mathrm{Binomial}(1,p)$, so again the central limit theorem applies.
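The decomposition used here — a $\mathrm{Binomial}(n,p)$ variable as a sum of $n$ independent $\mathrm{Binomial}(1,p)$ variables — can be sketched by simulation (my illustration, arbitrary parameters):

```python
# Summing n independent Binomial(1, p) draws reproduces Binomial(n, p).
import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 30, 0.4, 200_000
as_sum = rng.binomial(1, p, size=(reps, n)).sum(axis=1)

print(as_sum.mean())  # near np = 12
print(as_sum.var())   # near np(1-p) = 7.2
```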

The negative binomial distribution with parameters $r,p$ is the distribution of the number of failures before the $r$th success, with probability $p$ of success on each trial. If $X$ is so distributed then we have $$ \frac{X- r(1-p)/p }{\sqrt{r(1-p)}/p} \overset{\text{D.}} \longrightarrow N(0,1) \text{ as } r\to\infty $$ because $X$ has the same distribution as the sum of $r$ independent random variables distributed as negative binomial with parameters $1,p$, so again the central limit theorem applies.
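This sum-of-independent-copies structure can also be checked by simulation (my illustration; numpy's `negative_binomial(r, p)` counts failures before the $r$th success, matching the convention above):

```python
# The failure count before the r-th success equals a sum of r copies
# of the r = 1 case; compare sample moments with the exact ones.
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(2)
r, p, reps = 8, 0.35, 200_000
as_sum = rng.negative_binomial(1, p, size=(reps, r)).sum(axis=1)

mean, var = nbinom.stats(r, p)      # r(1-p)/p and r(1-p)/p**2
print(as_sum.mean(), float(mean))   # both near 14.86
print(as_sum.var(), float(var))     # both near 42.4
```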

When approximating any of these kinds of distributions with a normal distribution, note that the event $[X\le n]$ is the same as the event $[X<n+1]$, so use the continuity correction, in which you find the probability that $[X\le n+\frac 1 2]$ according to the normal distribution.
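The continuity correction made concrete (my illustration, arbitrary parameters): approximate $P(X\le k)$ for $X\sim\mathrm{Binomial}(n,p)$ by evaluating the normal cdf at $k+\frac 1 2$ rather than at $k$.

```python
# Continuity correction for the normal approximation to a binomial cdf.
import numpy as np
from scipy.stats import binom, norm

n, p, k = 50, 0.4, 22
mu, sd = n * p, np.sqrt(n * p * (1 - p))

exact = binom.cdf(k, n, p)
plain = norm.cdf((k - mu) / sd)
corrected = norm.cdf((k + 0.5 - mu) / sd)

print(exact, plain, corrected)  # the corrected value is the closer one
```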

## Best Answer

The binomial distribution describes the number of successes $k$ achieved in $n$ trials, where the probability of success is $p$. The negative binomial distribution describes the number of successes $k$ until observing $r$ failures (so any number of trials of at least $r$ is possible), where the probability of success is $p$. The geometric distribution is the special case of the negative binomial distribution in which the experiment is stopped at the first failure ($r=1$). So while the geometric distribution is not exactly related to the binomial distribution, it is related to the negative binomial distribution.

If you are looking to learn more about probability distributions, you can check the Statistics 110: Probability lectures by Joe Blitzstein from Harvard University, which are freely available online.
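The geometric-as-$r=1$-negative-binomial relationship can be checked numerically (my illustration). Note that scipy's conventions differ slightly from the answer's: `nbinom(1, p)` counts failures before the first success, while `geom(p)` counts trials up to and including it, so the two pmfs match after a shift of one.

```python
# geometric = r = 1 negative binomial, up to scipy's indexing conventions.
import numpy as np
from scipy.stats import geom, nbinom

p = 0.3
ks = np.arange(0, 10)
print(np.allclose(nbinom.pmf(ks, 1, p), geom.pmf(ks + 1, p)))  # True
```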