[Math] Why is the PDF of the sum of two continuous random variables the convolution of the PDFs?

intuition, probability, probability distributions, random variables

I took a probability course last year, but we didn't cover that notion.

I do know the steps in the discrete case: finding the support of $X + Y$, calculating the probability of each element of the support, and drawing up a table. That process is intuitive and self-explanatory.
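Those discrete steps can be sketched in code. This is a minimal illustration, assuming a hypothetical example where $X$ and $Y$ are two independent fair six-sided dice:

```python
from itertools import product
from fractions import Fraction

# Hypothetical example: X and Y are independent fair six-sided dice.
die = {k: Fraction(1, 6) for k in range(1, 7)}

# Steps 1 and 2: enumerate all (x, y) pairs; each attainable sum z
# accumulates the probability of every pair that produces it.
pmf_sum = {}
for (x, px), (y, py) in product(die.items(), die.items()):
    pmf_sum[x + y] = pmf_sum.get(x + y, 0) + px * py

# Step 3: the "table" for the distribution of X + Y.
for z in sorted(pmf_sum):
    print(z, pmf_sum[z])
```

The support runs from 2 to 12, and the probabilities are exactly the familiar triangular distribution of two dice.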

I know convolution mostly as a tool for inverting a product of Laplace/Fourier transforms, which is why it's hard for me to see an analogy between inverting such a transform and computing the PDF of the sum of two random variables.

I'd like to know the intuition behind

$$f_Z(z)=\int^{\infty}_{-\infty}f(x,z-x)dx$$

or just $$f_Z(z) = \int_{- \infty}^{\infty} f_X(x)f_Y(z-x)\;dx $$

when $X$ and $Y$ are independent,

not a proof, as I already found some on this site and elsewhere.

Best Answer

I don't know what proofs you've seen, but the basic idea is that, if $X+Y=Z$, then $Y = Z-X$. You then sum over all possible values of $X$, weighting each by its probability.

Let's pass to discrete probability, just to keep things simple. Let $X$ and $Y$ be independent discrete random variables, and $Z = X+Y$. Notice that, by independence, $$ P(Z=z \mid X = x) = P(Y = z-x) $$ So, by the law of total probability, $$ P(Z=z) = \sum_{x} P(Z = z \mid X= x) P(X=x) = \sum_x P(Y=z-x) P(X=x) $$ If you set $f_X(x) = P(X=x)$ and so on, this amounts to $$ f_Z(z) = \sum_{x} f_X(x) f_Y(z-x) $$
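You can check that this sum agrees with brute-force enumeration of pairs. A small sketch, using two hypothetical PMFs:

```python
from fractions import Fraction

# Hypothetical PMFs for independent X and Y.
f_X = {0: Fraction(1, 2), 1: Fraction(1, 2)}
f_Y = {0: Fraction(1, 4), 1: Fraction(3, 4)}

def f_Z(z):
    """Convolution sum: f_Z(z) = sum_x f_X(x) * f_Y(z - x)."""
    return sum(px * f_Y.get(z - x, 0) for x, px in f_X.items())

# Brute force: accumulate P(X=x) * P(Y=y) over every pair with x + y = z.
brute = {}
for x, px in f_X.items():
    for y, py in f_Y.items():
        brute[x + y] = brute.get(x + y, 0) + px * py

for z in sorted(brute):
    print(z, f_Z(z), brute[z])
```

The two computations are the same sum organized differently: the convolution fixes $z$ and runs over $x$, while the brute force runs over all pairs and bins them by their sum.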
