I am wondering about the difference between the convolution of two probability density functions and the mixture of those two densities. They are not the same, right? But what is the difference, and how can it be explained? I know that they give different pictures, but I never really understood convolution, so I cannot explain the difference to myself.
[Math] difference between convolution of two densities and mixture density
Tags: convolution, mixing
Related Solutions
Well, maybe I didn't fully understand the problem, but you wrote: "...A mathematical derivation would be helpful, also I have estimated both densities and I have the estimates for $\mu$ and $\sigma$...". Isn't that all you need to compute the skewness? You have the pdf of a mixture. Assuming you know all the parameters, just treat it like any other pdf (for example, a single Gaussian pdf). So to find the skewness we need to compute the first, second, and third moments of the mixture. For constant (non-random) weights $\pi_1 = \pi$ and $\pi_2 = 1-\pi$, the pdf of the mixture is $$f(x)=\sum_{i=1}^2\pi_i f_i(x),$$ where $f_i(x)=f(x\mid\mu_i,\sigma_i)$. It follows immediately for any moment $k$:
$$\mu^{(k)} = \mathbb{E}_{f}[X^k] = \sum_{i=1}^2{\pi_i \mathbb{E}_{f_i}[X^k]} = \sum_{i=1}^2{\pi_i \mu_i^{(k)}}.$$
$\mu^{(k)}$ is the $k$-th moment of $f$ and $\mu_i^{(k)}$ is the $k$-th moment of $f_i$.
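As a quick numerical sanity check of this moment identity, here is a Monte Carlo sketch (the parameter values are illustrative choices of mine, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative mixture parameters (my choice, not from the question).
pi1, pi2 = 0.3, 0.7
mu1, sigma1 = 0.0, 1.0
mu2, sigma2 = 2.0, 0.5

# Sample from the mixture: pick a component, then draw from it.
n = 1_000_000
comp = rng.random(n) < pi1
x = np.where(comp, rng.normal(mu1, sigma1, n), rng.normal(mu2, sigma2, n))

def gaussian_raw_moment(k, m, s):
    # E[X^k] for X ~ N(m, s^2), k = 1, 2, 3.
    return {1: m, 2: m**2 + s**2, 3: m**3 + 3 * m * s**2}[k]

# The k-th raw moment of the mixture is the weighted sum of component moments.
for k in (1, 2, 3):
    empirical = np.mean(x**k)
    theoretical = (pi1 * gaussian_raw_moment(k, mu1, sigma1)
                   + pi2 * gaussian_raw_moment(k, mu2, sigma2))
    print(f"k={k}: empirical {empirical:.3f}, theoretical {theoretical:.3f}")
```

The empirical moments of the sampled mixture agree with the weighted component moments up to Monte Carlo error.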
CORRECTION.
But we know all the moments of the Gaussian pdf.
Then $$\operatorname{E}_{f}\big[(X-\mu)^3\big]=\operatorname{E}_{f}\big[(X-\operatorname{E}_{f}[X])^3\big]=\operatorname{E}_{f}[X^3]-3\operatorname{E}_{f}[X]\operatorname{E}_{f}[X^2]+3\operatorname{E}_{f}[X]^2\operatorname{E}_{f}[X]-\operatorname{E}_{f}[X]^3$$
$$\operatorname{E}_{f}\big[(X-\operatorname{E}_{f}[X])^3\big]=\operatorname{E}_{f}[X^3]-3\operatorname{E}_{f}[X]\operatorname{E}_{f}[X^2]+2\operatorname{E}_{f}[X]^3=\mu^{(3)}-3\mu^{(1)}\mu^{(2)}+2(\mu^{(1)})^3$$
$$\operatorname{E}_{f}\big[(X-\operatorname{E}_{f}[X])^3\big]=\sum_{i=1}^2{\pi_i \mu_i^{(3)}}-3\sum_{i=1}^2{\pi_i \mu_i^{(1)}}\sum_{i=1}^2{\pi_i \mu_i^{(2)}}+2\bigg(\sum_{i=1}^2{\pi_i \mu_i^{(1)}}\bigg)^3$$
The raw moments of the Gaussian pdf are:
$$\mu_i^{(1)}=\mu_i$$
$$\mu_i^{(2)}=\mu_i^2+\sigma_i^2$$
$$\mu_i^{(3)}=\mu_i^3+3\mu_i\sigma_i^2$$
After substituting these and simplifying, one gets:
$$\operatorname{E}_{f}\big[(X-\operatorname{E}_{f}\big[x\big])^3\big]=(1-\pi ) \pi \left(\mu _1-\mu _2\right) \left((1-2 \pi ) \left(\mu _1-\mu _2\right){}^2+3 \left(\sigma _1^2-\sigma _2^2\right)\right)$$
That was the numerator of your skewness expression. Working on the denominator in a similar way, one gets:
$$ \operatorname{E}\big[ (X-\mu)^2 \big]=\pi \left((1-\pi ) \left(\mu _1-\mu _2\right){}^2+\sigma _1^2-\sigma _2^2\right)+\sigma _2^2$$
As a result the skewness will look like:
$$\gamma_1 = \frac{\operatorname{E}\big[(X-\mu)^3\big]}{\ \ \ ( \operatorname{E}\big[ (X-\mu)^2 \big] )^{3/2}} =\frac{(1-\pi ) \pi \left(\mu _1-\mu _2\right) \left((1-2 \pi ) \left(\mu _1-\mu _2\right){}^2+3 \left(\sigma _1^2-\sigma _2^2\right)\right)}{\left(\pi \left((1-\pi ) \left(\mu _1-\mu _2\right){}^2+\sigma _1^2-\sigma _2^2\right)+\sigma _2^2\right){}^{3/2}}.$$
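The closed form can be checked against a Monte Carlo estimate; here is a sketch with hypothetical parameter values of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters for illustration.
p = 0.25                 # mixture weight pi of component 1
m1, m2 = 0.0, 3.0
s1, s2 = 1.0, 2.0

# Closed-form numerator (third central moment) and denominator (variance).
num = (1 - p) * p * (m1 - m2) * ((1 - 2*p) * (m1 - m2)**2 + 3 * (s1**2 - s2**2))
var = p * ((1 - p) * (m1 - m2)**2 + s1**2 - s2**2) + s2**2
gamma1 = num / var**1.5

# Monte Carlo estimate of the same skewness.
n = 2_000_000
comp = rng.random(n) < p
x = np.where(comp, rng.normal(m1, s1, n), rng.normal(m2, s2, n))
z = x - x.mean()
gamma1_mc = np.mean(z**3) / np.mean(z**2)**1.5

print(f"closed form {gamma1:.3f}, Monte Carlo {gamma1_mc:.3f}")
```

The two numbers should agree up to sampling error.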
Hope this helps.
Yes, you can find it directly as:
$$\displaystyle \int_{-\infty}^{\infty} e^{- \tau}~u(\tau)~e^{-2(t-\tau)}~u(t-\tau)~d\tau = e^{-2t}~\int_0^t e^{\tau}~d\tau = e^{-2t}(e^t-1)$$
A plot of the result $e^{-2t}(e^t - 1) = e^{-t} - e^{-2t}$ confirms this (figure omitted).
When we have functions $f(t)u(t)$ and $g(t)u(t)$ with the Heaviside Unit Step Function, we can just write:
$$\displaystyle (f*g)(t) = \int_0^t f(\tau)~g(t-\tau) ~ d\tau$$
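To verify the worked example numerically, a sketch that discretizes the two one-sided exponentials and compares a Riemann-sum convolution against the closed form (grid spacing and range are my choices):

```python
import numpy as np

# Discretize f(t) = e^{-t} u(t) and g(t) = e^{-2t} u(t) and compare a
# numerical convolution against the closed form e^{-2t}(e^t - 1).
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
f = np.exp(-t)
g = np.exp(-2.0 * t)

# Riemann-sum approximation of the convolution integral on [0, t];
# the first len(t) entries of the full discrete convolution line up with t.
conv = np.convolve(f, g)[: len(t)] * dt

closed = np.exp(-2.0 * t) * (np.exp(t) - 1.0)
print(f"max abs error: {np.max(np.abs(conv - closed)):.2e}")
```

The maximum discrepancy is on the order of the step size `dt`, as expected for a rectangle rule.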
Having said all of that, I think it is very important to understand what is going on graphically. I recommend spending time with the examples, particularly Example $1$ in Section $3.4.1$, as they solve a general version of your example and do it both ways. It is critical to understand the graphical method, as it can keep you away from unrecognizable integrals.
This is also a useful Convolution Table. Especially review "Convolution using graphical method (1)".
Best Answer
Say you have two independent random variables $X$ and $Y$, $X$ has density $f$ and $Y$ has density $g$. The convolution $f * g$ is the density of $X + Y$ while the mixture $\frac 1 2 f + \frac 1 2 g$ is the density of $W X + (1 - W) Y$ where $W$ is a Bernoulli $\mathcal B(\frac 1 2)$ independent of $X$ and $Y$.
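To make the distinction concrete, a small simulation sketch (the two normal distributions and their parameters are illustrative choices of mine): sampling the sum $X + Y$ gives a unimodal density centered at the sum of the means, while the fair-coin mixture $WX + (1-W)Y$ is bimodal and centered at the average of the means.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Two independent normals as a concrete example (parameters are illustrative).
x = rng.normal(0.0, 1.0, n)   # X ~ N(0, 1)
y = rng.normal(4.0, 1.0, n)   # Y ~ N(4, 1)

# Convolution: density of the SUM X + Y, which is N(4, 2), unimodal.
s = x + y

# Mixture: flip a fair coin W and keep either X or Y; bimodal, and its
# variance picks up the spread between the two means.
w = rng.random(n) < 0.5
m = np.where(w, x, y)

print("sum:     mean %.2f var %.2f" % (s.mean(), s.var()))
print("mixture: mean %.2f var %.2f" % (m.mean(), m.var()))
```

Here the sum has mean $4$ and variance $2$, while the mixture has mean $2$ and variance $\tfrac12(0^2+1) + \tfrac12(4^2+1) - 2^2 = 5$, so even their first two moments differ.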