[Math] the resulting $\sigma$ after applying successive gaussian blur

Tags: computer-vision, image-processing

The Wikipedia article on Gaussian blur says:

Applying multiple, successive Gaussian blurs to an image has the same effect as applying a single, larger Gaussian blur, whose radius is the square root of the sum of the squares of the blur radii that were actually applied. For example, applying successive Gaussian blurs with radii of 6 and 8 gives the same results as applying a single Gaussian blur of radius 10, since $\sqrt{6^2+8^2}=10$.

But I can't find a proof of this. Why does it hold?

I have also found code in which people treat two successive Gaussian blurs with $\sigma_1$ and $\sigma_2$ as a single blur with $\sigma=\sqrt{\sigma_1^2 + \sigma_2^2}$.
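For what it's worth, the claim is easy to check numerically before proving it. A minimal sketch (the image, sigmas, and use of `scipy.ndimage.gaussian_filter` are my choices, not from any particular codebase):

```python
# Sanity check: blurring twice with sigma1 and sigma2 should match
# one blur with sqrt(sigma1^2 + sigma2^2), up to kernel truncation.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # arbitrary test image

s1, s2 = 1.5, 2.0
# truncate=6 keeps more of the Gaussian tails so truncation error is tiny
twice = gaussian_filter(gaussian_filter(img, s1, truncate=6.0), s2, truncate=6.0)
once = gaussian_filter(img, np.sqrt(s1**2 + s2**2), truncate=6.0)

print(np.max(np.abs(twice - once)))  # very small: the two results agree
```

The residual is nonzero only because the discrete kernels are truncated at a finite radius.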

How can we prove this conclusion?

Best Answer

It can be shown using some simple convolution theory. First, recall that convolution is associative: $$ f\ast(g\ast h) = (f\ast g )\ast h $$
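Associativity also holds for discrete (full) convolution, which is easy to verify on small arrays (the arrays below are arbitrary examples of mine, not from the post):

```python
# Discrete convolution is associative: f*(g*h) == (f*g)*h
import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, 0.5])
h = np.array([1.0, -1.0, 2.0])

lhs = np.convolve(f, np.convolve(g, h))  # f * (g * h)
rhs = np.convolve(np.convolve(f, g), h)  # (f * g) * h

print(np.allclose(lhs, rhs))
```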

Next recall that Gaussian blurring an image $I$ is simply convolving it with a Gaussian kernel $G$, where $$ G(x,y|\sigma)=(2\pi\sigma^2)^{-1}\exp\left( -\frac{x^2+y^2}{2\sigma^2} \right) $$ So Gaussian blurring twice is equivalent to convolving twice: $$ I_B=G_1\ast (G_2\ast I)=(G_1\ast G_2)\ast I = G\ast I $$ where we know that $G$ is a Gaussian kernel because the convolution of two Gaussians is a Gaussian.

Now we just need to show that $$ G(x,y|\sigma) = G\left(x,y\;\middle|\sqrt{\sigma_1^2+\sigma_2^2}\right) = G_1(x,y|\sigma_1)\ast G_2(x,y|\sigma_2) $$ One way to do this is by definition: just compute the convolution integral $$ G(x,y|\sigma)=\iint_{-\infty}^\infty G_1(\tau,\xi|\sigma_1)\, G_2(x-\tau,y-\xi|\sigma_2)\, d\tau\, d\xi $$ which eventually yields the desired result (see e.g. here to see it done).
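You can also approximate that convolution integral numerically by sampling both kernels on a grid and comparing against the claimed combined kernel. A sketch (grid extent, spacing, and sigmas are my own choices):

```python
# Numerically approximate (G1 * G2)(x, y) and compare with
# G(x, y | sqrt(s1^2 + s2^2)).
import numpy as np
from scipy.signal import fftconvolve

def gauss2d(x, y, sigma):
    """2D isotropic Gaussian kernel G(x, y | sigma)."""
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

dx = 0.05
ax = np.arange(-8, 8 + dx, dx)  # odd-length symmetric grid
X, Y = np.meshgrid(ax, ax)

s1, s2 = 1.0, 1.5
G1 = gauss2d(X, Y, s1)
G2 = gauss2d(X, Y, s2)

# Discrete approximation of the continuous convolution: sum * dx * dy
conv = fftconvolve(G1, G2, mode="same") * dx * dx
G = gauss2d(X, Y, np.sqrt(s1**2 + s2**2))

print(np.max(np.abs(conv - G)))  # tiny: the kernels match
```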

However, there is an easier way using some simple probability theory: recall that the sum of two independent random variables has a density equal to the convolution of the densities of the two variables.

So, if $A\sim\mathcal{N}(\mu_A,\sigma_A^2)$, $B\sim\mathcal{N}(\mu_B,\sigma^2_B)$ are independent, then $C = A+B$ has $C\sim\mathcal{N}(\mu_A+\mu_B,\sigma^2_A+\sigma^2_B)$.

The multivariate generalization is also true: Notice that if $X_1\sim \mathcal{N}(0,\sigma_1^2I_2)$, $X_2\sim \mathcal{N}(0,\sigma_2^2I_2)$ (with $I_2$ the $2\times 2$ identity matrix), then they have density functions $G_1$ and $G_2$ respectively. Thus, the sum $Z = X_1 + X_2$ has a density function given by $G=G_1 \ast G_2$. But we know that $Z\sim \mathcal{N}(0+0, \sigma^2_1I_2+\sigma^2_2I_2)=\mathcal{N}(0, (\sigma^2_1 +\sigma^2_2)I_2)$. Thus, the density function of $Z$ is given by: \begin{align*} p_Z(z) &= \frac{1}{\sqrt{4\pi^2|\Sigma|}}\exp\left( -\frac{1}{2}(z-0)^T\Sigma^{-1}(z-0) \right) \\ &= \frac{1}{2\pi (\sigma^2_1 +\sigma^2_2) }\exp\left( -\frac{1}{2}\frac{z^Tz}{[\sigma_1^2+\sigma_2^2]} \right) \\ &= \frac{1}{2\pi (\sigma^2_1 +\sigma^2_2) }\exp\left( -\frac{x^2+y^2}{2[\sigma_1^2+\sigma_2^2]} \right) \\ &= G\left(x,y\;\middle|\sqrt{\sigma_1^2+\sigma_2^2}\right) \\ &=: G(x,y|\sigma) \end{align*} where $z=(x,y)$ and $|\Sigma|=|(\sigma^2_1 +\sigma^2_2)I_2|=(\sigma^2_1 +\sigma^2_2)^2$.
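The variance-addition fact underlying this argument is also easy to check by sampling (the sigmas and sample size here are arbitrary illustrative choices):

```python
# Sum of independent zero-mean normals: std of the sum should be
# sqrt(s1^2 + s2^2). With s1=3, s2=4 that is 5.
import numpy as np

rng = np.random.default_rng(42)
s1, s2 = 3.0, 4.0
n = 1_000_000
z = rng.normal(0.0, s1, n) + rng.normal(0.0, s2, n)

print(z.std())  # close to 5.0
```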

This shows that $ \sigma = \sqrt{\sigma_1^2+\sigma_2^2} $, as desired.
