[Math] normalized Laplacian of Gaussian

derivatives, image processing

The Laplacian of Gaussian formula for the 2D case is
$$\operatorname{LoG}(x,y) = \frac{1}{\pi\sigma^4}\left(\frac{x^2+y^2}{2\sigma^2} - 1\right)e^{-\frac{x^2+y^2}{2\sigma^2}}.$$
In scale-space processing of digital images, to make the Laplacian of Gaussian operator invariant to scale, it is commonly said that one should normalize the $\operatorname{LoG}$ by multiplying it by $\sigma^2$, that is,
$$\operatorname{LoG}_\text{normalized}(x,y) = \sigma^2\cdot \operatorname{LoG}(x,y) = \frac{1}{\pi\sigma^2}\left(\frac{x^2+y^2}{2\sigma^2} - 1\right)e^{-\frac{x^2+y^2}{2\sigma^2}}.$$
I wonder: why multiply by $\sigma^2$ and not $\sigma^4$ or anything else?
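For reference, here is a minimal numpy sketch (my own code, just to make the two formulas above concrete):

```python
import numpy as np

def log_kernel(x, y, sigma):
    """Laplacian of Gaussian: 1/(pi*sigma^4) * ((x^2+y^2)/(2*sigma^2) - 1) * exp(-(x^2+y^2)/(2*sigma^2))."""
    r2 = x**2 + y**2
    return (1.0 / (np.pi * sigma**4)) * (r2 / (2 * sigma**2) - 1) * np.exp(-r2 / (2 * sigma**2))

def log_normalized(x, y, sigma):
    """Scale-normalized LoG: sigma^2 * LoG(x, y)."""
    return sigma**2 * log_kernel(x, y, sigma)

# The central value of the raw kernel shrinks like 1/sigma^4,
# while the normalized kernel's central value shrinks only like 1/sigma^2.
for sigma in (1.0, 2.0, 4.0):
    print(sigma, log_kernel(0.0, 0.0, sigma), log_normalized(0.0, 0.0, sigma))
```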

UPDATE

Thanks to the comments from @achille. From the perspective of dimensional analysis, in the Laplacian of Gaussian operator
$$\operatorname{LoG}(x,y,\sigma)=\frac{\partial^2 g}{\partial x^2} +\frac{\partial^2 g}{\partial y^2},$$
I think $x,y$ are variables with dimension $L$ and $\sigma$ is a parameter with dimension $L$. But what about $g$? $g$ is a function of $x,y,\sigma$,
$$g(x,y,\sigma)=\frac{1}{2\pi \sigma^2}\exp\left(-\frac{x^2+y^2}{2\sigma^2}\right),$$
and $x,y,\sigma$ all have dimension $L$, so I guess the term $\exp\left(-\frac{x^2+y^2}{2\sigma^2}\right)$ in $g$ is dimensionless, isn't it? And the term $\frac{1}{2\pi \sigma^2}$ has dimension $L^{-2}$, right? So $g$ actually has dimension $L^{-2}$, doesn't it?

Now, coming back to $\operatorname{LoG}$: it should then have dimension $L^{-4}$, shouldn't it?
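Writing that bookkeeping out explicitly (my own reading of the units, so please correct me if it is wrong):
$$\begin{aligned}
[g] &= \left[\tfrac{1}{2\pi\sigma^2}\right] = L^{-2},\\
[\operatorname{LoG}] &= \left[\tfrac{\partial^2 g}{\partial x^2}+\tfrac{\partial^2 g}{\partial y^2}\right] = L^{-2}\cdot L^{-2} = L^{-4},\\
[\sigma^2\operatorname{LoG}] &= L^{2}\cdot L^{-4} = L^{-2} = [g].
\end{aligned}$$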

UPDATE 2

The Laplacian operator is
$$\nabla^2 = \frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2},$$
and a Gaussian function's scale is $\sigma$, right? If I apply $\nabla^2$ to a Gaussian function $g(x,y,\sigma)$, what is the difference compared with applying the dimensionless operator $\sigma^2\nabla^2$?
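Here is a small symbolic sketch (using sympy, my own check, just to make the comparison concrete):

```python
import sympy as sp

x, y, sigma = sp.symbols('x y sigma', positive=True)
g = sp.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * sp.pi * sigma**2)

log = sp.simplify(sp.diff(g, x, 2) + sp.diff(g, y, 2))   # nabla^2 g
log_norm = sp.simplify(sigma**2 * log)                   # sigma^2 * nabla^2 g

print(log)       # two more inverse powers of length than g, i.e. dimension L^-4
print(log_norm)  # dimension L^-2, the same as g itself
```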

Best Answer

First, let me try to give you some intuition of why you have to normalize by scale at all. As you go from finer to coarser scales you blur the image. That makes the intensity surface more and more smooth. That, in turn, means that the amplitude of image derivatives gets smaller as you go up the scale volume. This is a problem for finding interest points, because you are looking for local extrema over scale. Without normalization you will always get the maximum at the finest scale and the minimum at the coarsest scale, and that's not what you want.

So, image derivatives are attenuated as $\sigma$ increases, and the higher the order of the derivative, the faster its amplitude decays with $\sigma$. To compensate for that, you normalize them by multiplying the $n$-th order derivative by $\sigma^n$; equivalently, $\sigma^n\,\partial^n$ is the dimensionless derivative operator, so its responses can be compared across scales. Since the LoG is a combination of second derivatives, you have to multiply it by $\sigma^2$.
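To make this concrete, here is a small numerical sketch (numpy plus scipy.ndimage, just an illustration with an assumed blob size of $t_0 = 8$ pixels): it probes a Gaussian blob with the LoG at several scales and prints the raw and $\sigma^2$-normalized responses at the blob centre.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace  # Laplacian of the Gaussian-smoothed image

t0 = 8.0                                      # blob scale in pixels (assumed test value)
y, x = np.mgrid[-128:129, -128:129]
blob = np.exp(-(x**2 + y**2) / (2 * t0**2))   # unit-amplitude Gaussian blob

for sigma in (2.0, 4.0, 8.0, 16.0, 24.0):
    raw = gaussian_laplace(blob, sigma)[128, 128]   # LoG response at the blob centre
    print(f"sigma={sigma:5.1f}  raw={raw:+.5f}  normalized={sigma**2 * raw:+.5f}")
```

The raw response has its largest magnitude at the finest scale and keeps shrinking as $\sigma$ grows, whereas the $\sigma^2$-normalized response peaks in magnitude near $\sigma = t_0$, which is exactly the behaviour you need when searching for extrema over scale.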

You can find the derivation and a better explanation of this in this paper by Tony Lindeberg.