[Math] divergence in image processing

calculus, image processing, machine learning

What is the difference between gradient and divergence?

I understand that the gradient points in the direction of steepest ascent and that the divergence measures source strength, but I can't relate this to the concept of divergence in image processing.

What is divergence in image processing, and how is it related to the gradient?

I have also asked a related question: https://dsp.stackexchange.com/questions/14606/anisotropic-diffusion. I couldn't understand the mathematics in it, but I roughly understood the theory. I need to understand the mathematics for the implementation.

I am trying to understand the equations in the linked question: what can be concluded from each of them, and how do they differ from one another?


Best Answer

The gradient is the directional rate of change of a scalar function in $\mathbb{R}^n$, whereas the divergence measures the amount of output vs. input per unit volume of a vector-valued "flow" in $\mathbb{R}^n$.

The gradient points in the direction of steepest increase, and its magnitude is the rate of change in that direction: $$ \nabla f(\vec{x})=\left\langle\frac{\partial}{\partial x_1}f,\frac{\partial}{\partial x_2}f,\dots,\frac{\partial}{\partial x_n}f\right\rangle $$ For example, the gradient of the distance from a given point is a vector field of unit-length vectors pointing away from that point.
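That distance-field example can be checked numerically. The sketch below uses NumPy and finite differences via `np.gradient` (the library and grid are illustrative choices, not from the post):

```python
import numpy as np

# f(x, y) = distance from the origin, sampled on an 11x11 grid.
ys, xs = np.meshgrid(np.arange(-5, 6), np.arange(-5, 6), indexing="ij")
f = np.sqrt(xs**2 + ys**2)

# np.gradient returns one array per axis: (df/dy, df/dx).
gy, gx = np.gradient(f)

# At (x, y) = (0, 4) the exact gradient is (0, 1): unit length,
# pointing radially away from the origin (straight up the y-axis).
i, j = 4 + 5, 0 + 5          # grid indices for y = 4, x = 0
print(gx[i, j], gy[i, j])    # 0.0 1.0
```

Interior points use central differences, so for this field the unit-length, radially outward gradient comes out exactly on grid points aligned with the axes.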

The divergence, on the other hand, measures the amount of flow out of a given volume minus the amount of flow into it: $$ \nabla\cdot\vec{f}(\vec{x})=\frac{\partial}{\partial x_1}f_1+\frac{\partial}{\partial x_2}f_2+\dots+\frac{\partial}{\partial x_n}f_n $$ For example, the divergence of a flow with no sources or sinks is $0$; if there is a net source the divergence is positive, and if there is a net sink it is negative.
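A quick numerical illustration of the source case (again a NumPy sketch with an illustrative flow, not something from the post): the radial flow $\vec{v}(x,y)=(x,y)$ has $\nabla\cdot\vec{v}=\partial x/\partial x+\partial y/\partial y=2$ everywhere, i.e. every point acts as a source.

```python
import numpy as np

# Radial flow v(x, y) = (x, y): every point acts as a source.
ys, xs = np.meshgrid(np.arange(-5, 6), np.arange(-5, 6), indexing="ij")
vx, vy = xs.astype(float), ys.astype(float)

# div v = d(vx)/dx + d(vy)/dy, computed by finite differences.
div = np.gradient(vx, axis=1) + np.gradient(vy, axis=0)
print(div[5, 5])  # 2.0 -- a constant net source strength
```

Because the components are linear, the finite differences are exact and the divergence is $2.0$ at every grid point.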


The Divergence of the Gradient

The $\color{#C00000}{\text{divergence}}$ of the $\color{#00A000}{\text{gradient}}$ is also called the $\color{#0000FF}{\text{Laplacian}}$: $$ \color{#C00000}{\nabla\cdot}\color{#00A000}{\nabla}=\color{#0000FF}{\Delta} $$ which is given by $$ \Delta f(\vec{x})=\frac{\partial^2}{\partial x_1^2}f+\frac{\partial^2}{\partial x_2^2}f+\dots+\frac{\partial^2}{\partial x_n^2}f $$ In one dimension, it is the second derivative. In higher dimensions it behaves in a similar manner: at a minimum point of $f$, $\Delta f\gt0$ and at a maximum point of $f$, $\Delta f\lt0$.
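The identity $\nabla\cdot\nabla=\Delta$ can also be checked numerically. The sketch below (a NumPy assumption, like the earlier snippets) applies the divergence to the numerical gradient of $f(x,y)=x^2+y^2$, whose exact Laplacian is $4$; note that the origin is a minimum of $f$ and indeed $\Delta f>0$ there:

```python
import numpy as np

# f(x, y) = x^2 + y^2 has exact Laplacian 2 + 2 = 4 everywhere.
ys, xs = np.meshgrid(np.arange(-5, 6), np.arange(-5, 6), indexing="ij")
f = (xs**2 + ys**2).astype(float)

# Divergence of the gradient: apply np.gradient twice.
gy, gx = np.gradient(f)
lap = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)

# At the grid centre (the minimum of f) the Laplacian is positive.
print(lap[5, 5])  # 4.0
```

For a quadratic, central differences are exact, so the interior values equal the analytic Laplacian.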

What your first equation says is that $$ \frac{\partial}{\partial t}I=c\Delta I $$ If $c\gt0$ then the diffusion is working to fill in depressions and tear down accumulations. If $c\lt0$, the diffusion has the opposite effect.

The isotropic diffusion acts the same (constant $c$) everywhere, whereas the anisotropic diffusion acts differently depending on the size of the gradient of the field.
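A minimal sketch of one explicit time step of the isotropic equation $\partial I/\partial t=c\,\Delta I$ (the stencil, boundary handling, and value of $c$ are illustrative assumptions, not from the post):

```python
import numpy as np

def isotropic_step(I, c=0.125):
    """One explicit Euler step of dI/dt = c * Laplacian(I),
    using the 5-point Laplacian stencil with periodic boundaries."""
    lap = (np.roll(I, 1, 0) + np.roll(I, -1, 0)
           + np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4 * I)
    return I + c * lap

# A single bright spike: diffusion tears the peak down and fills in
# its neighbourhood, while the total intensity is conserved.
I = np.zeros((5, 5))
I[2, 2] = 1.0
J = isotropic_step(I)
print(J[2, 2], J[2, 1], J.sum())  # 0.5 0.125 1.0
```

The peak drops and the four neighbours rise, which is exactly the "tear down accumulations, fill in depressions" behaviour described above; the sum is unchanged because pure diffusion only moves intensity around.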

Edge Stopping Function

The idea of an edge-stopping function is to impede diffusion at an edge in an image (in a region where the magnitude of the gradient is large). That is, the function $g$ has a shape like

[figure: the diffusion coefficient $g$ plotted against the gradient magnitude, decreasing toward $0$ as the gradient grows]

When the gradient is small, diffusion proceeds much as in the isotropic case, but where the gradient is large (near an edge), diffusion is halted. The image is smoothed within regions while the edges separating them are preserved, which in turn makes the edges easier to detect.
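A rough sketch of one anisotropic (Perona–Malik-style) step, assuming the common edge-stopping choice $g(s)=e^{-(s/K)^2}$ — the specific $g$, $K$, step size, and difference scheme here are illustrative assumptions, not details from the post:

```python
import numpy as np

def anisotropic_step(I, K=0.1, dt=0.2):
    """One step of dI/dt = div(g(|grad I|) grad I) with
    g(s) = exp(-(s/K)^2): g ~ 1 in flat regions (near-isotropic
    smoothing), g -> 0 across strong edges (diffusion stops)."""
    # Differences to the four neighbours (periodic boundaries).
    dN = np.roll(I, 1, 0) - I
    dS = np.roll(I, -1, 0) - I
    dW = np.roll(I, 1, 1) - I
    dE = np.roll(I, -1, 1) - I

    def g(d):
        return np.exp(-(d / K) ** 2)

    return I + dt * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)

# A sharp step edge: the jump of 1 makes g ~ exp(-100) across it,
# so almost no intensity flows between the two halves.
I = np.zeros((4, 8))
I[:, 4:] = 1.0
J = anisotropic_step(I)
print(abs(J - I).max())  # essentially 0: the edge is preserved
```

Running the isotropic step on the same image would instead blur the jump immediately; here the tiny value of $g$ across the edge is what keeps the two flat regions separate.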
