[Math] How to numerically calculate a function from its noisy gradient using “global integration”

inverse-problems, MATLAB, numerical-linear-algebra, numerical-methods, partial-differential-equations

I have the model $\ s(x,y)=x^2+y^2, 0 \leq x \leq 1, 0 \leq y \leq 1 $.

Instead of observing the model directly, I observe its partial derivatives plus some noise $e$:

$\ p(x,y)=s_x+e, q(x,y)=s_y+e$

From measurements of p(x,y) and q(x,y) I want to estimate s(x,y). Say I know that s(0,0)=0.
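
For concreteness, this is roughly how I generate the synthetic data in MATLAB (the grid size n and noise level sigma are arbitrary illustrative choices):

```matlab
% Synthetic data for s(x,y) = x^2 + y^2 on a regular grid
% (grid size n and noise level sigma are arbitrary illustrative choices)
n = 101;  sigma = 0.05;
[X, Y] = meshgrid(linspace(0, 1, n));   % X varies along columns, Y along rows
p = 2*X + sigma*randn(n);               % noisy observation of s_x
q = 2*Y + sigma*randn(n);               % noisy observation of s_y
s_true = X.^2 + Y.^2;                   % ground truth, used only for comparison
```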

According to the gradient theorem:
$\ s(x,y) = \int_{(0,0)}^{(x,y)} [s_x,s_y]\cdot d\vec{r},$

regardless of which path we integrate along.

Unfortunately, because of the noise the measured field $[p, q]$ is no longer a gradient field, so the value of the path integral depends on the chosen path,
as a MATLAB experiment shows.
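
For example (reusing p, q and the grid from above; the two particular paths are an illustrative choice), integrating first along x and then along y gives a noticeably different surface than integrating in the other order:

```matlab
% Naive path integration of the noisy gradient along two different paths
% from (0,0) to (x,y); with exact data the two results would coincide
h = 1/(n-1);

% Path A: along the bottom edge in x first, then straight up in y
sA = repmat(cumtrapz(p(1,:))*h, n, 1) + cumtrapz(q)*h;

% Path B: along the left edge in y first, then straight across in x
sB = repmat(cumtrapz(q(:,1))*h, 1, n) + (cumtrapz(p.').')*h;

% The discrepancy is caused purely by the noise
fprintf('max path discrepancy: %g\n', max(abs(sA(:) - sB(:))));
```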

Instead one is supposed to use global integration methods, where one tries to choose an $s(x,y)$ that minimizes:

$\ \int_0^1\int_0^1 [|s_x - p|^2 + |s_y - q|^2]\, dx\, dy$

How do I do this in practice (with my example equation)?

EDIT:

One approach that I have used is this:

First we introduce linear differentiation operators: $\ s_x = D_x s, \quad s_y = D_y s$.

The result is the following system of linear equations:

$\ D_x s = p, \quad D_y s = q$

Next, find a least-squares solution to these equations.
Such a least-squares solution is supposed to be equivalent to minimizing the integral above.
How can this be shown?

The results are good but there are some drawbacks to this approach:

  1. It is difficult to implement: one must use central differences and "flatten" the 2D s matrix into a vector, etc. (see the sketch after this list).

  2. The differentiation matrices are very large (although sparse), so they may consume a lot of RAM.
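
Here is a minimal sketch of this sparse-operator approach (same n-by-n grid and data as above; the one-sided differences at the edges and the soft pinning of s(0,0)=0 are illustrative choices):

```matlab
% Sparse-operator least squares (same n-by-n grid and data as above)
h = 1/(n-1);
e = ones(n,1);

% 1D central-difference matrix, one-sided at the two endpoints
D1 = spdiags([-e, zeros(n,1), e], -1:1, n, n) / (2*h);
D1(1, 1:2)   = [-1, 1] / h;             % forward difference at the first node
D1(n, n-1:n) = [-1, 1] / h;             % backward difference at the last node

% 2D operators acting on the flattened s(:) (column-wise, index = i + (j-1)*n)
Dx = kron(D1, speye(n));                % differentiates along x (columns of s)
Dy = kron(speye(n), D1);                % differentiates along y (rows of s)

% Stack D_x*s = p(:) and D_y*s = q(:); one extra row softly pins s(0,0) = 0,
% which only fixes the otherwise undetermined additive constant
A = [Dx; Dy; sparse(1, 1, 1, 1, n^2)];
b = [p(:); q(:); 0];

s_ls = reshape(A \ b, n, n);            % sparse least-squares (QR) solution
```

With column-wise flattening the Kronecker products take care of the bookkeeping, which is the fiddly part mentioned in point 1, but the stacked matrix is still large, as noted in point 2.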

Another approach, which seems potentially simpler to code, less RAM-hungry, and faster, is to use the FFT: in Fourier space these PDEs become algebraic equations.
This is known as the Frankot-Chellappa algorithm, but unfortunately I have not gotten it to work on my example data.
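
For what it is worth, here is a minimal sketch of the Frankot-Chellappa idea as I understand it (same p, q, n as above). It assumes the data are periodic, which my $s = x^2 + y^2$ is not, and that mismatch may well be why it fails on my example without mirror padding:

```matlab
% Frankot-Chellappa-style integration in the Fourier domain (assumes the
% data are periodic; non-periodic data are usually mirror-padded first)
h = 1/(n-1);
w = 2*pi * ifftshift(-floor(n/2):ceil(n/2)-1) / (n*h);   % DFT frequencies
[WX, WY] = meshgrid(w, w);              % WX varies along x, WY along y

P = fft2(p);
Q = fft2(q);

denom = WX.^2 + WY.^2;
denom(1,1) = 1;                         % avoid 0/0 at the zero frequency
S = (-1i*WX.*P - 1i*WY.*Q) ./ denom;    % least-squares solution in Fourier space
S(1,1) = 0;                             % the mean of s is left undetermined

s_fc = real(ifft2(S));
s_fc = s_fc - s_fc(1,1);                % enforce s(0,0) = 0
```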

Best Answer

I should explain some of the math behind my comment:

any minimizing function $s$ will satisfy the Poisson equation $\Delta s=p_x+q_y$, so if you happen to know the boundary values of $s$ (or can obtain them by integration of $p,q$ along the edges), you can resort to existing Poisson solvers

Let $\mathcal H$ be the space of all vector fields* $\langle p, q\rangle$ with $L^2$ components $p,q$ defined on $\Omega=[0,1]\times [0,1]$. Let $E\subset \mathcal H$ be the set of all gradient fields; this is a linear subspace. Given $\langle p, q\rangle$ in $\mathcal H$, we seek the element $\nabla s\in E$ with the smallest distance to $\langle p, q\rangle$ -- the distance is the square root of the integral $\int_0^1\int_0^1 [|s_x - p|^2 + |s_y - q|^2] dxdy$ that you want to minimize.

On geometric grounds, $\nabla s$ is the orthogonal projection of $\langle p, q\rangle$ onto $E$, which means that $\vec f:=\nabla s-\langle p, q\rangle$ lies in the orthogonal complement of $E$. The space $E^\perp$ consists of divergence-free vector fields. Indeed, if $\vec f\in E^\perp$ then for any smooth function $\varphi$ compactly supported in $\Omega$ (so that no boundary term appears) we have (via integration by parts) $$ 0= \langle \vec f,\nabla \varphi\rangle = \iint_\Omega \vec f\cdot \nabla \varphi = -\iint_\Omega (\mathrm{div}\vec f)\,\varphi $$ hence $\mathrm{div}\vec f=0$.

Conclusion: $\mathrm{div}\,\nabla s=p_x+q_y$. And since $\mathrm{div}\,\nabla s$ is just the Laplacian of our unknown function $s$, we have a Poisson problem $\Delta s=p_x+q_y$.
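
In the question's concrete setting, this route might look roughly as follows (a hedged sketch, not production code: the grid size, noise level, and the crude treatment of the corner values are illustrative choices; the Dirichlet boundary values come from trapezoidal integration of $p,q$ along the edges, and the interior is solved with the standard 5-point Laplacian):

```matlab
% Hypothetical MATLAB sketch: solve Delta s = p_x + q_y with Dirichlet
% boundary values obtained by integrating p,q along the edges.
% n and sigma are illustrative; p,q mimic the question's noisy measurements.
n = 101;  h = 1/(n-1);  sigma = 0.05;
[X, Y] = meshgrid(linspace(0,1,n));
p = 2*X + sigma*randn(n);                        % noisy s_x
q = 2*Y + sigma*randn(n);                        % noisy s_y

% Right-hand side: divergence of the measured field
[px, ~] = gradient(p, h);                        % derivative along x (columns)
[~, qy] = gradient(q, h);                        % derivative along y (rows)
f = px + qy;

% Dirichlet data: integrate p,q along the four edges (the corner values may
% disagree slightly because of the noise; the last assignment wins here)
s = zeros(n);
s(1,:)   = cumtrapz(p(1,:))*h;                   % bottom edge, integrate p in x
s(:,1)   = cumtrapz(q(:,1))*h;                   % left edge, integrate q in y
s(end,:) = s(end,1) + cumtrapz(p(end,:))*h;      % top edge
s(:,end) = s(1,end) + cumtrapz(q(:,end))*h;      % right edge

% 5-point Laplacian on the (n-2)^2 interior nodes, boundary moved to the RHS
m  = n - 2;
e  = ones(m,1);
L1 = spdiags([e -2*e e], -1:1, m, m) / h^2;
A  = kron(speye(m), L1) + kron(L1, speye(m));

F = f(2:end-1, 2:end-1);
F(1,:)   = F(1,:)   - s(1, 2:end-1)   / h^2;
F(end,:) = F(end,:) - s(end, 2:end-1) / h^2;
F(:,1)   = F(:,1)   - s(2:end-1, 1)   / h^2;
F(:,end) = F(:,end) - s(2:end-1, end) / h^2;

s(2:end-1, 2:end-1) = reshape(A \ F(:), m, m);   % direct sparse solve
```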

(*) Remark: they are actually 1-forms, but in $\mathbb R^n$ there is no harm in calling them vector fields.
