[Math] How to numerically solve the Poisson equation given Neumann boundary conditions

harmonic-functions, numerical-linear-algebra, numerical-methods, partial-differential-equations, poisson's-equation

I want to solve the Poisson equation on a 2D domain given Neumann-type boundary conditions:

The PDE:
\begin{equation}
\nabla^2 \; u(r,\theta) \;=\; f(r,\theta)
\end{equation}

The boundary conditions:
\begin{align}
\frac{\partial u}{\partial r} \big|_{r\rightarrow r_{min}} =&\; A(r_{min},\theta)\\
\frac{\partial u}{\partial r} \big|_{r\rightarrow r_{max}} =&\; A(r_{max},\theta) \\
\frac{\partial u}{\partial \theta} \big|_{\theta\rightarrow \theta_{min}} =&\; B(r,\theta_{min})\\
\frac{\partial u}{\partial \theta} \big|_{\theta\rightarrow \theta_{max}} =&\; B(r,\theta_{max})
\end{align}

The solution to this problem is unique only up to an additive, uniform constant: if $u(r,\theta)$ is a solution, then so is
\begin{equation}
u(r,\theta) + c\text{,} \qquad \text{where $c\in\mathbb{R}$ (so that $\vec{\nabla}c=0$).}
\end{equation}

That's fine with me, since I am only interested in the gradient of $u(r,\theta)$ anyway.

I try to solve this equation implicitly using a 2nd order, 2D finite difference (FD) approach, with a centered FD scheme for the first and second derivatives in the interior and a right- or left-sided FD scheme for the boundaries (to avoid using ghost points). On the boundaries, I substitute the boundary constraint functions for the appropriate first derivative. The matrix equation looks vaguely like the following:

\begin{equation}
\begin{bmatrix} & & & & & & & & & \\ & & & & & & & & & \\ & & & & & & & & & \\ & & & & & & & & & \\ & & & & & M & & & & \\ & & & & & & & & & \\ & & & & & & & & & \\ & & & & & & & & & \\ \end{bmatrix} \begin{bmatrix} u_{1,1} \\ u_{1,2} \\ u_{1,3} \\ \dots \\ u_{2,1} \\ u_{2,2} \\ \dots \\ u_{N,M} \end{bmatrix} = \begin{bmatrix} f_{1,1} \\ f_{1,2} \\ f_{1,3} \\ \dots \\ f_{2,1} \\ f_{2,2} \\ \dots \\ f_{N,M} \end{bmatrix}
\end{equation}

The problem is that the matrix is singular. This is to be expected, however, since the solution is not unique! An infinite number of solutions exist, so the matrix equation does not have a unique solution either.
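To see the singularity concretely, here is a minimal sketch assuming a hypothetical 1D analogue of such a discretization (Python is used purely for illustration): the constant vector spans the null space, so the FD matrix has rank one less than its size.

```python
import numpy as np

# Hypothetical 1D analogue: second-order FD Laplacian with one-sided
# first-derivative rows enforcing the Neumann conditions at both ends.
n = 6
M = np.zeros((n, n))
for i in range(1, n - 1):
    M[i, i - 1:i + 2] = [1.0, -2.0, 1.0]   # centered u'' stencil (times h^2)
M[0, :3] = [-3.0, 4.0, -1.0]               # forward difference for u' (times 2h)
M[-1, -3:] = [1.0, -4.0, 3.0]              # backward difference for u' (times 2h)

# Every row annihilates the constant vector, so M is singular.
print(np.linalg.norm(M @ np.ones(n)))      # 0.0
print(np.linalg.matrix_rank(M))            # n - 1 = 5
```

Any constant shift of a solution is another solution, which is exactly the non-uniqueness described above.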

My question is this:

How can I numerically solve this problem?

Should I specify the value of $u(r,\theta)$ at some point to make the solution unique (e.g. $u(r^\star,\theta^\star)=c$)? If so, how can this be done? If not, how else could I solve the equation—without a singular matrix? Is there a way to solve directly for $\vec{\nabla} u(r,\theta)$?

I am happy to use a pre-made matrix solver routine, as long as I know which algorithm (e.g. SVD, LU, etc.) it uses.
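For reference, an SVD-based least-squares routine (e.g. `numpy.linalg.lstsq`; Python is used here only as an illustration, and the small 1D matrix is a hypothetical stand-in for $M$) copes with the singular matrix directly: it returns the minimum-norm solution, and the gradient is unaffected by the arbitrary additive constant.

```python
import numpy as np

# Hypothetical 1D Neumann-style singular system (M @ ones = 0).
n = 6
M = np.zeros((n, n))
for i in range(1, n - 1):
    M[i, i - 1:i + 2] = [1.0, -2.0, 1.0]
M[0, :3] = [-3.0, 4.0, -1.0]
M[-1, -3:] = [1.0, -4.0, 3.0]

f = M @ np.arange(n, dtype=float)  # consistent RHS built from a known solution

# lstsq uses the SVD: it discards the (numerically) zero singular value and
# returns the minimum-norm solution, fixing the free additive constant.
u, *_ = np.linalg.lstsq(M, f, rcond=None)

# Any two solutions differ by a constant, so differences (the "gradient")
# match the known solution u_exact = 0, 1, ..., n-1 exactly.
print(np.allclose(np.diff(u), 1.0))  # True
```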

I think this is generally applicable!

Best Answer

The matrix has incomplete rank, so the system is solvable only if $$ \operatorname{rank} M = \operatorname{rank} \begin{pmatrix}M \;\big|\; f\end{pmatrix}. $$ Due to truncation error, the latter may not hold (though it would hold, I suppose, if you were using a conservative approximation).

You can use a QR decomposition to deal with that. Suppose $M = QR$, where $Q$ is orthogonal and $R$ is upper triangular. The rank of $R$ equals the rank of $M$, which is one less than its number of rows, so $$ R = \begin{pmatrix} \star & \star & \dots & \star & \star\\ 0 & \star & \dots & \star & \star\\ 0 & 0 & \ddots & \star & \star\\ 0 & 0 & \dots & \star & \star\\ 0 & 0 & \dots & 0 & 0\\ \end{pmatrix} = \begin{pmatrix}R' & h\\0&0\end{pmatrix}. $$

Solving the system $Mx = f$ is equivalent to solving $$ Rx=Q^\top f \equiv \begin{pmatrix}z'\\\alpha\end{pmatrix}, \quad \alpha \in \mathbb R. $$ For the system to have a solution, the last element $\alpha$ of $Q^\top f$ must be zero. Even if it is not, let us fix it at zero. Let us also fix the last element of $x$ at zero, since it can be any number: $$ x = \begin{pmatrix}x'\\0\end{pmatrix}. $$ We are left with the nonsingular triangular system $$ R' x' = z', $$ which is easily solved by back substitution.
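A minimal dense sketch of this procedure (Python for illustration; the small 1D Neumann-style matrix is a hypothetical stand-in for $M$):

```python
import numpy as np

# Rank-deficient Neumann-style system (hypothetical small example).
n = 6
M = np.zeros((n, n))
for i in range(1, n - 1):
    M[i, i - 1:i + 2] = [1.0, -2.0, 1.0]
M[0, :3] = [-3.0, 4.0, -1.0]
M[-1, -3:] = [1.0, -4.0, 3.0]
f = M @ np.arange(n, dtype=float)    # consistent right-hand side

Q, R = np.linalg.qr(M)               # M = Q R, R upper triangular
z = Q.T @ f
alpha = z[-1]                        # ~0 for a consistent system

# Drop the (numerically) zero last row/column and back-substitute.
x_prime = np.linalg.solve(R[:-1, :-1], z[:-1])
x = np.append(x_prime, 0.0)          # fix the free last unknown to zero

print(np.allclose(M @ x, f))         # True: x solves the singular system
```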

Note that for the scheme you are using, the matrix $M$ is sparse, so you need a sparse version of the QR decomposition. Also, since $Q$ is generally dense even when $M$ is sparse, you should perform the QR factorization on the extended matrix $(M\,|\,f)$ to compute both $R = Q^\top M$ and $Q^\top f$ at the same time, without ever forming $Q$ explicitly.
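As far as I know, SciPy has no built-in sparse QR (the third-party `sparseqr` wrapper around SuiteSparseQR is one option); a pragmatic alternative, sketched below under that assumption, is an iterative least-squares solver such as `scipy.sparse.linalg.lsqr`, which copes with singular systems without any factorization of $M$.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

# Sparse version of a tiny singular Neumann-style matrix (hypothetical
# example, conservative/symmetric form so that M @ ones = 0).
n = 6
main = -2.0 * np.ones(n)
main[0] = main[-1] = -1.0
M = sp.diags([np.ones(n - 1), main, np.ones(n - 1)], [-1, 0, 1], format="csr")

f = M @ np.arange(n, dtype=float)    # consistent right-hand side

# lsqr is a Krylov least-squares method: for a singular but consistent
# system (started from zero) it converges to the minimum-norm solution.
u = lsqr(M, f, atol=1e-12, btol=1e-12)[0]
print(np.allclose(M @ u, f, atol=1e-6))
```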

Appendix

I've implemented this method in Matlab, along with the method of single-equation elimination. For consistent problems they give the same solution. For testing purposes I used the square domain $[-1,1]\times[-1,1]$ and the following problem: $$ \Delta u = 0,\\ \frac{\partial u}{\partial n} = g(x, y), \quad (x,y) \in \partial G. $$ This problem has a well-known solvability criterion: $$ 0 = \int_G \Delta u \,dx\, dy = \int_{\partial G} \frac{\partial u}{\partial n}\, d\Gamma = \int_{\partial G} g\, d\Gamma. $$ Due to numerical errors this can never be achieved exactly, so from a strictly mathematical point of view the discrete problem is always ill-posed. We therefore have to deal with a regularized problem, that is, to solve a problem with $$ \int_{\partial G} g\, d \Gamma \neq 0. $$ That is where the two methods differ: they eliminate the inconsistency in different ways.

Equation elimination

When we use equation elimination, that is, removing one governing equation and replacing it with, for example, $$ u_{i,j} = 0, $$ we state that all the inconsistency should be eliminated through the $(i,j)$ point. In fact, the real problem we are now solving is $$ \Delta u = \delta(x - x_i, y - y_i) \int_{\partial G} g\, d\Gamma,\\ \frac{\partial u}{\partial n} = g(x, y), \quad (x,y) \in \partial G, $$ so we put a source or a sink at $(x_i, y_i)$ to resolve the inconsistency. For the test problem I want $u(x,y) = \frac{x^2}{2} - \frac{y^2}{2}$ to be the solution, so $$ g(-1, y) = 1,\quad g(1, y) = 1,\\ g(x, -1) = -1, \quad g(x, 1) = -1. $$ To demonstrate the effect of inconsistency I take $g(x, -1) = 0.5$ instead.

(plot of the computed solution)

Note the pole-like spike in the middle: that is the point carrying the $u_{i,j} = 0$ equation. At every other point the function satisfies $\Delta u = 0$ and the boundary conditions.
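A 1D sketch of equation elimination (Python for illustration, on a hypothetical small Neumann-style matrix), showing the entire inconsistency being dumped onto the eliminated node:

```python
import numpy as np

# Hypothetical 1D analogue: Neumann-style FD matrix, with one interior
# equation replaced by u_k = 0 to make the system nonsingular.
n = 6
M = np.zeros((n, n))
for i in range(1, n - 1):
    M[i, i - 1:i + 2] = [1.0, -2.0, 1.0]
M[0, :3] = [-3.0, 4.0, -1.0]
M[-1, -3:] = [1.0, -4.0, 3.0]

f = M @ np.arange(n, dtype=float)
f[0] += 0.5                        # deliberately inconsistent boundary data

k = n // 2
A, b = M.copy(), f.copy()
A[k, :] = 0.0
A[k, k] = 1.0                      # equation k becomes u_k = 0
b[k] = 0.0
u = np.linalg.solve(A, b)

# Residual of the ORIGINAL singular system: zero everywhere except node k,
# where the whole inconsistency is concentrated (the "pole" in the plot).
r = M @ u - f
print(np.argmax(np.abs(r)))        # k = 3
```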

QR method

Again, let $$ Mu = f $$ be decomposed as $$ Mu \equiv QRP^\top u = f,\\ RP^\top u = Q^\top f. $$ I've added a permutation matrix $P$ to allow column reordering of the matrix $M$, which improves sparse QR performance a lot.

Let again $$ R = \begin{pmatrix} R' & h\\ 0 & 0 \end{pmatrix}, \quad Q^\top f = \begin{pmatrix} z'\\\alpha \end{pmatrix}. $$ We state that $$ x = \begin{pmatrix} R'^{-1} z'\\ 0 \end{pmatrix} $$ is a regularized solution to $$ Rx = Q^\top f \Leftrightarrow Mx = f, $$ since $Mx = f$ simply does not have an exact solution. But what is the real problem we are solving now?

Recall that $M$ is singular; we also know its left null vector. Let $\omega = \frac{1}{\sqrt{n}}(1, 1, \dots, 1)^\top$. Then $\omega^\top M$ is simply the (normalized) sum of the rows of $M$, and that is zero (actually, this is implementation-dependent, but the idea remains the same). So the last row of $R$ is nothing more than $\omega^\top M$, and $\alpha = \omega^\top f \sim \int_{\partial G} g\, d\Gamma$ (up to a constant factor).

If we plug the regularized solution $x$ into the system $Rx = Q^\top f$ we observe a residual: $$ R x \neq Q^\top f,\quad R x = Q^\top f + \begin{pmatrix}0\\-\alpha\end{pmatrix}, $$ and multiplying by $Q$ we get $$ Mx = f - \alpha \omega, $$ since $\omega$ is exactly the last column of $Q$. So now we are solving a problem where the inconsistency is smoothed over the whole domain $G$ and its border $\partial G$; roughly speaking, $$ \Delta u = -\frac{1}{|G|} \int_{\partial G} g\, d\Gamma. $$
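This can be checked numerically. A sketch (Python for illustration) using a hypothetical 1D conservative, symmetric Neumann Laplacian, for which the constant vector really is a left null vector:

```python
import numpy as np

# Conservative (finite-volume style) 1D Neumann Laplacian: symmetric, so the
# constant vector is both a left and a right null vector.
n = 6
M = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
M[0, 0] = M[-1, -1] = -1.0
omega = np.ones(n) / np.sqrt(n)
print(np.linalg.norm(omega @ M))     # ~ 0: omega is a left null vector

f = M @ np.arange(n, dtype=float)
f[0] += 0.3                          # inconsistent data, so alpha != 0

Q, R = np.linalg.qr(M)
z = Q.T @ f
alpha = z[-1]
x = np.append(np.linalg.solve(R[:-1, :-1], z[:-1]), 0.0)

# The regularized solution solves M x = f - alpha * q_n with q_n = ±omega:
# the inconsistency is spread uniformly over all equations.
print(np.allclose(M @ x - f, -alpha * Q[:, -1]))   # True
```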

This method applied to the same problem gives the following solution:

(plot of the computed solution)

It looks fine, but in fact the equation is violated (slightly) at every node, and the boundary conditions are also violated (also slightly).

Finally, there is not much difference between the two methods if you are solving a well-posed problem, such as one arising from $$ \nabla \phi = \mathbf E \Rightarrow \Delta \phi = \operatorname{div} \mathbf E. $$

You can find the Matlab code I used here.
