Polyharmonic interpolation with mixed boundary conditions, variational formulation

boundary value problem, greens function, interpolation, reproducing-kernel-hilbert-spaces, sobolev-spaces

I am reading up on polyharmonic interpolation, though I want to use a different domain and boundaries than what is used by Duchon and Meinguet (notably, the Green's functions do not even have a closed-form solution in the general setting). I have an engineering background, so I am missing quite a bit of rigour when it comes to Sobolev spaces, weak formulations and so on. I would therefore appreciate it if someone could clarify some of my concerns or point me towards relevant literature. I need some help regarding the spaces involved, the derivation of the weak formulation, and the conditions under which this is a well-posed problem.

Polyharmonic Interpolation Problem

Let $(\vec{x}_1,y_1),\ldots,(\vec{x}_N,y_N)$ be the data that we want to interpolate. Here $\mathcal{X} = \{\vec{x}_1,\ldots,\vec{x}_N\}\subset\mathcal{M}$ is the set of interpolation points, drawn from some manifold $\mathcal{M}$, and the $y_i$ are the interpolation values. That is, we want a function $u\in\mathcal{U}$ from some space $\mathcal{U}$ such that $u(\vec{x}_i) = y_i$. An often-used interpolation space for $\mathcal{M} = (a,b)\subset \mathbb{R}$ is the space of splines, i.e. piecewise polynomial functions of degree at most $p-1$ ($p\in\mathbb{N}_+$) that are in $C^{p-2}\bigl((a,b)\bigr)$. For odd-degree splines, $p=2m, \, m\in\mathbb{N}_+$, the problem may be rewritten equivalently as the boundary value problem:
\begin{alignat}{3}
(-1)^m\left(\frac{d}{dx}\right)^{2m} u(x) = \left(-\frac{d^2}{dx^2}\right)^{m} u(x) & = 0, &\quad& x\in (a,b)\setminus\mathcal{X}, \\
u(x_i) &= y_i, &\quad& 1\leq i \leq N.
\end{alignat}
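For example, in the cubic case $m = 2$ (so $p = 4$) this structure is easy to check numerically with SciPy's `CubicSpline`, using the natural boundary condition $u''(a) = u''(b) = 0$ (one of the standard endpoint conditions mentioned below); the data here are made up:

```python
# Minimal sketch (made-up data): the natural cubic spline (m = 2) satisfies
# u'''' = 0 between the knots and u'' = 0 at the two endpoints.
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 0.4, 1.1, 1.9, 3.0])       # interpolation points X
y = np.array([1.0, -0.5, 0.3, 2.0, 0.7])      # interpolation values y_i

u = CubicSpline(x, y, bc_type='natural')      # natural: u''(a) = u''(b) = 0

print(np.allclose(u(x), y))                   # interpolation conditions hold
print(u(x[0], 2), u(x[-1], 2))                # second derivatives ~ 0 at a and b
print(u(0.7, 3), u(0.75, 3))                  # u''' is constant between knots,
                                              # so u'''' = 0 away from X
```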

Generally one has to specify an additional $m$ boundary conditions at each of the boundaries $a$ and $b$ in order to get a unique solution. Standard boundary conditions for the spline interpolation problem include: natural, Hermite, and periodic, though any boundary conditions that produce a unique solution are viable. A generalisation of $\frac{d^2}{dx^2}$ to higher dimensions is the Laplacian $\Delta = \sum_{j=1}^d \partial_{jj}$. Then we may use this to generalise odd-degree splines to functions that are $m$-harmonic on $\mathcal{M}\setminus\mathcal{X}$:
\begin{alignat}{3}
\left(-\Delta\right)^{m} u(\vec{x}) & = 0, &\quad& \vec{x}\in \mathcal{M}\setminus\mathcal{X}, \\
u(\vec{x}_i) &= y_i, &\quad& 1\leq i \leq N,
\end{alignat}

and again we need to specify $m$ boundary conditions on the boundary $\partial\mathcal{M}$. In order to better understand what boundary conditions may be suitable, we can try to study the variational formulations of the problem.
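As a point of reference (and only loosely related to the bounded-$\mathcal{M}$ problem I am asking about), the unbounded Duchon/Meinguet setting on $\mathbb{R}^d$ is available in SciPy through radial basis functions; a small sketch with made-up 2D data, using the thin-plate kernel ($m = 2$, $d = 2$):

```python
# Sketch of 2D polyharmonic (thin-plate spline, m = 2, d = 2) interpolation in
# the free-space Duchon/Meinguet setting on R^2 -- not the bounded-domain BVP
# discussed here; the data are made up.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(20, 2))              # interpolation points X in R^2
vals = np.sin(2 * np.pi * pts[:, 0]) * pts[:, 1]   # interpolation values y_i

# kernel r^2 log r; SciPy also appends a low-degree polynomial term
# (cf. the Beppo-Levi remark near the end of the question)
u = RBFInterpolator(pts, vals, kernel='thin_plate_spline')

print(np.allclose(u(pts), vals))                   # interpolation conditions hold
print(u(np.array([[0.5, 0.5]])))                   # evaluate at a new point
```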

Variational Formulation for $u,v\in H^{2m}(\mathcal{M}\setminus\mathcal{X})$

The simplest variational formulation considers $u\in H^{2m}(\mathcal{M}\setminus\mathcal{X})$; then we may simply formulate the bilinear form and the corresponding energy as follows:
\begin{equation}
B(v,u) = \int_{\mathcal{M}\setminus\mathcal{X}} v\,(-\Delta)^m u,\quad E(u) = B(u,u) = \int_{\mathcal{M}\setminus\mathcal{X}} u\,(-\Delta)^m u.
\end{equation}

To get the stationary points we must solve the equation $\frac{\delta E}{\delta u} = 0$ where $\frac{\delta E}{\delta u}$ is the functional derivative of $E$. To find the functional derivative we can use its relationship to the Gateaux derivative $\frac{dE}{dv}$
\begin{equation}
\int_{\mathcal{M}\setminus\mathcal{X}}v\,\frac{\delta E}{\delta u} = \frac{dE}{dv}(u) = \lim_{\epsilon\to 0}\frac{E(u+\epsilon v)-E(u)}{\epsilon}.
\end{equation}

Taking the Gateaux derivative yields
\begin{align}
\frac{dE}{dv}(u) &= \lim_{\epsilon\to 0}\frac{E(u+\epsilon v)-E(u)}{\epsilon} \\
&= \lim_{\epsilon\to 0}\frac{E(u)+\epsilon B(u,v) + \epsilon B(v,u)+\epsilon^2 E(v)-E(u)}{\epsilon} \\
&= B(u,v)+B(v,u) \\
&= \int_{\mathcal{M}\setminus\mathcal{X}}u\,(-\Delta)^m v + \int_{\mathcal{M}\setminus\mathcal{X}}v\,(-\Delta)^m u.
\end{align}

The $B(v,u)$ term is already in the form that we want for the functional derivative. It remains to reformulate $B(u,v)$ so that $v$ appears as a plain factor, i.e. so that $(-\Delta)^m$ acts on $u$ instead of $v$. Let $\Gamma = \partial\mathcal{M}\cup\mathcal{X}$. I do so by applying the divergence theorem $2m$ times, as follows:
\begin{align}
B(u,v) &= \int_{\mathcal{M}\setminus\mathcal{X}} u \, (-\Delta)^m v\\
&= (-1)^m\int_{\mathcal{M}\setminus\mathcal{X}} \nabla \cdot\bigl(u \, (\nabla\Delta^{m-1} v)\bigr) - \int_{\mathcal{M}\setminus\mathcal{X}}\nabla u \cdot \nabla\bigl((-1)^m\Delta^{m-1} v\bigr) \\
&= (-1)^m\int_{\Gamma}u \, (\partial_{\vec{n}}\Delta^{m-1} v)-\int_{\mathcal{M}\setminus\mathcal{X}}\nabla \cdot \biggl( \nabla u\, \bigl((-1)^m\Delta^{m-1} v\bigr)\biggr) + \int_{\mathcal{M}\setminus \mathcal{X}} (-\Delta) u\, (-\Delta)^{m-1} v \\
&= -\int_{\Gamma}u \, \bigl(\partial_{\vec{n}}(-\Delta)^{m-1} v\bigr) + \int_{\Gamma}(\partial_{\vec{n}}u) \, (-\Delta)^{m-1} v + \int_{\mathcal{M}\setminus \mathcal{X}} (-\Delta) u\, (-\Delta)^{m-1} v \\
&= \quad \textrm{apply the above procedure another $m-1$ times} \\
&= \sum_{l=0}^{m-1}\int_{\Gamma}(\partial_{\vec{n}}(-\Delta)^lu) \, (-\Delta)^{m-1-l} v - \sum_{l=0}^{m-1}\int_{\Gamma}(-\Delta)^lu \, \bigl(\partial_{\vec{n}}(-\Delta)^{m-1-l} v\bigr) + \int_{\mathcal{M}\setminus \mathcal{X}} (-\Delta)^m u\, v \\
&= \sum_{l=0}^{m-1}\int_{\Gamma}(\partial_{\vec{n}}(-\Delta)^lu) \, (-\Delta)^{m-1-l} v - \sum_{l=0}^{m-1}\int_{\Gamma}(-\Delta)^lu \, \bigl(\partial_{\vec{n}}(-\Delta)^{m-1-l} v\bigr) + B(v,u).
\end{align}

Thus for the functional derivative we have
\begin{align}
\int_{\mathcal{M}\setminus \mathcal{X}} v\, \frac{\delta E}{\delta u} &= \frac{dE}{dv}(u) = B(v,u) + B(u,v) \\
&=\sum_{l=0}^{m-1}\int_{\Gamma}(\partial_{\vec{n}}(-\Delta)^lu) \, (-\Delta)^{m-1-l} v - \sum_{l=0}^{m-1}\int_{\Gamma}(-\Delta)^lu \, \bigl(\partial_{\vec{n}}(-\Delta)^{m-1-l} v\bigr) + 2\int_{\mathcal{M}\setminus \mathcal{X}} (-\Delta)^m u\, v.
\end{align}
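As a sanity check of this identity alone (not of any of the function-space issues), here is a small SymPy sketch verifying the one-dimensional analogue for $m = 2$ on an interval $(a,b)$, where $\Gamma = \{a,b\}$, the boundary 'integral' is a sum over the two endpoints, and $\partial_{\vec{n}}$ is $+\frac{d}{dx}$ at $b$ and $-\frac{d}{dx}$ at $a$; $u$ and $v$ are generic polynomials standing in for sufficiently smooth functions:

```python
# Symbolic check (1D, m = 2) of the repeated integration-by-parts identity:
# B(u,v) = boundary terms + B(v,u).
import sympy as sp

x, a, b = sp.symbols('x a b')
m = 2
cu, cv = sp.symbols('c0:6'), sp.symbols('d0:6')
u = sum(c * x**k for k, c in enumerate(cu))         # generic degree-5 polynomial
v = sum(c * x**k for k, c in enumerate(cv))

lap = lambda f, l: (-1)**l * sp.diff(f, x, 2 * l)   # (-Delta)^l f in 1D
dn = lambda f, pt: sp.diff(f, x).subs(x, pt) * (1 if pt == b else -1)  # outward normal derivative

# boundary terms from the last line of the derivation (Gamma = {a, b} here)
BT = 0
for l in range(m):
    for pt in (a, b):
        BT += dn(lap(u, l), pt) * lap(v, m - 1 - l).subs(x, pt)
        BT -= lap(u, l).subs(x, pt) * dn(lap(v, m - 1 - l), pt)

Buv = sp.integrate(u * lap(v, m), (x, a, b))        # B(u, v)
Bvu = sp.integrate(lap(u, m) * v, (x, a, b))        # B(v, u)
print(sp.simplify(Buv - (BT + Bvu)))                # prints 0
```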

In order to get the desired form we may require that the boundary integrals vanish. For the moment we ignore the part of the boundary given by $\mathcal{X}$, as we will treat it later (just assume that the integrals over $\mathcal{X}$ vanish for now). For the remaining part of the boundary, $\partial\mathcal{M}$, we may require that the integrals cancel out, e.g. periodic boundary conditions on a rectangular $\mathcal{M}$ (which should produce results similar to a toroidal manifold with no boundary?). A simpler choice would be to require that either $(-\Delta)^lu = 0$ or $\partial_{\vec{n}}\bigl((-\Delta)^{m-1-l}v\bigr) = 0$ on $\partial\mathcal{M}$ for $0\leq l \leq m-1$, and additionally that either $\partial_{\vec{n}}\bigl((-\Delta)^lu\bigr) = 0$ or $(-\Delta)^{m-1-l}v = 0$ on $\partial\mathcal{M}$ for $0\leq l \leq m-1$. According to Gazzola et al. (page 36, 2.23), however, some of those constraints should not be imposed simultaneously, e.g. $\partial_{\vec{n}}(-\Delta) u = 0$ and $(-\Delta) u = 0$ on $\partial\mathcal{M}$. Either way, for a unique solution I would assume that $m$ of those constraints should be chosen to be zero for $u$, which would automatically determine the $m$ constraints where $v$ must be set to zero.

What bothers me here is that I am not sure that the spaces for $u$ and $v$ are the same, as I believe it may happen that the boundary conditions for $u$ and $v$ differ (or maybe this is impossible if the complementary conditions mentioned in Gazzola et al.'s book hold?). The only thing that different spaces remind me of is the Petrov-Galerkin method and the Babuška–Lax–Milgram theorem. Is the latter required instead of Lax-Milgram in this specific case?

Variational formulation for $v\in H^m(\mathcal{M}\setminus\mathcal{X})$

We may require that $v$ only belongs to $H^m(\mathcal{M}\setminus\mathcal{X})$ instead of $H^{2m}(\mathcal{M}\setminus\mathcal{X})$. We then distinguish two cases: $m$ even, $m=2q, \, q\in\mathbb{N}_+$, and $m$ odd, $m=2q+1, \, q\in\mathbb{N}_0$. The corresponding bilinear form is given by
\begin{align}
B(v,u) = \begin{cases}
\int_{\mathcal{M}\setminus\mathcal{X}} \Delta^{q} v\, \Delta^{q} u, &\text{for } m = 2q, \, q\in\mathbb{N}_+, \\
\int_{\mathcal{M}\setminus\mathcal{X}} \nabla (\Delta^{q} v) \cdot \nabla (\Delta^{q} u), &\text{for } m = 2q+1, \, q\in\mathbb{N}_0.
\end{cases}
\end{align}

To get the Euler-Lagrange equations we proceed in the same manner as in the $H^{2m}$ formulation; however, this time only derivatives up to order $2m$ instead of $4m$ will be involved for $v$. I am still bothered by the fact that higher-order derivatives of $u$ will be required in the Euler-Lagrange form, but I am guessing that this is to be expected, since the BVP is not the weak formulation.
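For example, in the simplest case $m = 1$ (the odd case with $q = 0$) the bilinear form is the Dirichlet form $B(v,u) = \int_{\mathcal{M}\setminus\mathcal{X}} \nabla v \cdot \nabla u$, and a single application of the divergence theorem gives
\begin{equation}
\int_{\mathcal{M}\setminus\mathcal{X}} \nabla v \cdot \nabla u = \int_{\Gamma} v\,\partial_{\vec{n}} u - \int_{\mathcal{M}\setminus\mathcal{X}} v\,\Delta u,
\end{equation}
where the contribution of $\mathcal{X}\subset\Gamma$ drops out formally because $v(\vec{x}_i) = 0$. Requiring this to vanish for all admissible $v$ yields $-\Delta u = 0$ on $\mathcal{M}\setminus\mathcal{X}$ together with the natural condition $\partial_{\vec{n}} u = 0$ on the part of $\partial\mathcal{M}$ where $v$ is left unconstrained; only first derivatives of $v$ enter the weak form, while second derivatives of $u$ appear in the Euler-Lagrange equation, which is the pattern described above.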

Vanishing Boundary Terms on $\mathcal{X}$

Regarding the boundary terms over the points, $v(\vec{x}_i) = 0$ since I prescribe Dirichlet constraints at those. I am not so sure what to do about the remaining terms though, since that would imply also specifying partial derivatives at the points from $\mathcal{X}$.

I assume that instead of doing this I can leverage the fact that $\mathcal{X}$ is a set of isolated points. If $\mathcal{M}$ is more than one-dimensional, one could argue that each point is a set of measure zero with respect to the integration measure, so as long as the integrand does not have a singularity there the integral is zero. Another option I thought of (which should supposedly work also in one dimension) was to think of the integral over a point as an integral over a (hyper)sphere whose radius goes to zero. Then if a function $g$ is sufficiently many times continuously differentiable at a point $\vec{x}_i$, I would have that the directional derivatives cancel out with their opposites:

$$-\partial_{-\vec{n}}g(\vec{x}_i) = -\lim_{\epsilon\to 0}\frac{g(\vec{x}_i+\epsilon \cdot (-\vec{n}))-g(\vec{x}_i)}{\epsilon} = \lim_{\epsilon\to 0}\frac{g(\vec{x}_i+\epsilon\cdot\vec{n})-g(\vec{x}_i)}{\epsilon}= \partial_{\vec{n}}g(\vec{x}_i).$$

Does this make sense?

Reproducing Kernel Hilbert Spaces and Singular Green's Functions

I am not sure that the above arguments are sufficient for the boundary integrals over $\mathcal{X}$ to be zero: I suppose I would need the integrands to not have too strong a singularity, which could make the integrals non-zero even though I am integrating over a set of measure zero. I have read, for instance, that if $2q>d/2$ then under some assumptions on $\mathcal{M}$ the space $H^{2q}(\mathcal{M})$ is a reproducing kernel Hilbert space (RKHS), so I would assume that the interpolation problem is well-posed in that case. As a counterexample, let $\mathcal{M}=\mathbb{R}^2$ and $q=\frac{1}{2}$ (i.e. $2q\leq d/2$); then the Green's function $G(\vec{x},\vec{y})$ of the Laplacian $\Delta$ has a logarithmic singularity, $G(\vec{x},\vec{y}) = \frac{1}{2\pi}\ln\|\vec{x}-\vec{y}\|_2$, so I would expect the interpolation problem to be ill-posed in this case. Does this singularity spoil the term that is supposed to vanish, i.e. $\int_{\mathcal{X}}v \, \partial_{\vec{n}} u$ (here $v(\vec{x}_i)=0$)? Or would something else in Lax-Milgram break? I would expect that the solution simply does not exist in the counterexample, since the interpolation matrix has infinite entries on the diagonal. However, I have no idea what specifically breaks from the BVP and weak-formulation perspective (i.e. without relying on the closed-form Green's function to argue about the interpolation matrix). I should also note that even in the RKHS case ($2q > d/2$), on $\mathbb{R}^d$ one needs to consider Beppo-Levi spaces for a unique solution (as solutions are defined only up to multivariate polynomials).
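To make the last point concrete, here is a tiny NumPy sketch (with made-up points) comparing the kernel matrices built from the $d=2$, $m=1$ fundamental solution $\frac{1}{2\pi}\ln r$, which is $-\infty$ on the diagonal, and the $d=2$, $m=2$ thin-plate kernel $r^2\ln r$, which extends continuously by $0$ there:

```python
# The d = 2 "interpolation matrix" built from the Laplacian's fundamental
# solution (1/2pi) ln r blows up on the diagonal, while the biharmonic /
# thin-plate kernel r^2 ln r extends continuously by 0 there.  Points are
# made up; this only illustrates the singularity, not well-posedness.
import numpy as np

pts = np.array([[0.0, 0.0], [0.3, 0.1], [0.7, 0.9], [0.2, 0.6]])
r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

with np.errstate(divide='ignore', invalid='ignore'):
    K_harmonic = np.log(r) / (2 * np.pi)                    # m = 1: -inf on the diagonal
    K_thinplate = np.where(r > 0, r**2 * np.log(r), 0.0)    # m = 2: 0 on the diagonal

print(np.diag(K_harmonic))     # [-inf, -inf, -inf, -inf]
print(np.diag(K_thinplate))    # [0., 0., 0., 0.]
```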

Edit: So that there is no confusion regarding ktoi's answer I am noting that I changed $\mathcal{M}$ to $\mathcal{M}\setminus\mathcal{X}$. I changed it because I meant to have $u$ not being $2q$-harmonic on $\mathcal{X}$ as ktoi correctly pointed out.

Edit 2: I made a major rewrite to make this clearer and more general.

Best Answer

This is a pretty loaded question on a topic that I'm not familiar with (at least this specific problem), but hopefully the below should give a clearer picture of what's going on here. You may want to consult a graduate text on Sobolev spaces and elliptic BVPs to aid with following the below discussion, such as the relevant chapters of Evans.

Point Dirichlet boundary conditions

Since we prescribe $u$ on a discrete set $\mathcal X$, we impose Dirichlet boundary conditions at each $x_i \in \mathcal X$. However, Sobolev functions are only defined up to a negligible set, so it is a priori not clear how to impose this requirement. You mention the requirement $2q > d/2$ however, in which case Sobolev embedding ensures that $H^{2q}(\mathcal M)$ embeds into $C(\mathcal M)$. That is, every $v \in H^{2q}(\mathcal M)$ admits a continuous representative, which allows one to talk about pointwise values of $v$.

As a non-specialist I don't know how you would treat the case $2q \leq d/2$. At least using the usual approach via a weak formulation would not work, because your space of solutions is too rough to encode the point boundary conditions, so an alternative approach would be necessary. Perhaps this also leads to ill-posedness, and this motivates considering this problem with arbitrary/large $q$.

Hence I will assume $2q > d/2$ in what follows, to circumvent this difficulty. We can then seek solutions $u \in H^{2q}(\mathcal M)$ with prescribed values $u(x_i) = y_i$ for each $x_i \in \mathcal X$.

Weak formulation, first attempt

Your approach to the weak formulation is largely correct; here if $u$ and $v$ are sufficiently regular (it suffices to assume they lie in $H^{4q}(\mathcal M)$) then we get \begin{align} \tag{$\dagger$} \int_{\mathcal{M}}v\,(-\Delta)^{2q}u =\sum_{l=0}^{q-1}\int_{\partial\mathcal M}\Delta^lv\,\partial_{\vec{n}}(\Delta^{2q-1-l} u)-\sum_{l=0}^{q-1}\int_{\partial\mathcal M}\partial_{\vec{n}}(\Delta^l v)\,(\Delta^{2q-1-l}u) + \int_{\mathcal{M}}\Delta^q v \, \Delta^q u, \end{align} which is exactly as you derive, except that the boundary integrals are only over $\partial\mathcal M$. This is because this holds for any $u,v \in H^{4q}(\mathcal M)$, and we are not considering $\mathcal X$ yet.

Now if $u$ satisfies the BVP, then the left-hand side and the boundary integrals vanish to give $$ \int_{\mathcal M} \Delta^q v \Delta^q u = 0 \quad \forall\, v \in H^{2q}(\mathcal M). \tag{$\mathrm{WF}_1$} $$ Hence a weak solution $u$ is an element $u \in H^{2q}(\mathcal M)$ such that $u(x_i) = y_i$ for all $x_i \in \mathcal X$ and $(\mathrm{WF}_1)$ holds.

Edited to add: If we have $(-\Delta)^{2q}u = 0$ in $\mathcal M \setminus \mathcal X$, then the term on the left-hand side still vanishes provided $v$ vanishes on $\mathcal X$, since the product $v \, (-\Delta)^{2q}u$ still vanishes everywhere on $\mathcal M$. However since $(\dagger)$ holds for general $u,v \in H^{4q}(\mathcal M)$ without any additional requirements, we do not expect to obtain boundary terms supported on $\mathcal X$.

However as we will see below, this formulation is not so natural, and we will later see that this is not well-posed in general. Hence I will consider a slightly different equation, which is what I think you want to consider.

Weak formulation, second attempt

Instead it's useful to use the variational formulation of this problem; my guess is that one is really interested in the minimisation problem $$ \min \left\{ \int_{\mathcal M} \lvert \Delta^{q}v \rvert^2 \ \colon\ v \in H^{2q}(\mathcal M), v(x_i) = y_i \ \forall 1 \leq i \leq N \right\}.$$ We can try to derive the associated Euler-Lagrange system. Here the set of admissible variations becomes $$ \mathcal A_{2q,\mathcal X} = \left\{ v \in H^{2q}(\mathcal M) \ \colon \ v(x_i) = 0 \ \forall 1 \leq i \leq N\right\}. $$ Then if $u$ is a minimiser, we can consider variations $u + tv$ with $v \in \mathcal A_{2q,\mathcal X}$ and differentiate the functional in $t$. This leads to the weak formulation $$ \int_{\mathcal M} \Delta^qv \Delta^qu = 0 \quad \forall\, v \in \mathcal A_{2q,\mathcal X}. \tag{$\mathrm{WF}_2$}$$ One can verify that this implies that $u$ solves $(-\Delta)^{2q}u = 0$ in $\mathcal M \setminus \mathcal X$, along with the natural boundary conditions on $\partial\mathcal M$. I believe this is really the weak formulation that you are after, as it does not make sense for $u$ to be polyharmonic at the points of $\mathcal X$.
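If it helps, here is a crude finite-difference sketch of this constrained minimisation in the simplest setting ($\mathcal M = (0,1)$, $q = 1$, interpolation points placed on grid nodes): the energy $\int_0^1 |u''|^2$ is minimised subject to the point constraints via a KKT system, and the natural conditions at the endpoints appear on their own because nothing is imposed there. All discretisation choices are purely illustrative.

```python
# Crude 1D finite-difference sketch of  min { int |u''|^2 : u(x_i) = y_i }
# (q = 1, M = (0,1)).  Grid, data and the dense KKT solve are illustrative,
# not a recommended discretisation.
import numpy as np

n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

idx = np.array([20, 70, 120, 180])            # constrained grid indices (the x_i)
yi = np.array([0.0, 1.0, -0.5, 0.3])          # prescribed values y_i

# second-difference operator: (D2 @ u)[j] ~ u''(x_{j+1})
e = np.ones(n)
D2 = (np.diag(e[:-1], 1) - 2.0 * np.diag(e) + np.diag(e[:-1], -1))[1:-1] / h**2
A = h * D2.T @ D2                             # u @ A @ u  ~  int_0^1 (u'')^2 dx

C = np.zeros((len(idx), n))
C[np.arange(len(idx)), idx] = 1.0             # C @ u = yi  encodes u(x_i) = y_i

# KKT system of the equality-constrained quadratic programme
K = np.block([[A, C.T], [C, np.zeros((len(idx), len(idx)))]])
rhs = np.concatenate([np.zeros(n), yi])
u = np.linalg.solve(K, rhs)[:n]

print(u[idx])                                 # ~ yi: interpolation conditions hold
print(D2[0] @ u, D2[-1] @ u)                  # u'' ~ 0 near both ends (natural BCs)
```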

As a side-note, deriving the boundary conditions on $\partial\mathcal M$ is fairly standard; assuming $u,v$ are sufficiently regular, one first tests the weak form against $v \in H^{2q}_0(\mathcal M) \cap \mathcal A_{2q,\mathcal X}$, that is, all $v$ which vanish up to order $2q-1$ on the boundary. In this case we are free to integrate by parts to show that $(-\Delta)^{2q}u = 0$, however since $v$ vanishes on $\mathcal X$ we only know this on $\mathcal M \setminus \mathcal X$. Then testing against a general admissible function $v \in \mathcal A_{2q,\mathcal X}$ and using $(\dagger)$, since the boundary terms vanish for all $v$, the conditions on $u$ follow.

Edited to add more details: To derive the equation on $\mathcal M \setminus \mathcal X$ we can consider variations $v \in C^{\infty}_c(\mathcal M \setminus \mathcal X)$, so that $v$ vanishes in a neighbourhood of $\partial\mathcal M \cup \mathcal X$. Here we choose it to vanish near $\partial\mathcal M$ so the boundary terms do not appear when integrating by parts, and also near $\mathcal X$ so it lies in our admissible set. Then we can verify that $(-\Delta)^{2q}u = 0$ in a distributional sense on $\mathcal M \setminus \mathcal X$. Somewhat informally we are thinking about the term $$ \int_{\mathcal M} v (-\Delta)^{2q}u \,\mathrm{d} x = 0 $$ (this is informal as a priori we only have $u \in H^{2q}(\mathcal M)$). We can choose $v$ arbitrarily on $\mathcal M \setminus \mathcal X$ so $(-\Delta)^{2q}u$ must vanish there, however on $\mathcal X$ we can't say anything since $v$ always vanishes there.

Well-posedness

Edited to add: My variational formulation was originally incorrect, and I assumed the space of zero solutions was just the polynomials. However, since the minimisers are functions satisfying $\Delta^qu = 0$, it seems this space is much larger than I thought. Hence the below does not appear to be correct.

I will now establish an existence result for the second formulation $(\mathrm{WF}_2)$. If we can also show uniqueness, this would imply $(\mathrm{WF}_1)$ is ill-posed since we are imposing additional conditions on top.

First observe that the conditions on $\partial\mathcal M$ are not enough to uniquely determine the solution $u$; indeed any polynomial $p$ of degree at most $2q-1$ satisfies the BVP when $\mathcal X = \emptyset$. This is an artifact of the Neumann-type boundary conditions, and can be rectified by working in the space $$ \tilde{\mathcal{A}}_{2q,\mathcal X} = \mathcal A_{2q,\mathcal X} / \mathcal P_{2q-1}. $$

Here $\mathcal P_{2q-1}$ is the space of polynomials of degree at most $2q-1$, and we consider equivalence classes $\{ v + p : p \in \mathcal P_{2q-1}\}$ associated to each $v \in \tilde{\mathcal A}_{2q,\mathcal X}$.

I'll also mention here that since we're on an arbitrary manifold $\mathcal M$ it's not clear what a polynomial would be, but this depends on how you define $\nabla$ and $\Delta$. One would replace $\mathcal P_{2q-1}$ by the space of functions solving the null problem when $\mathcal X = \emptyset$, that is, functions satisfying $\nabla^{2q}p = 0$ in $\mathcal M$.

Now we can apply Lax-Milgram in this quotient space. However, since we have an inhomogeneous boundary condition, we transform the problem into one with a forcing term and homogeneous boundary conditions. That is, pick an element $u_0 \in H^{2q}(\mathcal M)$ satisfying $u_0(x_i) = y_i$ for all $x_i \in \mathcal X$, and solve for $w = u - u_0 \in \mathcal A_{2q,\mathcal X}$ satisfying the weak formulation $$ \int_{\mathcal M} \Delta^qv \Delta^q w = - \int_{\mathcal M} \Delta^qv \Delta^q u_0 $$ for all $v \in \mathcal A_{2q,\mathcal X}$. Applying Lax-Milgram in $\tilde{\mathcal A}_{2q,\mathcal X}$ gives an equivalence class of solutions. Picking out an element $w$ from this equivalence class that lies in $\mathcal A_{2q,\mathcal X}$, we obtain the desired solution $u = w + u_0$.

As a side-note, one can also find a minimiser of the variational problem (using the Direct method), but once again one needs to quotient out the symmetries. Here $\Delta^q(v+p) = \Delta^q v$ for all $p \in \mathcal P_{2q-1}$.

We're not quite done yet however, as we know that every $u + p$ satisfies the BVP except possibly for the prescribed values on $\mathcal X$. This is satisfied if and only if $p(x_i) = 0$ for all $x_i \in \mathcal X$. Generically, we expect that the only polynomial $p \in \mathcal P_{2q-1}$ that vanishes on $\mathcal X$ is the zero polynomial, in which case we do have uniqueness. In $\mathbb R^d$ this does hold if $N > 2q-1$, however it is certainly not so if $\mathcal X = \emptyset$. We only have uniqueness of the solution if this holds, however; otherwise we need to account for a possible family of solutions.

Since the equation is linear, well-posedness (depending on how you define it) should follow from the existence + uniqueness of solutions. Thus we see that $(\mathrm{WF}_2)$ is generically well-posed, in which case $(\mathrm{WF}_1)$ will be ill-posed since we're requiring additional conditions on top.
