[Math] Thin Plate Spline interpolation of scattered $z(x,y)$ data

functional-analysis, interpolation, linear-algebra

I am trying to understand Thin Plate Spline interpolation of scattered data.
As I understand it, TPS is just a special case of Radial Basis Function (RBF) interpolation:

$$ z(x,y) = p(x,y) + \sum_i l_i\,\phi(r_i)$$
where $p(x,y)$ is a polynomial, $\phi$ is an RBF, and $r_i$ is the distance from $(x,y)$ to the $i$-th data point $(x_i,y_i)$.

In the case of TPS: $\phi(r) = r^2\ln(r)$

Should the RBF not decrease with $r$?
In the case of inverse-distance-weighted interpolation, the weight of each point goes from $+\infty$ at $r = 0$ to $0$ as $r \to +\infty$.

Which order of polynomial should I use?
I understand that $p(x,y)$ should compensate for an offset and maybe a linear trend,
which would indicate $p(x,y) = ax+by+c$. But why stop there?

The TPS is required to pass exactly through each point:
$$ z_i = p(x_i,y_i) + \sum_j l_j\,\phi(r_{ij}) $$
where $r_{ij}$ is the distance between data points $i$ and $j$.

This can be written as a matrix equation:
$$ [\Phi, X, Y, \mathbf{1}]\,[L, a, b, c]'=Z$$
where $\Phi$ is an $n\times n$ matrix and $X$, $Y$, $L$ and $Z$ are $n\times 1$ vectors.

This is underdetermined: there are $n + 3$ unknowns but only $n$ equations.
Therefore one has to impose some additional requirements.
One requires:
$$ \sum_i x_i*l_i = 0$$
$$ \sum_i y_i*l_i = 0$$
$$ \sum_i 1*l_i = 0$$
and calls this orthogonality. But what does this mean?
I am used to orthogonal function expansions, where $\langle x, b(x,y)\rangle = 0$
for some basis function $b$ means that $x$ and $b(x,y)$ are orthogonal. That is a good thing, since it makes it easy to find the coefficient of $b$ by simply taking the inner product of $b$ with the function to be expanded. But what does orthogonality mean in this context? And why is it the coefficients of the RBFs that appear in the orthogonality requirement, and not the RBFs themselves?

However, with these three extra conditions we now have $n+3$ equations for the $n+3$ unknowns.
Introducing $P=[X,Y,\mathbf{1}]$ and $C = [a,b,c]'$, our first matrix equation can be written as:
$$ [\Phi \mid P]\,[L \mid C]' = Z $$
and the orthogonality requirements become a vertical concatenation to this:
$$ [P' \mid 0]\,[L \mid C]' = 0 $$
Together these form the square $(n+3)\times(n+3)$ block system
$$ \begin{bmatrix}\Phi & P\\ P' & 0\end{bmatrix}\begin{bmatrix}L\\ C\end{bmatrix} = \begin{bmatrix}Z\\ 0\end{bmatrix}. $$
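The block system above can be assembled and solved directly with a dense solver. A minimal sketch in NumPy (the function names `tps_fit`/`tps_eval` are my own, and I use the identity $r^2\ln r = \tfrac12 r^2\ln r^2$ to avoid evaluating $\ln$ at $r=0$):

```python
import numpy as np

def _tps_kernel(r2):
    """phi(r) = r^2 ln(r) = 0.5 * r^2 * ln(r^2), with the limit 0 at r = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r2 > 0, 0.5 * r2 * np.log(r2), 0.0)

def tps_fit(x, y, z):
    """Solve the (n+3)x(n+3) block system [[Phi, P], [P', 0]] [L; C] = [Z; 0]."""
    n = len(x)
    r2 = (x[:, None] - x[None, :])**2 + (y[:, None] - y[None, :])**2
    Phi = _tps_kernel(r2)
    P = np.column_stack([x, y, np.ones(n)])        # P = [X, Y, 1]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = Phi
    A[:n, n:] = P
    A[n:, :n] = P.T                                 # orthogonality rows P' L = 0
    rhs = np.concatenate([z, np.zeros(3)])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:]                         # L (RBF weights), C = (a, b, c)

def tps_eval(x, y, L, C, xs, ys):
    """Evaluate the fitted spline at the points (xs, ys)."""
    r2 = (xs[:, None] - x[None, :])**2 + (ys[:, None] - y[None, :])**2
    return _tps_kernel(r2) @ L + C[0] * xs + C[1] * ys + C[2]
```

The matrix is symmetric but indefinite, so a general solver (or a symmetric-indefinite factorization) is appropriate; the spline reproduces the data values exactly at the sample points.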

Which method would you recommend to solve this equation?

Which routines would this correspond to in LAPACK?

If the number of samples, $n$, is large this computation becomes heavy.
Is there some easy and sensible way to improve upon it?
I was considering introducing a cut-off radius; any points further away than this would get the value $\phi = 0$. This would lead to a sparse matrix equation. However, this stumbles on my first question: why does the RBF not decay to $0$ as $r$ goes to infinity?

Best Answer

This answer will be by no means complete, but I can answer a few things.


Should not the RBF decrease with r? In the case of inverse distance weighted interpolation the weight of each point goes from +infinity at r = 0 to 0 at r = + infinity.

No, it does not have to. Let's have a look at RBF interpolation in 1D. Take $\phi(x) = |x|$ as your RBF and you get standard piecewise-linear interpolation.

You can define a Lagrange (cardinal) interpolation basis as $L_i(x) = \sum_j l_{ij}\, \phi(x-x_j)$ satisfying $L_i(x_j) = \delta_{ij}$. Then the functions $L_i(x)$ behave as you would expect: they are sort of local, i.e. they are almost zero except in a neighborhood of $x_i$.

Example for $\phi(x)=|x|$ and points $x_0=0$, $x_1=1$, $x_2=2$. Then:

$$L_0(x) = -0.25|x|+0.5|x-1| +0.25|x-2|$$
$$L_1(x) = 0.5|x|- |x-1|+0.5|x-2| $$
$$L_2(x) = +0.25|x|+0.5|x-1| -0.25|x-2|$$
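These weights can be checked numerically: the coefficients of each cardinal function $L_i$ solve $\Phi\, l = e_i$, so they are the columns of $\Phi^{-1}$. A small sketch (this toy example omits the polynomial part, since the pure-$|x|$ system for three distinct points is already solvable on its own):

```python
import numpy as np

pts = np.array([0.0, 1.0, 2.0])
Phi = np.abs(pts[:, None] - pts[None, :])   # phi(x) = |x|, so Phi_ij = |x_i - x_j|

# Column i of inv(Phi) holds the weights of the cardinal function L_i,
# since Phi @ coef[:, i] = e_i enforces L_i(x_j) = delta_ij.
coef = np.linalg.inv(Phi)

def cardinal(i, x):
    """Evaluate the cardinal function L_i at x."""
    return sum(coef[j, i] * abs(x - pts[j]) for j in range(len(pts)))
```

The columns of `coef` reproduce exactly the weights $(-0.25, 0.5, 0.25)$, $(0.5, -1, 0.5)$ and $(0.25, 0.5, -0.25)$ quoted above.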


If the number of samples, n, is large this computation becomes heavy. Is there some easy and sensible way to improve upon this?

Yes, there is. If you use the thin-plate spline in 2D, then the fast multipole method takes a quite easy form. There is also some moment-based method that I know nothing about.


With the polynomials I can't help much. When I wrote some RBF interpolation code, I first did a least-squares polynomial fit, and then did RBF interpolation on the residual to further improve the result.

So first I find the polynomial $p$ of a given degree which minimizes $\sum_i |p(x_i)-f_i|^2$. And then I find coefficients $l_i$ such that $\sum_i l_i \,\phi(x_j -x_i) = f_j-p(x_j)$ holds.


For complete information about RBF see this book.
