[Math] How to compute Fourier coefficients using a cubic spline-corrected FFT

fourier-series · numerical-methods

I'm not particularly experienced in numerical analysis, and so I recently had quite a massive shock when I discovered that sampling a smooth function and computing the FFT of the result does not return a correct$^1$ list of its Fourier coefficients, despite theory saying that it ought to!

As an example, let $f(x)=x^2$, and suppose we want to compute its Fourier coefficients $c_m=\int_0^{2\pi}f(x)e^{imx}\,dx$ on the interval $[0,2\pi]$, with $m\approx50$ or thereabouts. Since $f$ is smooth, it naively seemed reasonable to assume that sampling $f$ with $n\approx300$ data points and computing the FFT would return a fairly accurate list of the $c_m$ for $-50\leq m\leq50$.

As it turns out, this is complete garbage except for the 3 or 4 largest coefficients, and in general the FFT cannot be used to reliably compute Fourier coefficients.

To give a feel for just how mindbogglingly badly the FFT performs here: in order to obtain $c_{50}$ to just two significant digits of accuracy for the simple case of $f(x)=x^2$ on the interval $[0,2\pi]$, one has to sample $f$ with $n\geq 16000$ data points! In Mathematica:

f[x_] := x^2;
(* exact value of the coefficient integral, with m left symbolic *)
expr = Integrate[f[x] Exp[I m x], {x, 0, 2 \[Pi]}];
n = 16000;                      (* number of sample points *)
\[CapitalDelta] = (2 \[Pi])/n;  (* sample spacing *)
(* DFT of the samples; slots 2 ;; 51 correspond to m = 1, ..., 50 in this convention *)
X = Fourier[
    Table[f[x], {x, 0, 2 \[Pi] - \[CapitalDelta], \[CapitalDelta]}], 
    FourierParameters -> {1, 1}][[2 ;; 51]];
(* relative error of \[CapitalDelta] X[[m]] against the exact integral, in percent *)
ListLinePlot[
 Table[Abs[(\[CapitalDelta] X[[m]] - N@expr)/N@expr] 100, {m, 50}], 
 PlotLabel -> "Percent Error"]

[Plot: percent error of the FFT estimates of $c_m$, $m=1,\dots,50$]

The absurdity increases when we go to simple 2D functions, such as $f(x,y)=x\sin(x+y)$, graphed below on $[0,2\pi]\times[0,2\pi]$:

[Plot: $f(x,y)=x\sin(x+y)$ on $[0,2\pi]\times[0,2\pi]$]

The function shown above has to be sampled on a $160000\times160000$ grid in order to achieve 3 significant digits of accuracy for $c_{50,50}$ and similar coefficients, a computation which would require over 200 gigabytes of RAM just to hold the raw data, let alone compute its FFT! Thus using the FFT to compute Fourier coefficients of smooth multivariate functions is infeasible, unless you are interested only in the largest coefficients or you have access to a cluster-computing facility.
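(For concreteness, and assuming the samples are stored as 8-byte double-precision reals, the raw data alone is $160000^2 \times 8\ \text{bytes} \approx 2.05\times 10^{11}\ \text{bytes} \approx 205\ \text{GB}$, before any FFT working space is counted.)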

As a result, I was quite relieved when I read Section 13.9 of "Numerical Recipes in C", which describes how bad the FFT is for computing Fourier integrals, and provides a correction which uses cubic splines to hugely increase the accuracy, making the FFT a viable method of computing Fourier coefficients. However, I am having trouble deriving the formulas in the book (the derivation is not given there), and would love some help.

In the case of the cubic correction, the book uses a $\psi$ function, which is a cubic Lagrange interpolating approximation of the Kronecker delta. The $W(\theta)$ function is then given by
$$W(\theta)=\int_{-\infty}^{\infty}\psi(s)e^{i s\theta}\,ds=\frac{\left(\theta ^2+6\right) \left(-4 \cos \theta +\cos 2 \theta +3\right)}{3\theta ^4}.$$
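As a quick sanity check on the quoted formula (my own addition, not something from the book): its expansion about $\theta=0$ starts at $1$, so for small $\theta$ the factor changes essentially nothing.

Wbook[\[Theta]_] := ((\[Theta]^2 + 6) (-4 Cos[\[Theta]] + Cos[2 \[Theta]] + 3))/(3 \[Theta]^4);
Series[Wbook[\[Theta]], {\[Theta], 0, 4}]
(* 1 - (11 \[Theta]^4)/720 + O(\[Theta]^5), so W(\[Theta]) -> 1 as \[Theta] -> 0 *)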

I tried to figure out what exactly the author meant by "cubic Lagrange interpolating" by rederiving the result in Mathematica (copy/paste into Mathematica and hit Ctrl-Shift-N to convert to a more readable form):

(* cubic Lagrange polynomial through the four points (X[[j]], Y[[j]]) *)
CubicLagrangeInterpolation[X_, Y_, x_] := 
  Sum[Y[[j]] Product[If[j == k, 1, (x - X[[k]])/(X[[j]] - X[[k]])], {k, 1, 4}], {j, 1, 4}];
(* my attempt at the delta-like kernel, supported on [-1, 1] *)
\[Psi][x_] := Piecewise[{
    {0, x < -1},
    {CubicLagrangeInterpolation[{-2, -1, 0, 1}, {0, 0, 1, 0}, x], -1 <= x <= 0},
    {CubicLagrangeInterpolation[{-1, 0, 1, 2}, {0, 1, 0, 0}, x], 0 <= x <= 1},
    {0, x > 1}}];
FullSimplify[Integrate[\[Psi][s] Exp[I \[Theta] s], {s, -Infinity, Infinity}]]

which yielded $\frac{\theta ^2-2 \left(\theta ^2+3\right) \cos (\theta )-2 \theta \sin (\theta )+6}{\theta ^4}$, which when plotted appears somewhat similar to, but is not quite the same as, the $W(\theta)$ given in the book.

My question is:

  • Does anyone with experience in this sort of spline-correction
    procedure know how the author arrived at his expressions for the
    $W,\alpha_0,\alpha_1,\alpha_2$, and $\alpha_3$ functions?
    Basically, I just want to know what expressions he used for the
    $\psi$ and $\varphi$ basis functions.

Alternatively, am I simply implementing Lagrange interpolation incorrectly?

$^1$: From a relative (or $\%$) error perspective.

Best Answer

For cubic interpolation $\psi$ has four piecewise parts, not two as you have used. On the additional interval $[-2, -1]$ use $(x+3)(x+2)(x+1)/6$ and on $[1,2]$ use $-(x-1)(x-2)(x-3)/6$. So yes, a mistake in your Lagrange interpolation code.
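Putting the four pieces together (the two middle pieces being exactly the ones your CubicLagrangeInterpolation already produces), one can check symbolically, as a quick verification of my own rather than anything from the book, that this kernel reproduces the quoted $W(\theta)$:

(* four-piece cubic Lagrange kernel: outer pieces from above, middle pieces from the question *)
psi[x_] := Piecewise[{
    {(x + 3) (x + 2) (x + 1)/6, -2 <= x < -1},
    {-(x + 2) (x + 1) (x - 1)/2, -1 <= x < 0},
    {(x + 1) (x - 1) (x - 2)/2, 0 <= x < 1},
    {-(x - 1) (x - 2) (x - 3)/6, 1 <= x <= 2}}];
Wpsi = Integrate[psi[s] Exp[I s \[Theta]], {s, -2, 2}];
FullSimplify[
 Wpsi - ((\[Theta]^2 + 6) (-4 Cos[\[Theta]] + Cos[2 \[Theta]] + 3))/(3 \[Theta]^4), \[Theta] > 0]
(* 0 *)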

As noted in the Numerical Recipes reference, the correction terms $\varphi_k$ are a bit tedious. Let me restrict to the case of cubic interpolation. In this case we want to approximate some function $f$ on $[0, M]$ (where I'll take $M$ large enough to avoid pathological situations) piecewise by Lagrange polynomials. The basic idea is, on the interval $[k, k+1]$, to interpolate based on the values of $f$ at $k-1, k, k+1, k+2$. This is what the $\psi$ kernel achieves:

$$f(x) \approx \sum_{k=0}^M f(k)\ \psi(x - k).$$
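As a small numerical illustration of this (my addition; psi is the four-piece kernel defined in the snippet above), the kernel sum reproduces any cubic exactly on interior intervals:

g[x_] := x^3 - 2 x + 1;  (* an arbitrary cubic *)
Simplify[Sum[g[k] psi[x - k], {k, 0, 6}] - g[x], 3 < x < 4]
(* 0: on (3, 4) the sum is the cubic Lagrange interpolant through the samples at 2, 3, 4, 5, i.e. g itself *)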

Now this works fine except for intervals near the end points. I'll discuss only the left end point; the right one is similar, but with the kernels reflected via $x \mapsto -x$.

On the interval $[0, 1]$ we actually want to interpolate based on values at $0, 1, 2, 3$ instead of at $-1, 0, 1, 2$ as is done in the sum above (where $f$ is understood to vanish at negative arguments). In particular:

  • $f(0)\ \psi(x)$ has an incorrect contribution on the interval $[-2, 1]$
  • $f(1)\ \psi(x-1)$ has an incorrect contribution on $[-1, 1]$
  • $f(2)\ \psi(x-2)$ has an incorrect contribution on $[0, 1]$
  • $f(3)\ \psi(x-3)$ has no contribution on $[0,1]$

So we need $\varphi_0, \ldots, \varphi_3$ to correct for this as follows (where every $\varphi_k$ vanishes outside the specified intervals):

  • $\varphi_0(x)$ equals $-\psi(x)$ on $[-2, 0]$ and $-(x-1)(x-2)(x-3)/6 - \psi(x)$ on $[0, 1]$
  • $\varphi_1(x)$ equals $-\psi(x)$ on $[-2, -1]$ and $(x+1)(x-1)(x-2)/2 - \psi(x)$ on $[-1, 0]$
  • $\varphi_2(x)$ equals $-(x+1)(x+2)(x-1)/2-\psi(x)$ on $[-2, -1]$.
  • $\varphi_3(x)$ equals $(x+1)(x+2)(x+3)/6$ on $[-3, -2]$

Then the left end point corrected approximation of $f$ reads:

$$f(x) \approx \sum_{k=0}^M f(k)\ \psi(x-k) + \sum_{k=0}^3 f(k)\ \varphi_k(x-k).$$

And a similar correction follows for the right end point.
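For completeness, here is a sketch of how the correction weights can then be generated symbolically from these $\varphi_k$ (my own sketch, not the book's derivation, and again reusing psi from the first snippet). Each term contributes $f(k)\,e^{i\theta k}\int\varphi_k(s)\,e^{i\theta s}\,ds$ to the Fourier integral, so one integrates each $\varphi_k$ against $e^{i\theta s}$; depending on the book's sign and phase conventions, the results may differ from its printed $\alpha_0,\dots,\alpha_3$ by the factor $e^{i\theta k}$ or by complex conjugation.

(* correction kernels exactly as listed above; zero outside the stated intervals *)
phi[0][x_] := Piecewise[{{-psi[x], -2 <= x < 0},
    {-(x - 1) (x - 2) (x - 3)/6 - psi[x], 0 <= x <= 1}}];
phi[1][x_] := Piecewise[{{-psi[x], -2 <= x < -1},
    {(x + 1) (x - 1) (x - 2)/2 - psi[x], -1 <= x <= 0}}];
phi[2][x_] := Piecewise[{{-(x + 1) (x + 2) (x - 1)/2 - psi[x], -2 <= x <= -1}}];
phi[3][x_] := Piecewise[{{(x + 1) (x + 2) (x + 3)/6, -3 <= x <= -2}}];
(* weight attached to f(k): Exp[I k \[Theta]] times the Fourier transform of phi[k] *)
alpha = Table[FullSimplify[
    Exp[I k \[Theta]] Integrate[phi[k][s] Exp[I s \[Theta]], {s, -3, 1}], \[Theta] > 0],
   {k, 0, 3}]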
