Fast Fourier Frames (FFF), do they exist, and if so, how to calculate them

algorithms, computational-mathematics, fourier-analysis, reference-request, soft-question

Short background to the question: perhaps the most famous of all mathematical transforms is the Fourier transform, which has countless applications across all the sciences and branches of engineering. One very important discovery was the Fast Fourier Transform (FFT), which made it possible to compute Discrete Fourier Transforms (DFT) faster than ever before.

The Fourier transform is a unitary transform, which means energy is preserved. Its basis functions span the space under investigation, which gives every signal one unique representation. However, this is not always unproblematic. For example, if we want to represent a sine wave whose frequency is a non-integer multiple of the fundamental frequency of the FFT, we will not be able to represent it perfectly: its energy leaks across many coefficients. This is well known among, for example, signal and audio engineers. But what we can do is add this sine wave of odd frequency as an extra vector. What we then get is a frame, an overcomplete representation of all the functions we can transform.
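As a quick illustration of the leakage, here is a minimal numpy sketch (the length and frequencies are arbitrary choices): a sine on an integer bin is captured by a single conjugate pair of coefficients, while one halfway between bins spreads over the whole spectrum.

```python
import numpy as np

N = 64
t = np.arange(N)
exact = np.sin(2 * np.pi * 5 * t / N)      # frequency on an integer bin
between = np.sin(2 * np.pi * 5.5 * t / N)  # frequency between bins 5 and 6

for name, x in [("integer bin", exact), ("between bins", between)]:
    X = np.fft.fft(x)
    significant = np.sum(np.abs(X) > 1e-6 * np.abs(X).max())
    print(f"{name}: {significant} bins carry significant energy")
# integer bin: 2 bins; between bins: all 64 bins
```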


Now to the question: does there exist any systematic way to construct Fourier frames that can somehow utilize the great speed gains of the FFT? In other words, are Fast Fourier Frames (FFF) possible?

Best Answer

Here is an approach I have found, but please don't be discouraged from answering if you also have ideas; I often upvote anything helpful, even if it is not the answer I seek.


Viewed in the language of matrices, the FFT provides us with a factorization of the DFT matrix $\bf D$:

$$\bf D = F_N\cdots F_1$$

where each $\bf F_k$ is a sparse matrix whose rows are non-zero in only a small prime number of positions. If the number of samples (the vector space dimension) is a power of two, this prime is $2$. We will show how to utilize this factorization as part of building a Fourier frame.
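For concreteness, here is a minimal numpy sketch of the radix-2 version of this factorization (assuming $N$ is a power of two; the bit-reversal permutation $\bf R$ is kept explicit here rather than absorbed into the factors). The stage matrices are built dense purely to verify the product, but each has exactly two non-zero entries per row.

```python
import numpy as np

def butterfly(n):
    # Butterfly stage B_n = [[I, W], [I, -W]], W = diag(w^0, ..., w^(n/2 - 1))
    half = n // 2
    W = np.diag(np.exp(-2j * np.pi * np.arange(half) / n))
    I = np.eye(half)
    return np.block([[I, W], [I, -W]])

def fft_factors(N):
    # Stages F_1, ..., F_m and bit-reversal permutation R such that
    # D = F_m @ ... @ F_1 @ R  (decimation-in-time Cooley-Tukey).
    m = int(np.log2(N))
    factors = []
    n = 2
    while n <= N:
        # I_{N/n} (x) B_n: the size-n butterfly applied blockwise;
        # every row has exactly 2 non-zero entries.
        factors.append(np.kron(np.eye(N // n), butterfly(n)))
        n *= 2
    idx = [int(format(i, f"0{m}b")[::-1], 2) for i in range(N)]
    R = np.eye(N)[idx]
    return factors, R

N = 8
factors, R = fft_factors(N)
D = R.astype(complex)
for F in factors:
    D = F @ D
print(np.allclose(D, np.fft.fft(np.eye(N))))  # True: D is the DFT matrix
```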

Next, we stress the remarkable smoothness of the harmonic functions $\sin$ and $\cos$, the basis functions of the Fourier transform. What is nice about smooth functions is that even relatively primitive interpolation techniques achieve high precision. We will not prove this here, but consider linear interpolation on two samples: each interpolated value is a weighted average of the two neighbouring samples, with weights summing to $1$. Every function value can be computed this way, and it requires only $2$ non-zero values per matrix row, the same sparsity as the $\bf F_k$ matrices above!
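A quick numerical sanity check of this claim, as a sketch (the grid, frequency, and interpolation points are arbitrary choices):

```python
import numpy as np

N = 64
t = np.arange(N)
x = np.cos(2 * np.pi * 3 * t / N)        # a smooth function, coarsely sampled

# Interpolate at fractional positions s = 0, 0.5, 1.0, ..., N - 2
s = np.linspace(0, N - 2, 2 * N - 3)
lo = np.floor(s).astype(int)
w = s - lo                                # fractional part in [0, 1)

P = np.zeros((len(s), N))
rows = np.arange(len(s))
P[rows, lo] = 1 - w                       # two non-zero weights per row,
P[rows, lo + 1] = w                       # summing to 1

err = P @ x - np.cos(2 * np.pi * 3 * s / N)
print(np.max(np.abs(err)))                # ~1e-2 despite only 2 points per row
```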

So assume we have a set of matrices $\bf P_k$ performing this linear interpolation under some scaling, say $t \to \alpha_k t$, where for example $\alpha_k = 1.10$ would mean our new frame vectors are sines and cosines stretched by $10\%$ in the time dimension.

We could compute a whole set of $N-1$ different $\alpha_k$ while each branch adds only a single sparse matrix-vector product on top of the shared work, since all of them benefit from the initial factorization of $\bf D$ into the $\bf F_k$. So if we do this cleverly, for example with a filter network, we can save huge amounts of computation.
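Here is a rough sketch of that shared-branch structure under my reading of it: `interp_matrix` is a hypothetical construction of a branch matrix $\bf P_k$ that linearly interpolates the spectrum at the fractional bin positions $k/\alpha_k$ (two non-zero weights per row); it is an illustration, not a definitive implementation.

```python
import numpy as np

def interp_matrix(N, alpha):
    # Hypothetical branch matrix: new bin k reads the spectrum at position
    # k / alpha by linear interpolation (2 non-zero weights per row).
    k = np.arange(N)
    s = np.minimum(k / alpha, N - 1 - 1e-9)  # clip to the valid index range
    lo = np.floor(s).astype(int)
    w = s - lo
    P = np.zeros((N, N))
    P[k, lo] = 1 - w
    P[k, lo + 1] = w
    return P

x = np.random.default_rng(0).standard_normal(64)

# The common part D x is computed once with the FFT ...
X = np.fft.fft(x)

# ... and each additional alpha_k costs only one sparse product P_k (D x).
alphas = [1.05, 1.10, 1.20]
frame_coeffs = [interp_matrix(len(x), a) @ X for a in alphas]
```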

The computations above would be $$\begin{matrix}\bf P_1 F_N \cdots F_1\\ \vdots\\ \bf P_N F_N \cdots F_1\end{matrix}$$

where the common part $\bf D$ would be computed once and then fed to $N$ different branches of a filter network, each multiplying by a sparse linear interpolation matrix like the one below. When applied, it shrinks all frequencies of the original FFT by a factor $\alpha = 1.05 = 63/60$.

[Figure: example of a $64$-sample interpolation matrix $\bf P$]
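Under the assumptions of the sketch above, `interp_matrix(64, 1.05)` reproduces this structure: two non-zero weights per row placed along two adjacent diagonals, with the last row reading from bin $63/1.05 = 60$, matching the relation $\alpha = 1.05 = 63/60$.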
