A strong precalculus course (think AMC, JEE) will teach complex numbers, specifically $n^{\text{th}}$ roots of unity with their properties:
- They lie equally spaced on the unit circle
- A power of an $n^{\text{th}}$ root is another $n^{\text{th}}$ root
- Their sum is zero (for $n>1$)
What applications do roots of unity have in theoretical or applied math?
Best Answer
Roots of unity underlie the Discrete Fourier Transform (DFT), and their very peculiar properties are the key to the Fast Fourier Transform (FFT), one of the most important algorithms in applied math.
The connection between the Discrete Fourier Transform (DFT) and roots of unity is well described by the DFT matrix.
Let us see the idea behind the DFT for a discrete signal $x$ (i.e. a vector), with some detail for the case of $N=4$ values.
Consider the primitive $N^{\text{th}}=4^{\text{th}}$ root of unity $\omega_N=\omega_4=e^{\frac{2\pi i}4}=i$ and the following vectors
$$v_0=\begin{bmatrix}\omega_4^0\\ \omega_4^0\\ \omega_4^0\\ \omega_4^0\end{bmatrix}=\begin{bmatrix}1\\ 1\\ 1\\ 1\end{bmatrix},\; v_1=\begin{bmatrix}\omega_4^0\\ \omega_4^1\\ \omega_4^2\\ \omega_4^3\end{bmatrix}=\begin{bmatrix}1\\ i\\ -1\\ -i\end{bmatrix},\; v_2=\begin{bmatrix}\omega_4^0\\ \omega_4^2\\ \omega_4^4\\ \omega_4^6\end{bmatrix}=\begin{bmatrix}1\\ -1\\ 1\\ -1\end{bmatrix},\; v_3=\begin{bmatrix}\omega_4^0\\ \omega_4^3\\ \omega_4^6\\ \omega_4^9\end{bmatrix}=\begin{bmatrix}1\\ -i\\ -1\\ i\end{bmatrix}$$
It turns out that the vectors $v_i$ form an orthogonal basis, since
$$\bar v_i \cdot v_j=N\delta_{ij}=4\delta_{ij}$$
therefore the DFT matrix (also known as $4^{\text{th}}$ order Fourier matrix $\mathbf{F}_{4}$)
$$\mathbf{W}=\mathbf{F}_{4}=\begin{bmatrix}v_0&v_1&v_2&v_3\end{bmatrix}=\begin{bmatrix}1&1&1&1\\ 1&i&-1&-i\\ 1&-1&1&-1\\ 1&-i&-1&i\end{bmatrix}$$
represents a change of basis, giving the following transformation and its inverse
$$X= \mathbf{W} x \iff x= \mathbf{W}^{-1}X=\frac1N \overline{\mathbf W} X$$
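Both the orthogonality relation and the inversion formula can be checked numerically; the following is a quick sketch in NumPy (note that $\mathbf W$ here uses the $e^{+2\pi i/N}$ sign convention, which is opposite to NumPy's `np.fft.fft`, so $\mathbf W x$ equals $N$ times `np.fft.ifft(x)`):

```python
import numpy as np

N = 4
w = np.exp(2j * np.pi / N)  # primitive 4th root of unity, omega_4 = i

# Entry (n, k) of W is omega_4^(n*k); the columns are v_0, ..., v_3
W = np.array([[w**(n * k) for k in range(N)] for n in range(N)])

# Orthogonality: conj(v_i) . v_j = N * delta_ij, i.e. W^H W = N I
gram = W.conj().T @ W
assert np.allclose(gram, N * np.eye(N))

# Forward transform X = W x and its inverse x = (1/N) conj(W) X
rng = np.random.default_rng(0)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = W @ x
x_back = W.conj() @ X / N
assert np.allclose(x_back, x)

# With the exp(+2*pi*i/N) convention, W x equals N * np.fft.ifft(x)
assert np.allclose(X, N * np.fft.ifft(x))
```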
In this framework, the idea behind the FFT is to factorize the DFT matrix into sparse factors in order to reduce the computational effort of the matrix-vector multiplication.
As an example, in the case $N=4$ we obtain
$$\mathbf{F}_{4}= \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & i & -1 & -i \\ 1 & -1 & 1 & -1 \\ 1 & -i & -1 & i \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & i \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -i \\ \end{bmatrix} \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & -1 \\ \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}$$
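This factorization can be verified numerically (a sketch; the names `B`, `F2_blocks`, and `P` are ad hoc labels for the three factors above: a butterfly matrix, two copies of the $2\times 2$ DFT, and the even-odd permutation of the input):

```python
import numpy as np

F4 = np.array([[1,   1,  1,   1],
               [1,  1j, -1, -1j],
               [1,  -1,  1,  -1],
               [1, -1j, -1,  1j]])

B = np.array([[1, 0,  1,   0],
              [0, 1,  0,  1j],
              [1, 0, -1,   0],
              [0, 1,  0, -1j]])   # butterfly combining the two halves

F2_blocks = np.array([[1,  1, 0,  0],
                      [1, -1, 0,  0],
                      [0,  0, 1,  1],
                      [0,  0, 1, -1]])  # two copies of the 2x2 DFT

P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]])    # even-odd permutation: (x0, x2, x1, x3)

assert np.allclose(F4, B @ F2_blocks @ P)
```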
and more generally, for an order $N=2^m$, this kind of factorization reduces the computational effort from $N^2$ to $\frac N 2 \log_2 N$ complex multiplications.
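Applying the same even/odd split recursively gives the radix-2 FFT; here is a minimal sketch (again with the $e^{+2\pi i/N}$ convention of $\mathbf W$ above, so the result equals `N * np.fft.ifft(x)`):

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 FFT computing X = W x for len(x) a power of 2.

    Uses the exp(+2*pi*i/N) convention of the matrix W above,
    so the result equals N * np.fft.ifft(x).
    """
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    # Split into even- and odd-indexed samples (the permutation P)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    # Twiddle factors omega_N^k for k = 0, ..., N/2 - 1
    twiddle = np.exp(2j * np.pi * np.arange(N // 2) / N)
    # Butterfly (the matrix B): combine the two half-size transforms
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.arange(8, dtype=float)
assert np.allclose(fft_radix2(x), 8 * np.fft.ifft(x))
```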