Dirichlet conditions and the fundamental period

Tags: fourier analysis, fourier series, signal processing

It's well known that the Dirichlet conditions are:

  1. Over any period, $x(t)$ must be absolutely integrable; that is,
    $$\int_T |x(t)|\,dt < \infty$$

  2. In any finite interval of time, $x(t)$ is of bounded variation; that is, there are no more than a finite number of maxima and minima during any single period of the signal.

  3. In any finite interval of time, there are only a finite number of discontinuities. Furthermore, each of these discontinuities is finite.

Source: *Signals and Systems* by Alan V. Oppenheim
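
For concreteness, condition 1 can be checked numerically. Here is a minimal Python sketch of my own (not from the book), contrasting a sawtooth $x(t) = t$ on $[0,1)$, which satisfies the condition, with $x(t) = 1/t$ on $(0,1]$, which violates it:

```python
import numpy as np
from scipy.integrate import quad

# Condition 1: the integral of |x(t)| over one period must be finite.
# Sawtooth x(t) = t on [0, 1): absolutely integrable over its period.
# x(t) = 1/t on (0, 1]: a classic signal violating condition 1.
# We integrate from eps to 1 and shrink eps toward 0.
for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
    saw, _ = quad(lambda t: abs(t), eps, 1.0)
    hyp, _ = quad(lambda t: abs(1.0 / t), eps, 1.0)
    print(f"eps={eps:.0e}  sawtooth: {saw:.4f}  1/t: {hyp:.4f}")

# The sawtooth integral converges to 0.5 as eps -> 0, while the 1/t
# integral grows like ln(1/eps) without bound: condition 1 fails.
```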

First of all, is it assumed that a fundamental period for $x(t)$ exists? Secondly, what is the exact meaning of the second condition? How is "bounded variation in any finite interval of time" equivalent to "a finite number of maxima and minima during any single period of the signal"?

Best Answer

  1. Most periodic functions that one runs into in practice do have a fundamental period, so usually that's not a problem. However, for the purposes of these conditions, having a fundamental period is not essential. For instance, constant functions (e.g. $x(t) = 0$) are periodic but have no fundamental period (a fundamental period is positive by most definitions). Dirichlet's theorem still applies perfectly well to constant functions, though it is rather trivial in these cases (see the first sketch after this list).

  2. Bounded variation is a technical condition whose definition you can find here. Unfortunately, your book appears to be a little imprecise on this matter compared to a typical rigorous mathematical treatment: it is not true that a signal of bounded variation on an interval $[a,b]$ can have only finitely many extrema on that interval, even if the signal is continuous. An example is the signal
     $$ x(t) = \begin{cases} t^2\sin(1/t) & 0 < t \leq 1,\\ 0 & t = 0 \end{cases} $$
     extended periodically and continuously to $\mathbb{R}$. This signal has infinitely many extrema as you approach $t=0$, but is well known to be of bounded variation nonetheless.

     The idea behind bounded variation is in the name: the condition limits the amount that the signal is allowed to oscillate in any finite interval. The main way signals fail to be of bounded variation is by having infinitely many oscillations of significant magnitude in a finite timespan. The classical example of a continuous signal that fails to be of bounded variation is
     $$ x(t) = \begin{cases} t\sin(1/t) & 0 < t \leq 1,\\ 0 & t = 0 \end{cases} $$
     extended continuously and periodically. If we allow discontinuous signals, then
     $$ x(t) = \begin{cases} \sin(1/t) & 0 < t \leq 1,\\ 0 & t = 0 \end{cases} $$
     is another example, with worse oscillations. The first example, $t^2\sin(1/t)$, is of bounded variation because, although the function exhibits infinitely many oscillations near $t=0$, the $t^2$ factor damps their amplitude enough that the total variation (also defined in the link) does not become infinite.

     However, the other implication given by your textbook is true: a periodic signal with finitely many minima and maxima in any finite interval of time does have finite total variation, and is thus of bounded variation. So, in summary: finitely many extrema implies bounded variation (on any finite time interval), but bounded variation does not imply finitely many extrema (see the second sketch after this list).
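
To make point 1 concrete: pick a constant signal and compute its Fourier series coefficients with any choice of period $T$. A minimal Python sketch of my own (not from the answer above; the Riemann-sum estimator `fourier_coeff` is just an illustrative helper):

```python
import numpy as np

def fourier_coeff(x, T, k, n=100_000):
    """Estimate the k-th Fourier series coefficient
    a_k = (1/T) * integral_0^T x(t) exp(-j 2 pi k t / T) dt
    with a simple Riemann sum over one period."""
    t = np.linspace(0.0, T, n, endpoint=False)
    return np.mean(x(t) * np.exp(-2j * np.pi * k * t / T))

# A constant signal is periodic with *every* T > 0, so no
# fundamental (smallest positive) period exists.
const = lambda t: np.full_like(t, 3.0)

# Whatever "period" we pick, only the DC coefficient (k = 0) survives.
for T in [1.0, 2.5, 10.0]:
    coeffs = [fourier_coeff(const, T, k) for k in range(-2, 3)]
    print(T, np.round(coeffs, 6))
```

Every choice of $T$ gives the same trivial series, a DC term of $3$ and nothing else, which is why the missing fundamental period causes no trouble.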
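
To make point 2 concrete, total variation can be probed numerically by summing $|x(t_{i+1}) - x(t_i)|$ over increasingly fine partitions. Another minimal sketch of my own (uniform partitions only crudely probe the supremum in the definition, but the trend is visible):

```python
import numpy as np

def tv_estimate(f, a, b, n):
    """Estimate the total variation of f on [a, b] by summing
    |f(t_{i+1}) - f(t_i)| over a uniform partition with n points.
    (The true total variation is the supremum over all partitions.)"""
    t = np.linspace(a, b, n)
    return np.sum(np.abs(np.diff(f(t))))

bv  = lambda t: t**2 * np.sin(1.0 / t)  # bounded variation on (0, 1]
nbv = lambda t: t * np.sin(1.0 / t)     # NOT of bounded variation

# Start just above t = 0 (both signals are defined to be 0 there).
eps = 1e-9
for n in [10**3, 10**4, 10**5, 10**6, 10**7]:
    print(f"n={n:>8}  t^2 sin(1/t): {tv_estimate(bv, eps, 1.0, n):8.4f}"
          f"  t sin(1/t): {tv_estimate(nbv, eps, 1.0, n):8.4f}")

# The t^2 sin(1/t) estimates settle down to a finite value, while the
# t sin(1/t) estimates keep growing as n increases, reflecting the
# infinite total variation contributed by the oscillations near t = 0.
```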