Why do we care about convergence of the Laplace transform

generating-functions, laplace-transform, ordinary-differential-equations, recurrence-relations

When I took elementary differential equations, using the textbook by Boyce & DiPrima, I learned about using the Laplace transform to solve some initial value problems. I also took a course in combinatorial analysis, where I learned to use generating functions to solve some recurrences. The two methods seem somewhat analogous, and if we write $x=e^{-s}$ we see the Laplace transform
$$\mathcal L\{a(t)\}=\int_0^\infty e^{-st}a(t)dt=\int_0^\infty a(t)x^tdt$$ as the continuous analogue of the ordinary generating function
$$\sum_{n=0}^\infty a_nx^n.$$

Now, in combinatorics class, I was told that convergence of generating functions was unimportant, because we could think of them as formal power series. I would have thought that convergence of Laplace transforms was unimportant for analogous reasons. However, Boyce & DiPrima make a bit of a fuss about convergence, and the short table of Laplace transforms in their book takes the trouble to specify the domain of convergence for each transform.

My question. Why do we care about convergence of Laplace transforms? Or don't we? Or is it that convergence is unimportant for the very elementary applications I learned about, but becomes important in more advanced work?

In other words: Is there some reason we can't use "formal Laplace transforms" to solve differential equations the same way we can use formal power series to solve recurrences?

Best Answer

This is an excellent question. It is one of those subtleties that gets harped on at first and then promptly forgotten, because it almost never causes trouble. Which leads right back to your question: why do we spend time on it at all?

For now, I will consider the two-sided Laplace transform,

$$\mathcal{L}[x](s) = \int_{-\infty}^{\infty} x(\tau)\,e^{-\tau\,s}\,d\tau,$$

and then move towards the one-sided transform. Suppose we are considering a signal $x:\mathbb{R} \to \mathbb{R}$ that is piecewise-continuous and obeys the very simple ODE,

$$ \dot{x}(t) = -x(t). $$

The signal is assumed piecewise-continuous to permit functions like step functions and other one-sided functions. This assumption is also forced on us by the two-sided transform, since a two-sided solution like $e^{-t}$ does not have a well-defined two-sided Laplace transform: convergence of the integral at $-\infty$ requires $\mathfrak{Re}\{s\} < -1$ while convergence at $+\infty$ requires $\mathfrak{Re}\{s\} > -1$, so the signal must be truncated somewhere for the integral to converge. With this caveat, we must also accept that the differential equation is only satisfied almost everywhere, since the derivative is not defined at the discontinuous jumps. With that, now consider the initial value problem (IVP),

$$\begin{aligned} \dot{x}(t) &= -x(t),\\ x(0) &= 1. \end{aligned}$$

To solve this, you might first take the Laplace transform of the DE and find that, once you impose the initial condition, you arrive at,

$$ X(s) = \frac{1}{s + 1}. $$
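Spelled out, the step here is presumably the familiar derivative rule of the one-sided calculus, $\mathcal{L}[\dot{x}](s) = s\,X(s) - x(0)$, which together with $x(0)=1$ gives

$$ s\,X(s) - 1 = -X(s) \quad\Longrightarrow\quad X(s) = \frac{1}{s+1}. $$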

So what is $x(t)$? Well, the problem is that there are at least two distinct possibilities,

$$ x_1(t) = \left\{ \begin{array}{ll} -e^{-t} & t < 0 \\ 1 & t = 0 \\ 0 & t > 0 \end{array} \right.\qquad x_2(t) = \left\{ \begin{array}{ll} 0 & t < 0 \\ 1 & t = 0 \\ e^{-t} & t > 0 \end{array} \right. $$

Observe that both of these signals are piecewise continuous and solve the initial value problem almost everywhere: for $t<0$ we have $\dot{x}_1(t) = e^{-t} = -x_1(t)$, for $t>0$ we have $\dot{x}_2(t) = -e^{-t} = -x_2(t)$, and on the remaining half-line each signal is identically $0$, so both sides of the DE vanish. Notice also that the two signals disagree almost everywhere. Direct computation verifies that their two-sided Laplace transforms are exactly the same:

$$ X_1(s) = \frac{1}{s + 1},\qquad X_2(s) = \frac{1}{s + 1}. $$

The catch? The two-sided integrals that produced these transforms have different regions of convergence (ROCs). The ROC of $X_1$ is the left half-plane $\mathfrak{Re}\{s\} < -1$, while the ROC of $X_2$ is the right half-plane $\mathfrak{Re}\{s\} > -1$!
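To see where those ROCs come from, here is the direct computation written out. For $x_1$,

$$ \int_{-\infty}^{0} \left(-e^{-\tau}\right) e^{-\tau\,s}\,d\tau = -\int_{-\infty}^{0} e^{-(s+1)\tau}\,d\tau = \left[\frac{e^{-(s+1)\tau}}{s+1}\right]_{-\infty}^{0} = \frac{1}{s+1}, $$

which requires $\mathfrak{Re}\{s\} < -1$ for the contribution at the lower limit to vanish, while for $x_2$,

$$ \int_{0}^{\infty} e^{-\tau}\, e^{-\tau\,s}\,d\tau = \left[\frac{e^{-(s+1)\tau}}{-(s+1)}\right]_{0}^{\infty} = \frac{1}{s+1}, $$

which requires $\mathfrak{Re}\{s\} > -1$ for the contribution at the upper limit to vanish.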

Interestingly, if we were now to restrict attention to those signals whose ROC contains some right half-plane of sufficiently large real part, i.e. contains a domain like,

$$ D_M = \left\{ s \in \mathbb{C} \;\colon\; \mathfrak{Re}\{s\} > M \right\} $$

for some large $M>0$, then the non-uniqueness disappears and the only viable solution is $x_2(t)$: in this example, any $M > 0$ works, since $D_M$ lies inside the ROC of $X_2$ but misses the ROC of $X_1$ entirely. Deciding a priori on an ROC that captures all the transforms we care about ensures we can determine the inverse of a Laplace transform, and that it lands in precisely the class of solutions we care about. Note that we have to know the ROC a priori. Or do we...

In ODE problems we often restrict our attention to the one-sided Laplace transform. This is because we are interested in solutions to the IVP forward in time, not backward. That is, we are interested in one-sided signals: we don't want solutions like $x_1$, we want solutions like $x_2$. When we restrict ourselves to the one-sided Laplace transform, the ROC, if it exists at all, is always some right half-plane. More precisely, one can show that if the integral,

$$\int_0^\infty x(t)\,e^{-s\,t} \, dt$$

converges (absolutely) for some $s=s'$, then it must converge for all $s$ with $\mathfrak{Re}\{s\} > \mathfrak{Re}\{s'\}.$ This tells us that if we pick an ROC like $D_M$ for sufficiently large $M$, then we get a Laplace transform operator with a well-defined inverse on the space of one-sided signals we care about. As long as we choose $M$ large enough, we won't run into any problems, since all of our signals will have an ROC containing $D_M.$
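The half-plane claim follows from a simple comparison: for $t \geq 0$ and $\mathfrak{Re}\{s\} \geq \mathfrak{Re}\{s'\}$,

$$ \left| x(t)\,e^{-s\,t} \right| = \left| x(t)\,e^{-s'\,t} \right| e^{-\left(\mathfrak{Re}\{s\} - \mathfrak{Re}\{s'\}\right) t} \leq \left| x(t)\,e^{-s'\,t} \right|, $$

so absolute convergence at $s'$ dominates the integrand everywhere to the right of $s'.$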

Since an ROC like $D_M$ is contained in the ROC of the standard functions found in look-up tables, the transforms and their inverses will agree. This is why finding $D_M$ is a step that is so often skipped. However, if you had to invert the Laplace transform manually via its inversion formula, you would have to determine the ROC ($D_M$) and integrate along a vertical line inside it. The ROC can safely be ignored in many problems precisely because we are dealing with a class of solutions having that right half-plane ROC, and we are using a look-up table that aligns with it.
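For reference, the inversion formula in question is the Bromwich integral,

$$ x(t) = \frac{1}{2\pi i} \int_{\sigma - i\infty}^{\sigma + i\infty} X(s)\,e^{s\,t}\,ds, $$

where the vertical line $\mathfrak{Re}\{s\} = \sigma$ must lie inside the ROC; change the ROC and you change the contour, and possibly the recovered signal, as the $x_1$ versus $x_2$ example shows.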

But we must still be cognizant of it! There are problems (e.g. linear PDEs, two-sided signals) where the ROC matters, because we cannot assume our signals are one-sided and cannot work solely with the one-sided Laplace transform. If, for instance, you cared about negative time instead of positive time, you would have to be careful using the standard look-up tables for the inverse transform.

This is why transform tables should come with the ROC indicated: it ensures you are working with the correct class of signals. You also need to keep track of ROCs when doing algebra with transforms, since mixing two transforms with differing ROCs can produce undefined results. As a simple illustration, consider the usual one-sided Laplace inverse of $X_1(s)\,X_2(s)$ and compare it with the two-sided convolution $x_1\star x_2$ of the full signals. They will not agree.
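To spell out that comparison: a standard table gives the usual one-sided inverse

$$ \mathcal{L}^{-1}\!\left[\frac{1}{(s+1)^2}\right](t) = t\,e^{-t}, \qquad t \geq 0, $$

while the two-sided convolution does not even converge. For any fixed $t$,

$$ (x_1 \star x_2)(t) = \int_{-\infty}^{\infty} x_1(\tau)\,x_2(t-\tau)\,d\tau = \int_{-\infty}^{\min(0,\,t)} \left(-e^{-\tau}\right) e^{-(t-\tau)}\,d\tau = -e^{-t} \int_{-\infty}^{\min(0,\,t)} d\tau = -\infty. $$

This reflects the fact that the ROCs of $X_1$ and $X_2$ do not intersect, so $X_1(s)\,X_2(s)$ has an empty ROC as a two-sided transform.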