I was watching a video and the lecturer discusses the function
$$\frac{1}{1+x^2} = \sum_{n=0}^{\infty} {(-1)^n x^{2n}}, \qquad |x| < 1,$$
explaining that the radius of convergence for this Taylor series centered at $x=0$ is 1 because it is being affected by $i$ and $-i$. Then, he goes on to talk about how real analysis is a glimpse into complex analysis.
In the same video, the lecturer also provides the following example where a complex function is defined by using real Taylor series:
\begin{align}
e^{ix}
&= \sum_{n=0}^\infty \frac{(ix)^n}{n!} \\
&= \sum_{n=0}^\infty (-1)^n \frac{x^{2n}}{(2n)!} + i \sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{(2n+1)!} \\
&= \cos(x) + i \sin(x)
\end{align}
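If you want to see this identity numerically, here is a quick sketch in Python (my own illustration, not part of the lecture), truncating the exponential series and comparing it with $\cos x + i\sin x$:

```python
import cmath
import math

def exp_series(z, terms=40):
    """Partial sum of the Taylor series sum_{n} z^n / n!."""
    total, term = 0, 1
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # z^{n+1}/(n+1)! from z^n/n!
    return total

x = 0.7
lhs = exp_series(1j * x)                 # series for e^{ix}
rhs = complex(math.cos(x), math.sin(x))  # cos x + i sin x
print(abs(lhs - rhs))                    # tiny
```

Forty terms already agree with $\cos x + i \sin x$ to machine precision for moderate $x$, since the factorial in the denominator eventually crushes any power of $x$.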
Can someone help elaborate by what the lecturer probably meant? What is the connection between real analysis and complex analysis?
I understand that there are two different types of analyticity: real analytic and complex analytic. Are they connected?
Best Answer
This is an interesting question, but one that might be hard to address completely. Let me see what I can do to help.
Real Power Series:
The easiest way to address the connection between the two subjects is through the study of power series, as has already been alluded to. A power series is a particular kind of infinite sum (there are many different kinds of these) of the form $$ f(x) = \sum_{k=0}^{\infty}a_k x^k. $$ They get their name from the fact that we are adding together powers of $x$ with different coefficients. In real analysis the argument of such a function (the "$x$" in $f(x)$) is taken to be a real number, and depending on the coefficients multiplying the powers of $x$, we get different intervals of convergence (intervals on which the sequence of partial sums converges). For example, if $a_k$ is the $k$th Fibonacci number, then the radius of convergence turns out to be $1/\phi$, where $\phi = (1+\sqrt{5})/2$ is the "golden ratio".
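That Fibonacci claim is easy to check with the ratio test (a quick numerical illustration in Python, not part of the original discussion): the ratio of consecutive coefficients tends to $\phi$, so the radius of convergence is $1/\phi$.

```python
import math

# Ratio test on the power series sum F_k x^k with Fibonacci coefficients:
# F_{k+1}/F_k -> phi, so the radius of convergence is 1/phi.
a, b = 1, 1  # consecutive Fibonacci numbers
for _ in range(50):
    a, b = b, a + b
ratio = b / a
phi = (1 + math.sqrt(5)) / 2
print(abs(ratio - phi))  # essentially 0
print(1 / phi)           # radius of convergence, about 0.618
```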
The most common kind of power series that come up in calculus (and real analysis) are Taylor series or Maclaurin series. These series are created to represent (a portion of) some differentiable function. Let me try to make this a little more concrete. Pick some function $f(x)$ that is infinitely differentiable at a fixed value, say at $x=a$. The Taylor series corresponding to $f(x)$, centered at $a$, is given by $$ \sum_{k=0}^{\infty}\frac{f^{(k)}(a)}{k!}(x-a)^k. $$ A Maclaurin series is just a Taylor series where $a=0$. After playing around with the series a bit, you may notice a few things about it. For instance, take the running example $f(x) = 1/(1+x^2)$ and compute its Taylor series centered at $a=1$ instead of $a=0$.
You will then find that this new power series converges on a slightly larger radius than 1, namely $\sqrt{2}$, and that the two power series (one centered at 0, the other centered at 1) overlap and agree for certain values of $x$.
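Here is a numerical sketch of that recentering (my own illustration; the coefficient formula comes from the partial-fraction decomposition $f(x) = \frac{1}{2i}\left(\frac{1}{x-i} - \frac{1}{x+i}\right)$ and two geometric series in $u = x-1$, which is an assumption of this sketch rather than something spelled out above):

```python
import cmath

# Taylor coefficients of f(x) = 1/(1+x^2) centered at a = 1, via partial
# fractions: c_k = (-1)^k/(2i) * (1/(a-i)^(k+1) - 1/(a+i)^(k+1)).
def coeff(k, a=1.0):
    p, q = a - 1j, a + 1j   # center-to-singularity displacements
    return ((-1) ** k / (2j) * (1 / p ** (k + 1) - 1 / q ** (k + 1))).real

# Root test: |c_k|^(1/k) should approach 1/sqrt(2), i.e. radius sqrt(2).
k = 400
print(abs(coeff(k)) ** (1 / k))   # close to 1/sqrt(2), about 0.707

# The recentered series converges at x = 2.2 (|x - 1| = 1.2 < sqrt(2)),
# even though |x| > 1 puts x outside the Maclaurin series' interval.
x = 2.2
s = sum(coeff(k) * (x - 1) ** k for k in range(400))
print(abs(s - 1 / (1 + x ** 2)))  # tiny
```

The second check makes the "overlap and agree" point concrete: the recentered series reaches values of $x$ the original series cannot.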
The big takeaway that you should have about power series and Taylor series is that they are one and the same. Define a function by a power series, then take its Taylor series centered at the same point; you will get the same series back. Conversely, any function that is represented by its Taylor series on some interval (an analytic function) is uniquely determined there by that series.
This is where complex analysis begins to come into play...
Complex Power Series:
Complex numbers have many similar properties to real numbers. They form an algebra (you can do arithmetic with them); the only number you still cannot divide by is zero; and the absolute value of a complex number still measures its distance from $0$. In particular there is nothing stopping you from defining power series of a complex variable $z = x + iy$: $$ f(z) = \sum_{k=0}^{\infty}c_k z^k. $$ The only differences now are that the coefficients $c_k$ can be complex numbers, and the radius of convergence refers to the radius of a disk in the plane (as opposed to the radius of an interval). Things may seem exactly like the real-valued situation, but there is more lurking beneath the surface.
For starters, let me define some new vocabulary terms. Call a complex function "holomorphic" on an open disk if it is complex differentiable at every point of that disk, and "analytic" at a point if it agrees with a convergent power series on a neighborhood of that point.
Perhaps the biggest surprise in complex analysis is that the following conditions on a function $f(z)$, defined on an open disk, are all equivalent:
1. $f$ is complex differentiable at every point of the disk (holomorphic);
2. $f$ is infinitely differentiable on the disk;
3. $f$ is analytic: around each point of the disk it is represented by its own Taylor series.
This means that being differentiable in the complex sense is a much harder thing to accomplish than in the real sense. Consider the contrast with the "real-valued" equivalents of the points made above: the function $f(x) = x|x|$ is differentiable everywhere, yet its derivative $f'(x) = 2|x|$ fails to be differentiable at $x=0$; and the function $f(x) = e^{-1/x^2}$ (with $f(0)=0$) is infinitely differentiable, yet every one of its Maclaurin coefficients is $0$, so it is not represented by its Taylor series on any neighborhood of $0$.
These kinds of pathologies do not occur in the complex world: one derivative is as good as an infinite number of derivatives; differentiability at one point translates to differentiability on a neighborhood of that point.
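The function $e^{-1/x^2}$, which reappears later in this answer, is worth a quick numerical look (an illustrative aside of mine, not part of the answer): near $0$ it is flatter than any power of $x$, which is the heuristic reason all of its Maclaurin coefficients vanish.

```python
import math

# f(x) = exp(-1/x^2) (with f(0) = 0) is smooth on the real line, but near 0
# it is smaller than any power of x -- so every Maclaurin coefficient is 0.
x = 0.1
print(math.exp(-1 / x**2))  # about 3.7e-44
print(x**20)                # 1e-20: even x^20 dwarfs f(x) here
```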
Laurent Series:
A natural question that one might ask is what dictates the radius of convergence for a power series? In the real-valued case things seemed to be fairly unpredictable. However, in the complex-valued case things are much more elegant.
Let $f(z)$ be differentiable on a neighborhood of some point $z=w$. Then the radius of convergence for the Taylor series of $f(z)$ centered at $w$ will be the distance to the nearest complex number at which $f(z)$ fails to be differentiable. Think of it like dropping a pebble into a pool of water. The ripples will extend radially outward from the initial point of differentiability, all the way until the circular edge of the ripple hits the first "singularity" -- a point where $f(z)$ fails to be differentiable.
Take the complex version of our previous example, $$ f(z) = \frac{1}{1+z^2}. $$
This is a rational function, and will be smooth for all values of $z$ where the denominator is non-zero. Since the only roots of $z^2 + 1$ are $z = i$ and $z = -i$, then $f(z)$ is differentiable/smooth/analytic at all values $w\neq \pm i$. This is precisely why the radius of convergence for the real-valued Maclaurin series is 1, as you've already noted: the shortest distance from $z=0$ to $z=\pm i$ is 1. The real-valued Maclaurin series is just a "snapshot" or "sliver" of the complex-valued Taylor series centered at $z=0$. This is also why the radius of convergence for the real-valued Taylor series increases when you move away from zero; the distance to $\pm i$ becomes greater, and so the complex-valued Taylor series can converge on a larger disk.
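You can watch the invisible singularities at $\pm i$ act on the purely real series (a quick sketch of my own): the Maclaurin partial sums behave perfectly inside $|x|<1$ and blow up just outside, even though $1/(1+x^2)$ itself is perfectly smooth there.

```python
# Partial sums of the Maclaurin series sum (-1)^n x^(2n) for 1/(1+x^2).
def partial_sum(x, terms):
    return sum((-1) ** n * x ** (2 * n) for n in range(terms))

f = lambda x: 1 / (1 + x * x)
print(abs(partial_sum(0.5, 60) - f(0.5)))  # tiny: inside the radius
print(abs(partial_sum(1.5, 60) - f(1.5)))  # huge: the series diverges here
```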
So now should come the question: when exactly does a complex function fail to be differentiable?
Without going into too many details from complex analysis, suffice it to say that complex functions fail to be differentiable when one of three things occurs:
1. There is a singularity: a point where division by zero occurs, as at the roots of the denominator of a rational function.
2. The function fails the Cauchy-Riemann equations, which couple the real and imaginary parts of any complex-differentiable function.
3. The function is a branch of a multi-valued expression, such as $\log z$ or $\sqrt{z}$, which forces a "branch cut".
Number 2. is perhaps the most egregious of the three issues, and it means that functions like $f(z) = |z|$ are actually differentiable nowhere. This is in stark contrast to the real-valued version $f(x) = |x|$, which is differentiable everywhere except at $x=0$ where there is a "corner".
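One way to see the failure for $f(z) = |z|$ (an illustration of mine): the difference quotient depends on the direction from which $h$ approaches $0$, so no single complex derivative exists, even at points where the real version is perfectly smooth.

```python
# Difference quotients of f(z) = |z| at z = 1 along two directions.
z, eps = 1.0, 1e-6
along_real = (abs(z + eps) - abs(z)) / eps               # approximately 1
along_imag = (abs(z + 1j * eps) - abs(z)) / (1j * eps)   # approximately 0
print(along_real, along_imag)  # two different "derivatives": none exists
```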
Number 3. is actually not too bad, and it turns out that these kinds of functions are usually differentiable everywhere except along certain rays or line segments. To get into it further, however, will take us too far off course.
Number 1. is the best case scenario, and is the focus of this section of our discussion. Essentially, singularities are places where division by zero has occurred, and the extent to which something has "gone wrong" can be quantified. Let me try to elaborate.
Consider the previous example of $f(z) = (1+z^2)^{-1}$. Again, since the denominator factors into the product $(z+i)(z-i)$, we can "erase" the singularity at $z=i$ by multiplying the function by a copy of $z-i$. In other words, if $$ g(z) = (z-i)f(z) = \frac{z-i}{1+z^2}, $$ then $g(z) = \frac{1}{z+i}$ for all $z\neq i$, and $\lim_{z\to i}g(z) = \frac{1}{2i} = -\frac{i}{2}$ exists; $z=i$ is no longer a singularity.
Similarly, if $f(z) = 1/(1+z^2)^3$, then we again have singularities at $z=\pm i$. This time, however, multiplying $f(z)$ by only one copy of $z-i$ will not remove the singularity at $z=i$. Instead, we would need to multiply by three copies to get $$ g(z) = (z-i)^3 f(z) = \frac{(z-i)^3}{(1+z^2)^3}, $$ which means that $g(z) = \frac{1}{(z+i)^3}$ for all $z\neq i$, and that $\lim_{z\to i}g(z) = \frac{1}{(2i)^3} = \frac{i}{8}$ exists.
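Both limits are easy to verify numerically (a small sketch of my own; recall $(z-i)f(z) = 1/(z+i) \to 1/(2i)$ and $(z-i)^3/(1+z^2)^3 = 1/(z+i)^3 \to 1/(2i)^3$ as $z\to i$):

```python
# Approach z = i along a small real offset and evaluate both quotients.
z = 1j + 1e-5
g1 = (z - 1j) / (1 + z * z)            # simple pole case
g3 = (z - 1j) ** 3 / (1 + z * z) ** 3  # order-3 pole case
print(abs(g1 - 1 / (2j)))              # tiny: limit is 1/(2i) = -i/2
print(abs(g3 - 1 / (2j) ** 3))         # tiny: limit is 1/(2i)^3 = i/8
```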
Singularities like this --- ones that can be "removed" through multiplication by a finite number of linear terms --- are called poles. The order of the pole is the minimum number of linear terms needed to remove the singularity.
The real-valued Maclaurin series for $\sin x$ is given by $$ \sin x = \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2k+1)!}x^{2k+1}, $$ and has an infinite radius of convergence. This means that the complex version $$ \sin z = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}z^{2k+1} = z - \frac{1}{3!}z^3 + \frac{1}{5!}z^5 - \cdots $$ also has an infinite radius of convergence (such functions are called entire) and hence no singularities. From here it's easy to see that the function $(\sin z)/z$ is analytic as well (after assigning it the value $1$ at $z=0$), with Taylor series $$ \frac{\sin z}{z} = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}z^{2k} = 1 - \frac{1}{3!}z^2 + \frac{1}{5!}z^4 - \cdots $$
However, a function like $(\sin z)/z^3$ is not analytic at $z=0$, since dividing $\sin z$ by $z^3$ would give us the following expression: $$ \frac{\sin z}{z^3} = \frac{1}{z^2} - \frac{1}{3!} + \frac{1}{5!}z^2 - \cdots $$
But notice that if we were to subtract the term $1/z^2$ from both sides we would be left again with a proper Taylor series $$ \frac{\sin z}{z^3} - \frac{1}{z^2} = \frac{\sin z - z}{z^3} = -\frac{1}{3!} + \frac{1}{5!}z^2 - \frac{1}{7!}z^4 + \cdots $$
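A quick numerical check of this principal-part subtraction (my own sketch): once $1/z^2$ is removed, what remains is bounded near $0$ and tends to $-1/3! = -1/6$.

```python
import cmath

# sin(z)/z^3 - 1/z^2 should approach -1/6 as z -> 0
# (its Taylor series is -1/3! + z^2/5! - ...).
z = 1e-3 + 0j
val = cmath.sin(z) / z**3 - 1 / z**2
print(abs(val + 1 / 6))  # tiny: the remaining error is about z^2/5!
```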
This idea of extending Taylor series to include terms with negative powers of $z$ is what is referred to as a Laurent series. A Laurent series is a power series in which the powers of $z$ are allowed to take on negative values as well as nonnegative ones: $$ f(z) := \sum_{k = -\infty}^{\infty} c_k z^k $$
In this way we can expand complex functions around singular points in a fashion similar to expanding around analytic points.
A pole, it turns out, is a singular point for which there are a finite number of terms with negative powers, such as with $(\sin z)/z^3$. If, however, an infinite number of negative powers are needed to fully express a Laurent series, then this type of singular point is called an essential singularity. An excellent example of such a function can be made by taking an analytic function (one with a Taylor series) and replacing $z$ by $1/z$: \begin{align} \sin(1/z) &= \sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}(1/z)^{2k+1} \\ &= \sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}z^{-(2k+1)} \\ &= \frac{1}{z} - \frac{1}{3!}\frac{1}{z^3} + \frac{1}{5!}\frac{1}{z^5} - \cdots \end{align}
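The erratic behavior near an essential singularity is easy to exhibit numerically (an illustration of mine, in the spirit of the Casorati-Weierstrass theorem): along different sequences tending to $0$, $\sin(1/z)$ approaches entirely different values.

```python
import math

# Two sequences z -> 0 along which sin(1/z) has different limits.
n = 10**6
z1 = 1 / (math.pi * n)                    # 1/z1 = pi*n, so sin(1/z1) ~ 0
z2 = 1 / (math.pi / 2 + 2 * math.pi * n)  # 1/z2 = pi/2 + 2*pi*n, so sin ~ 1
print(math.sin(1 / z1), math.sin(1 / z2))  # near 0 and near 1, respectively
```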
These kinds of singularities are quite severe and the behavior of complex functions around such a point is rather erratic. This also explains why the real-valued function $e^{-1/x^2}$ was so pathological. The Taylor series for $e^z$ is given by $$ e^z = \sum_{k=0}^{\infty}\frac{1}{k!}z^k $$ and so \begin{align} e^{-1/z^2} &= \sum_{k=0}^{\infty}\frac{1}{k!}(-1/z^2)^k \\ &= \sum_{k=0}^{\infty}\frac{1}{k!}(-1)^k z^{-2k} \\ &= \sum_{k=-\infty}^{0}\frac{1}{(-k)!}(-1)^{-k} z^{2k}. \end{align}
Hence there is an essential singularity at $z=0$, and so even though the real-valued version is smooth at $x=0$, there is no hope of differentiability in a disk around $z=0$.
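You can see this hopelessness directly (a final numerical sketch of my own): along the real axis $e^{-1/z^2}$ tends to $0$, but along the imaginary axis $-1/z^2 = +1/y^2$, so the very same function blows up.

```python
import cmath

# e^{-1/z^2} near z = 0 along the real and imaginary axes.
x = 0.05
print(abs(cmath.exp(-1 / (x + 0j) ** 2)))  # e^{-400}: essentially 0
print(abs(cmath.exp(-1 / (x * 1j) ** 2)))  # e^{+400}: astronomically large
```

No single limit exists at $z=0$, which is exactly what the essential singularity in the Laurent series predicts.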