I am currently trying to solve a quantum mechanics problem in which I need to prove that $e^A e^{-A} = 1$, where $A$ is an operator and the exponential is defined by its Taylor series. However, I am having trouble proving this using Taylor series, because I don't think it is valid to simply add the exponents and conclude the product is $1$. I started by trying to prove $e^x e^{-x} = 1$ using the Taylor expansion rather than exponent properties, but I am stuck at $\sum_{n=0}^{\infty} \sum_{m=0}^{\infty}\frac{x^{m+n}}{m!n!} (-1)^m$. How do I proceed from here?
Proof using Taylor series
abstract-algebra, linear algebra, quantum mechanics, taylor expansion
Related Solutions
Let $f(x)=\arcsin(1-x)$ for $x\in[0,2]$.
Since $f'(x)=O\left( x^{-1/2}\right)$ as $x\to 0$, we substitute $t=x^{1/2}$ and let $g(t)=\arcsin(1-t^2)$.
We will now develop the first few terms of the Taylor series for $g(t)$ around $t=0$.
We have for the first derivative $g^{(1)}(t)$
$$\begin{align} g^{(1)}(t)&=-\frac{2t}{\sqrt{1-(1-t^2)^2}}\\\\ &=-\frac{2}{\sqrt{2-t^2}}\tag 1 \end{align}$$
Differentiating the right-hand side of $(1)$, we obtain the second derivative, $g^{(2)}(t)$
$$\begin{align} g^{(2)}(t)&=-\frac{2t}{(2-t^2)^{3/2}}\tag 2 \end{align}$$
Continuing, we have for $g^{(3)}(t)$
$$\begin{align} g^{(3)}(t)&=-\frac{4(t^2+1)}{(2-t^2)^{5/2}}\tag 3 \end{align}$$
And finally, we have for $g^{(4)}(t)$
$$\begin{align} g^{(4)}(t)&=-\frac{12t(t^2+3)}{(2-t^2)^{7/2}}\tag 4 \end{align}$$
We evaluate $(1)-(4)$ at $t=0$, note that $g(0)=\arcsin(1)=\frac{\pi}{2}$, and form the expansion
$$\bbox[5px,border:2px solid #C0A000]{\arcsin(1-x)=\frac{\pi}{2}-\sqrt{2}x^{1/2}-\frac{\sqrt{2}}{12}x^{3/2}+O\left(x^{5/2}\right)}$$
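As a quick sanity check (not part of the derivation), one can compare $\arcsin(1-x)$ numerically against the truncated expansion; the error should shrink roughly like $x^{5/2}$. A minimal Python sketch:

```python
import math

# Compare arcsin(1 - x) with the truncated expansion
#   pi/2 - sqrt(2) x^(1/2) - (sqrt(2)/12) x^(3/2).
# The discrepancy should decay roughly like x^(5/2).
for x in (1e-1, 1e-2, 1e-3):
    exact = math.asin(1 - x)
    approx = math.pi / 2 - math.sqrt(2) * math.sqrt(x) - (math.sqrt(2) / 12) * x ** 1.5
    print(f"x = {x:.0e}   error = {exact - approx:.3e}")
```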
Indeed, neither the exponential series nor the limit definition of the exponential $e^{tA}=\lim_{n\to\infty}(1+\frac{t}nA)^n$ makes any sense for unbounded operators. However, if $A$ is the generator of a strongly continuous one-parameter semigroup (e.g., a self-adjoint Hilbert space operator) the "exponential" can be evaluated via the Post-Widder Inversion Formula: $$ e^{tA}=\underset{n\to\infty}{\operatorname{s-lim}}\Big({\bf 1}-\frac{t}nA\Big)^{-n}=\underset{n\to\infty}{\operatorname{s-lim}}\Big(\Big({\bf 1}-\frac{t}nA\Big)^{-1}\Big)^n $$ Here $({\bf 1}-\frac{t}nA)^{-1}$ is defined via the resolvent formalism, and $\operatorname{s-lim}$ is the limit in the strong operator topology, cf., e.g., Corollary 5.5 in Chapter III of Engel & Nagel's One-Parameter Semigroups for Linear Evolution Equations (2000).
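For a bounded operator, say a finite symmetric matrix where the formula reduces to the ordinary matrix exponential, the resolvent-based limit is easy to check numerically. A minimal sketch, with an arbitrarily chosen $2\times 2$ symmetric matrix standing in for $A$:

```python
import numpy as np

# Check e^{tA} = lim_n (1 - (t/n) A)^{-n} for a small symmetric matrix,
# where the limit coincides with the ordinary matrix exponential.
A = np.array([[0.0, 1.0],
              [1.0, -0.5]])    # arbitrary symmetric matrix, for illustration only
t = 1.3

w, V = np.linalg.eigh(A)                      # exact e^{tA} via the spectral theorem
exact = V @ np.diag(np.exp(t * w)) @ V.T

I = np.eye(2)
for n in (10, 100, 1000, 10000):
    approx = np.linalg.matrix_power(np.linalg.inv(I - (t / n) * A), n)
    print(n, np.linalg.norm(approx - exact))  # error decays roughly like 1/n
```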
It turns out that the resolvent is the key to characterizing perturbations of (unbounded) generators. From here on out I'll be a bit sloppy with notation and assumptions, in an attempt to get the concept across a bit better.
Theorem. Given a sequence $\{(A_n(t))_{t\geq 0}:n\in\mathbb N\}$ as well as $(A(t))_{t\geq 0}$ of generators of quasi-bounded semigroups ($\|(A+z)^{-k}\|\leq M(z-\beta)^{-k}$), if $\operatorname{s-lim}_{n\to\infty}(A_n-z{\bf 1})^{-1}=(A-z{\bf 1})^{-1}$ for some $z$ with $\operatorname{Re}z>\beta$, then $$ \underset{n\to\infty}{\operatorname{s-lim}}\;e^{-tA_n}=e^{-tA} $$
This is Theorem 2.16 in Chapter 9 of Kato's Perturbation Theory for Linear Operators (1980). As an application (Example 2.22 in Ch.9) Kato shows that given $H$ self-adjoint and $V$ symmetric and $H$-bounded, for all $\varepsilon$ from a sufficiently small neighbourhood of zero one obtains a Taylor(-esque) expansion $$ e^{it(H+\varepsilon V)} u = e^{itH} u + \varepsilon u^{(1)} (t) + \varepsilon^2 u^{(2)}(t) + o(\varepsilon^2) $$ where $u^{(1)}(t)=-\int_0^te^{i(t-s)(H+\varepsilon V)}iVe^{is(H+\varepsilon V)}u\,ds$ (and $u^{(2)}$ similarly).
Best Answer
Actually the holomorphic functional calculus guarantees that there's no trouble with just adding the exponents, at least as long as $A$ is a bounded operator; more generally if $f$ is any holomorphic function we can make sense of $f(A)$ for $A$ a bounded operator (by applying the power series), and we have $f(A) g(A) = h(A)$ where $f(z) g(z) = h(z)$ as holomorphic functions.
Without using this we can argue as follows, again assuming $A$ is bounded. Consider
$$f(t) = e^{tA} e^{-tA}$$
where $t \in \mathbb{R}$ is a real parameter. It's not hard to show that functions from $\mathbb{R}$ to a Banach algebra satisfy all the usual calculus rules, such as the product rule (being careful about noncommutativity). It's also not hard to show from the power series definition that $\frac{d}{dt} e^{tA} = A e^{tA}$ (and this property, together with the initial condition $e^0 = 1$, uniquely characterizes $e^{tA}$), so
$$\frac{df}{dt} = A e^{tA} e^{-tA} + e^{tA} (-A) e^{-tA} = 0$$
The two terms cancel because $A$ commutes with $e^{tA}$ (clear from the power series), so $f$ is constant, and $f(0) = 1$ gives $f(t) = 1$ identically. This argument can be generalized to show that $e^A e^B = e^{A+B}$ whenever $A$ and $B$ commute, by considering the derivative of $e^{tA} e^{tB} e^{-t(A+B)}$.
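If it helps to see the bounded case concretely, here is a small numerical sketch (matrices standing in for bounded operators, chosen arbitrarily): $e^A e^{-A}$ comes out as the identity, while $e^A e^B$ and $e^{A+B}$ differ once $A$ and $B$ fail to commute.

```python
import numpy as np
from scipy.linalg import expm

# Two arbitrary non-commuting matrices, purely for illustration.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

print(np.allclose(expm(A) @ expm(-A), np.eye(2)))       # True:  e^A e^{-A} = 1
print(np.allclose(expm(A) @ expm(B), expm(A + B)))      # False: A and B do not commute
print(np.allclose(expm(A) @ expm(2 * A), expm(3 * A)))  # True:  A commutes with 2A
```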
If you really want to do this just using power series, the identity you want is equivalent to proving that for every $k$ we have
$$\sum_{m+n=k} {k \choose m} (-1)^m = \begin{cases} 0 \text{ if } k \ge 1 \\ 1 \text{ if } k = 0 \end{cases}$$
which is an easy combinatorial identity, following for example from inclusion-exclusion, or from the binomial theorem applied to $(1 - 1)^k$.
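As a quick check of that identity (a sketch using nothing beyond the standard library):

```python
from math import comb

# sum_{m=0}^{k} C(k, m) (-1)^m should be 1 for k = 0 and 0 for every k >= 1,
# which is exactly what makes the Cauchy product of the two series collapse to 1.
for k in range(8):
    print(k, sum(comb(k, m) * (-1) ** m for m in range(k + 1)))
```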