[Physics] Energy Eigenfunction Completeness

hilbert-space, quantum-mechanics, schroedinger-equation, wavefunction

It's my understanding that eigenfunctions are complete (they span the space). I don't know what the solution to the (time-dependent) Schrödinger equation is, but whatever it is, any solution (no matter the potential $V$) can be expanded in terms of, say, position eigenfunctions or momentum eigenfunctions. I emphasize the phrase "no matter the potential $V$" with some doubt, because it ties into my question. Energy eigenfunctions can also be used to represent a general solution. However, this is where my question begins:

Consider a set of energy eigenfunctions $\psi_n$ which by definition satisfy $\hat{H}\psi_n = E_n\psi_n$. It seems to me that the sum $\Psi = \sum c_n\psi_n e^{-iE_nt/\hbar}$ is a general solution to the Schrödinger equation only when the potential $V$ in the Schrödinger equation matches the potential $V$ in the time-independent Schrödinger equation used to find the $\psi_n$'s. Is this correct? If it is, could one say that energy eigenfunctions are complete only with respect to the specific potential $V$ from which they are derived, while (say) momentum eigenfunctions are complete with respect to any potential? If this is not true, that is, if the energy eigenfunctions of the time-independent Schrödinger equation $\hat{H}\psi_n = E_n\psi_n$ are complete with respect to any potential $V$ used in the (time-dependent) Schrödinger equation, then why can't we use the $\psi_n$'s of, say, the infinite square well to construct general solutions $\Psi$ of the delta-function well, the finite potential well, the free particle, and so on? Why are we always solving the time-independent Schrödinger equation when we could just use the energy eigenfunctions of the infinite square well?

Best Answer

Before I begin, let me pause to observe that there is a slight lie in the following words (and, more or less, in the entire undergraduate curriculum) that one discovers when one gets very mathematically involved; right now it's flagged in this Wikipedia article as "subtleties of the unbounded case"... our "Hermitian" operators are not, in general, well-defined on all of the states we'd like. This becomes especially important when we look at position and momentum eigenstates -- very often such states are not normalizable, and the physics can get somewhat clunky when phrased in terms of them.

With that caveat, yes: the eigenfunctions of any given Hamiltonian are always a complete basis for the entire space. For example, one can attack any 1D Hamiltonian using the eigenfunctions of the harmonic oscillator; those are valid wavefunctions which span the space.
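
To make the completeness claim concrete, here is a minimal numerical sketch (not part of the original answer): it discretizes a 1D Hamiltonian by finite differences with $\hbar = m = 1$, takes the harmonic-oscillator eigenvectors as a basis, and checks that an unrelated wavepacket is exactly reconstructed from its expansion coefficients. The grid size, box length, and wavepacket are arbitrary illustrative choices.

```python
import numpy as np

# Finite-difference 1D Hamiltonian on a grid (units with hbar = m = 1).
N, L = 400, 20.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

def hamiltonian(V):
    """-(1/2) d^2/dx^2 + V(x), 3-point Laplacian, hard walls at the box edges."""
    lap = (np.diag(np.ones(N-1), -1) - 2*np.eye(N) + np.diag(np.ones(N-1), 1)) / dx**2
    return -0.5*lap + np.diag(V)

# H1: harmonic oscillator; its (discretized) eigenvectors form an orthonormal basis.
E1, psi1 = np.linalg.eigh(hamiltonian(0.5 * x**2))

# A state with no special relation to the oscillator: a displaced, boosted Gaussian.
phi = np.exp(-(x - 2.0)**2 + 1.5j * x)
phi /= np.linalg.norm(phi)

# Expand phi in the oscillator eigenbasis, then rebuild it from the coefficients.
c = psi1.conj().T @ phi
phi_rebuilt = psi1 @ c
print("reconstruction error:", np.linalg.norm(phi - phi_rebuilt))  # ~1e-15
```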

Whether this is useful or not is a different story. Let's say you have a set of functions satisfying $\hat H_1 |\psi_n\rangle = E_n |\psi_n\rangle$, but then you bring them to a new Hamiltonian $\hat H_2$. In general $|\psi_n\rangle$ is no longer an eigenvector of $\hat H_2$, and therefore its evolution under that new Schrödinger equation is not going to be $|\psi_n(t)\rangle = e^{-iE_n t/\hbar} |\psi_n(0)\rangle$, so these energies and wavefunctions are not obviously helpful in the new context.
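
A small sketch of this point, under the same assumed discretization and units as above: take the oscillator ground state, attach the $e^{-iE_0 t/\hbar}$ phase that would be correct under $\hat H_1$, and compare with the actual evolution under a different Hamiltonian $\hat H_2$ (a quartic well, chosen arbitrarily for illustration).

```python
import numpy as np
from scipy.linalg import expm

# Same finite-difference setup as above (hbar = m = 1).
N, L = 300, 20.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
lap = (np.diag(np.ones(N-1), -1) - 2*np.eye(N) + np.diag(np.ones(N-1), 1)) / dx**2

H1 = -0.5*lap + np.diag(0.5 * x**2)   # harmonic oscillator: supplies psi_n and E_n
H2 = -0.5*lap + np.diag(0.1 * x**4)   # a different potential (quartic, arbitrary choice)

E1, psi1 = np.linalg.eigh(H1)
phi0 = psi1[:, 0].astype(complex)     # oscillator ground state, energy E1[0]
t = 2.0

# Under H1 the state would only pick up a phase...
naive = np.exp(-1j * E1[0] * t) * phi0
# ...but under H2 it is not stationary, so the true evolution is different.
true = expm(-1j * H2 * t) @ phi0
print("||naive - true|| =", np.linalg.norm(naive - true))  # clearly nonzero
```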

Well, there is a way to make them useful, but of course it only really does a good job when $\hat H_1$ and $\hat H_2$ have some sort of nice relationship. A Schrödinger equation $i \hbar \partial_t |\Psi\rangle = \hat H |\Psi\rangle$ can be rephrased entirely in terms of a unitary time-evolution operator, $|\Psi(t)\rangle = \hat U(t) |\Psi_0\rangle.$ The condition is that $i\hbar \partial_t \hat U = \hat H \hat U,$ which is no problem in principle. This means that all of our expectation values in the second case take the form $$A(t) = \langle \Psi(t)|\hat A |\Psi(t)\rangle = \langle \Psi_0|\hat U_2^\dagger \hat A \hat U_2|\Psi_0\rangle.$$ Now, since a unitary operator satisfies $\hat U^\dagger \hat U = 1$, we can insert strategic $\hat U_1^\dagger \hat U_1$ factors to rewrite this same expectation value as $$A(t) = \langle \Psi_0|\hat U_1^\dagger ~\Big(\hat U_1 \hat U_2^\dagger \hat A \hat U_2 \hat U_1^\dagger\Big)\hat U_1 |\Psi_0\rangle.$$

Note that the parenthesized operator $\tilde A = \hat U_1 \hat U_2^\dagger \hat A \hat U_2 \hat U_1^\dagger$ now has some complicated time dependence, but the outermost wavefunctions obey the Schrödinger equation for $\hat H_1$, not $\hat H_2$. The cost is that we have to shift our operators to carry this complicated time dependence $\tilde A(t)$, which takes the form of a really big product rule, $$\begin{align} i\hbar\partial_t \tilde A = &H_1 U_1 U^\dagger_2 \hat A U_2 U_1^\dagger - U_1 U^\dagger_2 H_2 \hat A U_2 U_1^\dagger + i\hbar\tilde{\dot A} + \\ &U_1 U^\dagger_2 \hat A H_2 U_2 U_1^\dagger - U_1 U^\dagger_2 \hat A U_2 U_1^\dagger H_1. \end{align}$$ (The minus signs on the daggered terms come from taking the conjugate transpose of the earlier equation, $i\hbar\,\partial_t \hat U^\dagger = -\hat U^\dagger \hat H$.)
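
Since the step above is just repeated insertion of $\hat U_1^\dagger \hat U_1 = 1$, it can be sanity-checked with small random Hermitian matrices. The following is a toy verification, not part of the original answer; the matrices $H_1$, $H_2$, $\hat A$ and the initial state are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

n, t, hbar = 4, 0.7, 1.0
H1, H2, A = (random_hermitian(n) for _ in range(3))
psi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi0 /= np.linalg.norm(psi0)

U1 = expm(-1j * H1 * t / hbar)   # propagator for H1
U2 = expm(-1j * H2 * t / hbar)   # propagator for H2

# Ordinary Schrödinger-picture expectation value under H2:
lhs = psi0.conj() @ (U2.conj().T @ A @ U2) @ psi0

# Same number, with the state evolved by H1 and the operator dressed as
# A~ = U1 U2^dagger A U2 U1^dagger:
A_tilde = U1 @ U2.conj().T @ A @ U2 @ U1.conj().T
rhs = (U1 @ psi0).conj() @ A_tilde @ (U1 @ psi0)

print(abs(lhs - rhs))  # ~1e-15: the inserted U1^dagger U1 factors change nothing
```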

Going "whole hog" with this requires replacing those $\hat A$ operators with $U_2 U_1^\dagger \tilde A U_1 U_2^\dagger$ which yields: $$\begin{align} i\hbar\partial_t \tilde A = &H_1 \tilde A - U_1 U^\dagger_2 H_2 U_2 U_1^\dagger \tilde A + i\hbar\tilde{\dot A} + \\ &\tilde A U_1 U_2^\dagger H_2 U_2 U_1^\dagger - \tilde A H_1. \end{align}$$We see that the only complicated thing that's left is that we also need $\tilde H_2$ to factor into these expressions, rather than the $t=0$ value of $H_2$. Once that's done we find just$$i\hbar\partial_t \tilde A = [H_1 - \tilde H_2, \tilde A] + i\hbar\tilde{\dot A}.$$ This is called an "interaction picture" because usually what we do is we use some easy-to-solve orthogonal states $\hat H_1 = H_0$ and then add some interaction term which couples them, $\hat H_2 = H_0 + V.$ The equation for a time-independent $\tilde H_2$ is just $$i\hbar\partial_t \tilde H_2 = [H_1 - \tilde H_2, \tilde H_2] = [H_1, \tilde H_2],$$and in many cases this just slaps some phase factors around the terms in $V$. Then, instead of calculating all-new basis states we can use the ones that are most familiar, instead preferring to find differential equations for the observables that we're interested in and solving them in some limits.
