Are separable solutions to the Schrödinger equation always complete?

hilbert-space, quantum-mechanics, schroedinger-equation, superposition, wavefunction

I'm just starting to learn quantum mechanics, and the book I'm reading (Griffiths) states that every solution to the Schrödinger equation can be written as a linear combination of the separable solutions:
$$
\Psi(x,t) = \sum_{n=1}^{\infty}c_n\psi_n(x)e^{-iE_nt/\hbar}.
$$

However, it does not provide a proof that the set of $\psi_n(x)$ forms a complete basis, even for arbitrary $V(x)$. I understand that completeness can be proven in specific cases such as the infinite square well, the simple harmonic oscillator, et cetera, by solving for the separable solutions first. I also know that if $V$ depends on time, the whole separation-of-variables scheme falls apart. My question is: do the separable solutions always form a complete basis, even for arbitrary time-independent $V(x)$? If so, what does the proof look like?
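
For concreteness, here is a minimal finite-difference sketch (Python/numpy, with $\hbar = m = 1$; the particular potential and initial state are arbitrary choices) of the kind of numerical check that motivates the question: diagonalize a discretized Hamiltonian and see whether an arbitrary wavepacket is reproduced by its expansion in the eigenvectors.

```python
import numpy as np

# Discretize H = -1/2 d^2/dx^2 + V(x) on a grid with hard walls (hbar = m = 1).
N, L = 400, 10.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

V = 0.5 * x**2 + 2.0 * np.sin(3 * x)  # some "arbitrary" potential
D2 = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N) + np.diag(np.ones(N - 1), -1)) / dx**2
H = -0.5 * D2 + np.diag(V)

E, psi = np.linalg.eigh(H)  # columns of psi: orthonormal (grid) eigenvectors psi_n

# An arbitrary normalized initial state: an off-center Gaussian wavepacket.
Psi0 = np.exp(-((x - 1.0) ** 2)) * np.exp(2j * x)
Psi0 /= np.sqrt(np.sum(np.abs(Psi0) ** 2) * dx)

c = psi.conj().T @ Psi0   # grid analogue of c_n = <psi_n | Psi(0)>
reconstruction = psi @ c  # grid analogue of sum_n c_n psi_n(x)

print("max |Psi0 - sum_n c_n psi_n| =", np.max(np.abs(Psi0 - reconstruction)))
```

(The residual comes out at machine precision, as it must: on a finite grid the eigenvectors of the Hermitian matrix $H$ automatically form a complete orthonormal basis. The question is whether this survives in the continuum for arbitrary $V(x)$.)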

Best Answer

This result is called the spectral theorem. For a finite-dimensional Hilbert space $\mathscr H$, the statement is that given any self-adjoint$^\ddagger$ operator $H$, there exists an orthonormal basis $\{\hat e_i\}$ consisting of eigenvectors of $H$, and that all of the corresponding eigenvalues are real.
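
Before the proof, a quick numerical illustration of the statement (a numpy sketch on an arbitrary random Hermitian matrix, not part of the argument): the eigenvalues come out real, the eigenvectors are orthonormal, and together they diagonalize $H$.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary self-adjoint (Hermitian) matrix: H = M + M^dagger.
M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
H = M + M.conj().T

eigvals, U = np.linalg.eigh(H)  # columns of U: eigenvectors of H

print(np.allclose(U.conj().T @ U, np.eye(5)))    # orthonormal eigenvectors
print(np.allclose(H @ U, U @ np.diag(eigvals)))  # H e_i = lambda_i e_i
print(eigvals)                                   # the eigenvalues are all real
```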

The proof of this statement goes as follows.

  1. By the fundamental theorem of algebra, $\mathrm{det}(H-\lambda \mathbb I)=0$ has at least one solution - say, $\lambda_1$. This implies that there exists at least one non-zero vector $\hat e_1$ (which we normalize for convenience) such that $(H-\lambda_1 \mathbb I) \hat e_1 = 0 \iff H\hat e_1 = \lambda_1 \hat e_1.$
  2. Because $H$ is self-adjoint, we have $$\lambda_1 = \langle \hat e_1,H\hat e_1\rangle = \langle H \hat e_1 ,\hat e_1 \rangle = \overline{\lambda_1} \implies \lambda_1\in \mathbb R$$
  3. Let $\{\hat e_1\}^\perp$ denote the orthogonal complement of $\hat e_1$ - that is, the set of all vectors $v\in\mathscr H$ such that $\langle \hat e_1,v\rangle = 0$. Because $H$ is self-adjoint, we have that $$\langle \hat e_1,Hv\rangle = \langle H\hat e_1,v\rangle = \lambda_1 \langle \hat e_1,v\rangle = 0 \implies Hv \in \{\hat e_1\}^\perp$$ We say that $\{\hat e_1\}^\perp$ is invariant under the action of $H$. As a result, if we let $\hat e_1$ be the first element of our orthonormal basis, then $H$ takes the block form $$H = \begin{pmatrix}\lambda_1 & \begin{matrix}0 &\cdots&0\end{matrix}\\ \begin{matrix}0\\\vdots\\ 0\end{matrix} & H'\end{pmatrix}$$ where $H'$ is an $(n-1)\times (n-1)$ self-adjoint matrix. This process can be repeated for $H'$ and so on, eventually yielding a diagonal matrix with real entries and the claimed basis of eigenvectors (a toy numerical sketch of this deflation is given just below).
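
Here is a toy numpy sketch of that deflation (step 1's eigenvalue is simply taken from a library routine, since the proof only needs its existence; the recursion then mirrors steps 2 and 3):

```python
import numpy as np

def deflate(H):
    """Diagonalize a self-adjoint matrix by deflation: peel off one eigenpair,
    rotate it into the first basis vector, and recurse on the block H'."""
    n = H.shape[0]
    if n == 1:
        return np.array([H[0, 0].real]), np.eye(1, dtype=complex)
    # Step 1: one eigenvalue lambda_1 and a unit eigenvector e_1
    # (existence is what the fundamental-theorem-of-algebra argument gives).
    lam = np.linalg.eigvalsh(H)[0]
    _, _, Vh = np.linalg.svd(H - lam * np.eye(n))
    e1 = Vh[-1].conj()                          # unit null vector of H - lam*I
    # Step 3: a unitary Q whose first column is (proportional to) e_1 ...
    rest = np.random.default_rng(0).normal(size=(n, n - 1))
    Q, _ = np.linalg.qr(np.column_stack([e1, rest]))
    Hrot = Q.conj().T @ H @ Q                   # ... puts H in the block form above
    evals_rest, U_rest = deflate(Hrot[1:, 1:])  # recurse on the (n-1) x (n-1) block H'
    evals = np.concatenate(([lam], evals_rest))
    U_block = np.block([[np.ones((1, 1)), np.zeros((1, n - 1))],
                        [np.zeros((n - 1, 1)), U_rest]])
    return evals, Q @ U_block                   # columns: orthonormal eigenvectors of H

# Check on an arbitrary Hermitian matrix.
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = M + M.conj().T
evals, U = deflate(H)
print(np.allclose(U.conj().T @ H @ U, np.diag(evals), atol=1e-8))
```

In exact arithmetic this is precisely the recursion in the proof; in practice one would of course just call `np.linalg.eigh` directly.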

For an infinite-dimensional Hilbert space $\mathscr H$, this situation becomes more complicated because the spectrum $\sigma$ of an arbitrary operator can consist of discrete points (called the point spectrum, $\sigma_p$) as well as a continuum (called the continuous spectrum, $\sigma_c$).

If the spectrum of $H$ is pure point (so $\sigma_c = \emptyset$), then the proof is similar in spirit to the finite-dimensional case, though technicalities arise if $H$ is unbounded; the conclusion is the same, except that the basis now contains infinitely many elements. If the spectrum of $H$ contains a continuous part, then even more technicalities arise and the full machinery of functional analysis is required; in physics, this operationally corresponds to the appearance of non-normalizable (or generalized) eigenstates, such as the ones which appear for the free-particle Hamiltonian $H:= \frac{\hat P^2}{2m}$.
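
To make the continuous case concrete (a numerical sketch with $\hbar = m = 1$; all parameter choices are mine): for the free particle, the sum over $c_n\psi_n(x)e^{-iE_nt/\hbar}$ is replaced by a Fourier integral over the non-normalizable plane waves $e^{ikx}$, and "completeness" becomes the statement that any square-integrable wavepacket is recovered from its Fourier transform. On a finite grid the FFT stands in for that integral:

```python
import numpy as np

# Free particle (hbar = m = 1): the generalized eigenstates are plane waves e^{ikx}
# with E = k^2/2, and the eigenfunction expansion is the Fourier transform.
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

Psi0 = np.exp(-((x + 5.0) ** 2)) * np.exp(3j * x)  # Gaussian wavepacket, mean momentum ~3
Psi0 /= np.sqrt(np.sum(np.abs(Psi0) ** 2) * dx)

phi = np.fft.fft(Psi0)                    # "expansion coefficients" over the continuum

print(np.allclose(np.fft.ifft(phi), Psi0))  # completeness: Psi0 is recovered exactly

# Time evolution: each plane wave just acquires the phase e^{-i E(k) t} = e^{-i k^2 t / 2}.
t = 1.5
Psi_t = np.fft.ifft(phi * np.exp(-1j * k**2 * t / 2))
print(np.sum(np.abs(Psi_t) ** 2) * dx)      # the norm stays 1, as it must
```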


$^\ddagger$It's easy to show that if $H\neq H^\dagger$ but $[H,H^\dagger]=0$, then $H=A + i B$, where $A := (H+H^\dagger)/2$ and $B := (H-H^\dagger)/2i$ are commuting self-adjoint operators. This allows the proof above to be generalized to so-called normal operators; the only thing that changes is that the spectrum of $H$ may be complex.
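
A quick numerical check of this footnote (a numpy/scipy sketch; the particular normal matrix is an arbitrary example of mine):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(2)

# A normal but non-self-adjoint matrix: any polynomial (with complex
# coefficients) in a single Hermitian matrix K is automatically normal.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
K = M + M.conj().T
H = K @ K + 2j * K + (1 + 1j) * np.eye(4)

print(np.allclose(H, H.conj().T))                   # False: H is not self-adjoint
print(np.allclose(H @ H.conj().T, H.conj().T @ H))  # True:  [H, H^dagger] = 0

A = (H + H.conj().T) / 2   # self-adjoint
B = (H - H.conj().T) / 2j  # self-adjoint
print(np.allclose(A @ B, B @ A))                    # True: A and B commute, H = A + iB

# Consequence: H still has an orthonormal eigenbasis, now with complex eigenvalues
# (the complex Schur form of a normal matrix is diagonal).
T, Z = schur(H, output='complex')
print(np.allclose(T, np.diag(np.diag(T))), np.diag(T))  # True, plus the complex spectrum
```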