There are three answers posted but so far nobody has posted the obvious physical interpretation. The energy eigenstates all have the particle spread out within the box, stationary in time. If you want the particle to bounce back and forth between the walls of the box, then you do this by combining eigenstates. The simplest case is just to mix the ground state with the first excited state. If you look carefully at the resulting wave function, you should see that it bounces back and forth between the left and right hand sides of the box.

In this example, the wave function still isn't very precisely localized at any instant, but if you want to do better you just add more eigenstates.
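The sloshing of such a two-state superposition is easy to see numerically. The sketch below (an illustration, not from the original answer) uses the standard infinite-square-well eigenstates $\psi_n(x)=\sqrt{2/L}\,\sin(n\pi x/L)$ with energies $E_n = n^2\pi^2\hbar^2/(2mL^2)$, in units where $\hbar=m=L=1$, and tracks the expectation value $\langle x\rangle(t)$:

```python
import numpy as np

# Infinite square well of width L: psi_n(x) = sqrt(2/L) sin(n pi x / L),
# E_n = n^2 pi^2 hbar^2 / (2 m L^2).  Units chosen so hbar = m = L = 1.
hbar, m, L = 1.0, 1.0, 1.0

def psi_n(n, x):
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

def E_n(n):
    return n**2 * np.pi**2 * hbar**2 / (2 * m * L**2)

x = np.linspace(0, L, 1000)

def expectation_x(t):
    # Equal-weight superposition of the ground and first excited states,
    # each eigenstate carrying its own phase factor exp(-i E_n t / hbar).
    Psi = (psi_n(1, x) * np.exp(-1j * E_n(1) * t / hbar)
         + psi_n(2, x) * np.exp(-1j * E_n(2) * t / hbar)) / np.sqrt(2)
    prob = np.abs(Psi)**2
    return np.trapz(prob * x, x)

# <x> oscillates about the centre L/2 with period 2 pi hbar / (E_2 - E_1):
period = 2 * np.pi * hbar / (E_n(2) - E_n(1))
print(expectation_x(0.0))         # displaced to one side of the centre
print(expectation_x(period / 2))  # displaced to the other side
print(expectation_x(period))      # back where it started
```

The interference term $\psi_1\psi_2\cos\!\big((E_2-E_1)t/\hbar\big)$ is what moves $\langle x\rangle$ back and forth; each eigenstate alone would keep it pinned at $L/2$.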

But how can we guarantee that two distinct solutions $\boldsymbol {\psi_1}$ and $\boldsymbol {\psi_2}$ to the time-dependent equation don't have $\boldsymbol {\psi_1(x,0)} = \boldsymbol {\psi_2(x,0)}$? If we can't guarantee this, then how do we know that the solution found by Griffiths's method is unique?

I interpret your question as asking how we know that the time-dependent Schrödinger equation with a given initial condition has a unique solution. To show this, we use a standard method for proving uniqueness for a linear differential equation: we show that if two solutions agree at one time, then their difference is identically $0$.

The time-dependent Schrödinger equation is linear; that means that if $\Psi_1$ and $\Psi_2$ are solutions to the time-dependent Schrödinger equation, then so is $\alpha_1 \Psi_1 + \alpha_2 \Psi_2$ for any $\alpha_1, \alpha_2$. In particular, $\Psi=\Psi_1-\Psi_2$ is a solution. So we have
$$
i\hbar \partial_t \Psi = \hat{H} \Psi,\quad\Psi(0)=0.
$$
Now we compute the time derivative of the norm:
$$
\begin{align*}
\partial_t \langle\Psi, \Psi\rangle
&= \langle\partial_t \Psi, \Psi\rangle + \langle\Psi, \partial_t \Psi\rangle \\
&= \left\langle -\frac{i}{\hbar} \hat{H} \Psi, \Psi \right\rangle + \left\langle \Psi, -\frac{i}{\hbar} \hat{H} \Psi \right\rangle \\
&= \frac{i}{\hbar}\langle\hat{H}\Psi,\Psi\rangle - \frac{i}{\hbar} \langle \Psi, \hat{H} \Psi\rangle \\
&= \frac{i}{\hbar}\langle\Psi,\hat{H}\Psi\rangle - \frac{i}{\hbar} \langle \Psi, \hat{H} \Psi\rangle \\
&= 0,
\end{align*}
$$
where the next-to-last equality uses the fact that the Hamiltonian is Hermitian. We now have a differential equation for the *real function* $f$ defined by $f(t) \equiv \langle\Psi(t),\Psi(t)\rangle$,
$$
\partial_t f = 0 \, .
$$
Zero derivative means that $f(t)$ is a constant. Therefore,
$$f(t)=f(0)=\left<\Psi(0),\Psi(0)\right>=0 \, .$$
Therefore, $\Psi=0$, so $\Psi_1=\Psi_2$.
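The same argument can be checked numerically on a finite-dimensional stand-in for the Hamiltonian. The sketch below (my illustration, with $\hbar=1$ and a random Hermitian matrix playing the role of a discretised $\hat{H}$) shows that the norm $\langle\Psi,\Psi\rangle$ is conserved under $i\,\partial_t\Psi=H\Psi$, so two solutions with the same initial condition can never separate:

```python
import numpy as np

# A random Hermitian matrix standing in for a discretised Hamiltonian (hbar = 1).
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
H = (A + A.conj().T) / 2                      # Hermitian by construction

def evolve(psi0, t):
    # Exact solution of i dpsi/dt = H psi:  psi(t) = exp(-i H t) psi0,
    # computed via the spectral decomposition H = V diag(w) V^dagger.
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))

psi0 = rng.normal(size=8) + 1j * rng.normal(size=8)
psi_t = evolve(psi0, 3.7)

# <Psi, Psi> is conserved, so the difference of two solutions that start
# equal has norm 0 for all t, i.e. the solutions coincide.
print(np.vdot(psi0, psi0).real)
print(np.vdot(psi_t, psi_t).real)
```

Hermiticity of `H` is doing all the work here, exactly as in the derivation above: replace `H` by a non-Hermitian matrix and the norm is no longer conserved.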

## Best Answer

These orthogonal states are energy eigenstates. Every measurable quantity provides an orthogonal basis of eigenstates. The physical meaning of their orthogonality is that, when energy (in this example) is measured while the system is in one such state, there is no chance of finding the system in a different one. Thus, expanding a general state as $\sum_n c_n \psi_n$, the probability of observing state $n$ upon making such a measurement is $c_n^\ast c_n = |c_n|^2$.

A similar analysis of two consecutive measurements, be they of the same observable or of different observables, can be used to derive the probability distribution for the second measurement's result. This requires understanding the state's time dependence between measurements. The energy eigenstates' probability distribution doesn't change over time, as each $c_n$ is simply multiplied by the unit complex number $e^{-iE_n t/\hbar}$ over a time $t$.
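That last point is a one-liner to verify: multiplying each coefficient by a unit complex number leaves $|c_n|^2$ untouched. The coefficients and energies below are made-up illustrative values (with $\hbar=1$):

```python
import numpy as np

# Coefficient of eigenstate n at time t is c_n * exp(-i E_n t / hbar), hbar = 1.
c = np.array([0.6, 0.8j, 0.0])     # illustrative expansion coefficients
E = np.array([1.0, 4.0, 9.0])      # illustrative energies
t = 2.5
c_t = c * np.exp(-1j * E * t)

print(np.abs(c)**2)    # measurement probabilities at t = 0
print(np.abs(c_t)**2)  # identical at t = 2.5: the distribution is stationary
```

The relative phases between the $c_n$ do change, which is exactly what makes superpositions like the bouncing particle above evolve, even though each individual probability $|c_n|^2$ is frozen.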