Is it possible to "construct" the Hamiltonian of a system if its ground state wave function (or functional) is known? I understand one should not expect this to be generically true since the Hamiltonian contains more information (the full spectrum) than a single state vector. But are there any special cases where it's possible to obtain the Hamiltonian? Some examples would be really helpful.

# [Physics] Is it possible to reconstruct the Hamiltonian from knowledge of its ground state wave function

ground-state, hamiltonian, hilbert-space, quantum-mechanics, wavefunction

#### Related Solutions

Just "plug into the equation" is always a bad idea. So here is a short overview:

- Given a Hamiltonian, the possible energy levels correspond to the eigenvalues of the Hamiltonian (no "plugging in" needed). More precisely, we have $H|\psi\rangle=E|\psi\rangle$ for every eigenvector.
- Given a normalized eigenvector, you can find the corresponding energy by $\langle \psi |H|\psi \rangle$; otherwise you have to normalize first (divide by $\langle \psi|\psi \rangle$).
- In the density matrix formalism, this means that given a state $\rho$ (a positive semidefinite matrix with trace one; otherwise normalize the trace), the expectation value of the energy is given by $\operatorname{tr}(\rho H)$.
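The two ways of computing the energy expectation value agree for a pure state, since $\operatorname{tr}(|\psi\rangle\langle\psi| H) = \langle\psi|H|\psi\rangle$. A minimal numpy sketch with a hypothetical $2\times 2$ Hamiltonian:

```python
import numpy as np

# Hypothetical toy Hamiltonian: diagonal with eigenvalues +1 and -1.
H = np.array([[1.0, 0.0],
              [0.0, -1.0]])

# An unnormalized state vector; normalize it first.
psi = np.array([1.0, 1.0])
psi = psi / np.linalg.norm(psi)

# Expectation value of the energy, <psi|H|psi>.
E = psi.conj() @ H @ psi

# Same state as a density matrix rho = |psi><psi|; then <H> = tr(rho H).
rho = np.outer(psi, psi.conj())
E_rho = np.trace(rho @ H)

print(E, E_rho)  # both 0.0: equal weight on the +1 and -1 eigenstates
```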

So the question is: what do you mean by "I am not given a traditional wavefunction to normalize"?

EDIT: To me, it seems that you are given a perfectly reasonable wave function (in matrix formulation, though). An electron that sits just at site $j$ will have corresponding wave function $|e_j\rangle$, where $e_j$ denotes the $j$-th basis vector (i.e. $|e_j\rangle=(0,\ldots, 0,1,0,\ldots, 0)^T$ with the $1$ at position $j$). Following your assignment, this tells you that the wave function of your particle looks like $$ |a\rangle=\sum_{i=1}^N a|e_i\rangle,$$ where $a$ is a complex number and $N|a|^2=1$ for normalization.

In order to find the probabilities, you can now either compute the eigenvectors of $H$ and then decompose $|a\rangle$ in terms of these eigenvectors, or you can compute the spectral decomposition of $H$, i.e. the eigenvalues $\lambda_i$ and projectors $P_i$ such that $H=\sum_{i=1}^n \lambda_i P_i$ and compute $\langle a|P_i|a\rangle$ to obtain the probability of measuring $\lambda_i$.
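The spectral-decomposition route can be sketched numerically. Here is a minimal numpy example, assuming a hypothetical 3-site tight-binding Hamiltonian (hopping matrix) and the uniform state $|a\rangle$ described above; for nondegenerate eigenvalues, $\langle a|P_i|a\rangle$ reduces to $|\langle v_i|a\rangle|^2$ for each eigenvector $v_i$:

```python
import numpy as np

# Hypothetical 3-site hopping Hamiltonian (tight-binding toy model).
H = np.array([[0., -1., 0.],
              [-1., 0., -1.],
              [0., -1., 0.]])

# Uniform state |a> = sum_i a |e_i> with N |a|^2 = 1.
N = 3
a = np.ones(N) / np.sqrt(N)

# Spectral decomposition: eigenvalues lam_i and orthonormal eigenvectors.
lam, U = np.linalg.eigh(H)

# Probability of measuring lam_i is |<v_i|a>|^2 (nondegenerate spectrum here).
probs = np.abs(U.conj().T @ a) ** 2

print(lam)    # eigenvalues: -sqrt(2), 0, sqrt(2)
print(probs)  # probabilities, summing to 1
```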

How is this all the information of the system?

We make it include all the information. For instance, if the particle has spin, we give the wavefunction enough information to describe the spin, in addition to enough information to tell us the relative frequencies of different positions, and enough information to tell us the relative frequency of every possible measurement outcome.

You have a function that is operated on by an operator. The operator's eigenvalues are the possible results reported by the measurement device, and the relative squared norm of the projection onto the eigenspace associated with an eigenvalue is the relative frequency of getting that result. The state is left in a renormalized version of its projection onto that eigenspace.

Since the operators are self-adjoint (Hermitian), the eigenspaces are orthogonal, so the sum of the squared norms of the orthogonal projections onto the eigenspaces is one. And repeated measurements give the same result.

This tells you the results and the frequencies of the different results. That's everything. And the states are the things acted on by the operators, so the states tell you everything, they tell you which operators give which results (the eigenvalues) and with what frequency (ratio of square of projection and square of original) and what happens (the state is projected).

The state did this for every operator since the operators act on the space of states.

Some people turn it all around, start with the operators and then define a state as something that acts on every operator. And sometimes people want to include mixed states too.

If some operator and state couldn't go together that would be a problem.

How can we even know that it contains all information?

If you wanted a theory that predicts the individual results then you'd want more (but it isn't clear how you could do better than the frequencies, since you don't have access to more than statistical information).

Could it be true that the wave function in fact describes the probabilities of all variables related to the system, but we usually talk about position and momentum?

Yes and no. It isn't just a probability (or even a probability). The sizes of the projections tell you the frequency of getting various outcomes. But since you end up projecting, you also end up changing the state. This can affect later results. So it's not just a sample space and some random variables acting on it; that would not explain why the order in which you measure different things matters.
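The order dependence can be seen with the smallest possible example: two non-commuting spin observables. This is a minimal numpy sketch (the `measure` helper is hypothetical, not a library function); starting in a $\sigma_z$ eigenstate, $\sigma_z$ is certain, but after first measuring $\sigma_x$ the $\sigma_z$ statistics become 50/50:

```python
import numpy as np

# Pauli matrices: two non-commuting observables.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def measure(psi, A):
    """Return eigenvalues of A, outcome probabilities for state psi,
    and the post-measurement states (eigenvectors, nondegenerate case)."""
    vals, vecs = np.linalg.eigh(A)
    amps = vecs.conj().T @ psi
    probs = np.abs(amps) ** 2
    return vals, probs, [vecs[:, i] for i in range(len(vals))]

psi = np.array([1.0, 0.0], dtype=complex)  # sz eigenstate, eigenvalue +1

# Measuring sz directly: outcome +1 with certainty.
_, p_z, _ = measure(psi, sz)

# Measure sx first and keep the +1 outcome; the projected state is an sx
# eigenstate, so a subsequent sz measurement is 50/50.
_, p_x, states_x = measure(psi, sx)
psi_after = states_x[1]                    # post-measurement state for sx = +1
_, p_z_after, _ = measure(psi_after, sz)

print(p_z)        # [0. 1.]   -> sz = +1 with certainty
print(p_z_after)  # [0.5 0.5] -> after measuring sx, sz is 50/50
```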

Can the uncertainty principle be valid for other variables than just position and momentum?

Yes, it is. For instance, the uncertainty principle generalizes (as the Robertson relation) to any two observables.
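The general statement is the Robertson inequality, $\Delta A\,\Delta B \geq \tfrac{1}{2}|\langle[\hat A,\hat B]\rangle|$, for any pair of observables and any state. A small numpy check with spin operators, where the bound happens to be saturated:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def stdev(A, psi):
    """Standard deviation of observable A in (normalized) state psi."""
    mean = (psi.conj() @ A @ psi).real
    mean_sq = (psi.conj() @ A @ A @ psi).real
    return np.sqrt(mean_sq - mean ** 2)

psi = np.array([1.0, 0.0], dtype=complex)  # spin-up along z

# Robertson relation: dA * dB >= |<[A, B]>| / 2.
lhs = stdev(sx, psi) * stdev(sy, psi)
comm = sx @ sy - sy @ sx                   # [sx, sy] = 2i sz
rhs = 0.5 * abs(psi.conj() @ comm @ psi)

print(lhs, rhs)  # 1.0 1.0 -- the bound is saturated for this state
```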

## Best Answer

If you know that your Hamiltonian is of the form $$ \hat H=\frac{-\hbar^2}{2m}\nabla^2+V(\mathbf r) \tag 1 $$ for a single massive, spinless particle, then yes, you can reconstruct the potential, and from it the Hamiltonian, up to a few constants, given any eigenstate. To be more specific, the ground state $\Psi_0(\mathbf r)$ obeys $$ \hat H\Psi_0(\mathbf r) =\frac{-\hbar^2}{2m}\nabla^2\Psi_0(\mathbf r)+V(\mathbf r)\Psi_0(\mathbf r) =E_0 \Psi_0(\mathbf r), $$ which means that if you know $\Psi_0(\mathbf r)$ then you can calculate its Laplacian to get $$ \frac{ \nabla^2 \Psi_0(\mathbf r) }{ \Psi_0(\mathbf r) } = \frac{2m}{\hbar^2}\left(V(\mathbf r)-E_0\right). $$ If you know the particle's mass, then you can recover $V(\mathbf r)-E_0$, and this is all you really need (since adding a constant to the Hamiltonian does not change the physics).

However, it's important to note that this procedure guarantees that your initial $\Psi_0$ will be an eigenstate of the resulting Hamiltonian, but it does not preclude the possibility that $\hat H$ will admit a separate ground state with lower energy. As a very clear example of that, if $\Psi_0$ is a 1D function with a node, then (because 1D ground states have no nodes) you are guaranteed a unique $V(x)$ such that $\Psi_0$ is an eigenstate, but it will never be the ground state.
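The reconstruction $V-E_0 = \frac{\hbar^2}{2m}\frac{\nabla^2\Psi_0}{\Psi_0}$ is easy to try numerically. A sketch in units $\hbar=m=1$, using the harmonic-oscillator ground state $\Psi_0=e^{-x^2/2}$ as a test case, for which the known answer is $V-E_0 = x^2/2 - 1/2$ (the edge handling below is deliberately crude and the accuracy is limited by the finite-difference step):

```python
import numpy as np

# Grid and a known ground-state wavefunction (harmonic oscillator).
x = np.linspace(-3, 3, 2001)
dx = x[1] - x[0]
psi0 = np.exp(-x ** 2 / 2)

# Second derivative by central finite differences.
lap = np.empty_like(psi0)
lap[1:-1] = (psi0[2:] - 2 * psi0[1:-1] + psi0[:-2]) / dx ** 2
lap[0], lap[-1] = lap[1], lap[-2]          # crude edge handling

# From H psi0 = E0 psi0:  psi0'' / psi0 = 2 (V - E0)  when hbar = m = 1.
V_minus_E0 = 0.5 * lap / psi0

# Compare against the exact answer x^2/2 - 1/2 away from the edges.
err = np.max(np.abs(V_minus_E0[1:-1] - (x[1:-1] ** 2 / 2 - 0.5)))
print(err)  # small: limited by the finite-difference error
```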

If you don't know that your Hamiltonian has that structure, there is (in the general case) no information at all that you can extract about the Hamiltonian from just the ground state.

As a simple example, without staying too far from our initial Hamiltonian in $(1)$, consider that Hamiltonian in polar coordinates, $$\hat H=\frac{-\hbar^2}{2m}\left(\frac{1}{r^2}\frac{\partial}{\partial r} r^2\frac{\partial}{\partial r} + \frac{1}{\hbar ^2r^2}L^2\right)+V(r),$$ where I'm assuming $V(\mathbf r)=V(r)$ is spherically symmetric, and encapsulating the angular dependence into the total angular momentum operator $L^2$.

Suppose, then, that I give you its ground state, and that it is an eigenstate of $L^2$ with eigenvalue zero (like e.g. the ground state of the hydrogenic Hamiltonian). How do you tell if the Hamiltonian that created it is $H$ or a similar version, $$\hat H{}'=\frac{-\hbar^2}{2m}\frac{1}{r^2}\frac{\partial}{\partial r} r^2\frac{\partial}{\partial r} +V(r),$$ with no angular momentum component? Both versions will have $\Psi_0$ as a ground state (though here $\hat H'$ will have a wild degeneracy on every eigenspace, to be fair). Carrying on with this thought, what about $$\hat H{}''=\frac{-\hbar^2}{2m}\left(\frac{1}{r^2}\frac{\partial}{\partial r} r^2\frac{\partial}{\partial r} + f(r)L^2\right)+V(r),$$ where I've introduced an arbitrary real function $f(r)$ behind the angular momentum? This won't affect the $\ell=0$ states, but it will take the rest of the spectrum to who knows where. (In fact, you can even tack on an arbitrary function of $L_x$, $L_y$ and $L_z$, while you're at it.)

A bit more generally, *any* self-adjoint operator which vanishes on $\left|\Psi_0\right>$ can be added to the Hamiltonian to get you an operator that has $\left|\Psi_0\right>$ as an eigenstate. As a simple construction, given any self-adjoint operator $\hat A$, the combination $$\hat H {}''' = E_0 \left|\Psi_0\right>\left<\Psi_0\right| + \left(\mathbf 1 - \left|\Psi_0\right>\left<\Psi_0\right| \right) \hat A \left(\mathbf 1 - \left|\Psi_0\right>\left<\Psi_0\right| \right) $$ (where the factors in brackets are there to modify $\hat A$ into vanishing at $\left|\Psi_0\right>$ and its conjugate) will always have $\left|\Psi_0\right>$ as an eigenstate.

Even if you know *all* the eigenstates, it's still not enough information to reconstruct the Hamiltonian, because they do not allow you to distinguish between, say, $\hat H$ and $\hat{H}{}^2$. On the other hand, if you know all the eigenstates and their eigenvalues, then you can simply use the spectral decomposition to reconstruct the Hamiltonian.

In general, if you really insist, there is probably a trade-off between what you know about the Hamiltonian's structure (e.g. "of the form $\nabla^2+V$" versus no information at all) and how many of the eigenstates and eigenvalues you need to fully reconstruct it (a single pair versus the whole thing), particularly if you allow for approximate reconstructions. Depending on where you put one slider, you'll get a different reading on the other one.
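The $\hat H{}'''$ construction is a one-liner in finite dimensions. A numpy sketch with a random (hypothetical) state and Hermitian $\hat A$, checking that $\left|\Psi_0\right>$ comes out as an eigenstate with eigenvalue $E_0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# A normalized "ground state" and an arbitrary Hermitian A (both arbitrary).
psi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi0 /= np.linalg.norm(psi0)
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = M + M.conj().T

E0 = -1.0
P = np.outer(psi0, psi0.conj())            # projector |psi0><psi0|
Q = np.eye(n) - P                          # complementary projector

# H''' = E0 |psi0><psi0| + Q A Q  has psi0 as an eigenstate, eigenvalue E0
# (though not necessarily as its *ground* state: Q A Q may dip below E0).
H3 = E0 * P + Q @ A @ Q

print(np.linalg.norm(H3 @ psi0 - E0 * psi0))  # ~0, up to roundoff
```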

However, unless you have a specific problem to solve (like reconstructing a Hamiltonian of vaguely known form from a specific set of finite experimental data) then it's definitely not worth it to explore the details of this continuum of trade-offs beyond the knowledge that it exists and the extremes I noted above.
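For completeness, the favourable extreme mentioned earlier, knowing all eigenstates and eigenvalues, really does pin the operator down via the spectral decomposition $\hat H=\sum_i \lambda_i |v_i\rangle\langle v_i|$, while the eigenvectors alone do not (since $\hat H$ and $\hat H^2$ share them). A small numpy sketch with a random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.normal(size=(n, n))
H = M + M.T                                # a random real symmetric "Hamiltonian"

lam, U = np.linalg.eigh(H)                 # eigenvalues and eigenvectors

# With all eigenpairs, H = sum_i lam_i |v_i><v_i| rebuilds the operator exactly.
H_rebuilt = sum(lam[i] * np.outer(U[:, i], U[:, i]) for i in range(n))

# The eigenvectors alone cannot: H^2 has the same eigenvectors but differs.
H_sq = U @ np.diag(lam ** 2) @ U.T

print(np.linalg.norm(H - H_rebuilt))       # ~0
```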