During my quantum mechanics lectures and in literature I sometimes hear that "the wave function, $\Psi$, contains *all* information of the system". This has made me feel rather puzzled so I hope you have some good explanations of what this really means. I know that the wave function describes a *state* and if we are talking about electrons this state is usually a probability of finding the particle at a certain position. How is this *all* the information of the system? How can we even know that it contains all information? Could it be true that the wave function in fact describes the probabilities of all variables related to the system, but we usually talk about position and momentum? Can the uncertainty principle be valid for other variables than just position and momentum?

# [Physics] How does the wave function contain all information of a system

quantum-mechanics, wavefunction

#### Related Solutions

Although the concept of a state can be well defined, at some level it takes a certain amount of abstraction to really understand what a state is. Conceptually, it is easier to start in a classical context, where a state is simply a particular configuration of the objects used to describe a system. For instance, a light switch can be in the "on" state or the "off" state. In quantum mechanics the situation is a little more complicated, because we add a level of abstraction that allows us to consider superposed states, where our knowledge of the switch is insufficient and we must consider it to be in an "on and off" state. However, this is not a classical state in the sense that we could ever observe the switch in the "on and off" state; it is a quantum state that exists in an abstract space called Hilbert space.

Every state of a system is represented by a ray (or vector) in Hilbert space. Hilbert space is probably most simply understood by constructing a basis that spans the space (i.e. one sufficient to describe every point in the space), so that any state is a summation of independent basis functions with complex coefficients. Any state, or ray in Hilbert space, can then be written using Dirac's bra-ket notation.

The ket is more commonly used, and a state is represented as $|\psi\rangle$. It is important to understand that the symbol inside the ket ($\psi$) is an arbitrary label; although there are commonly accepted labels used throughout physics, in general the label can be anything a person wants it to be.

In the case of a state being projected onto some basis, we can write this mathematically as: $$|\psi\rangle = \sum_i |i\rangle\langle i|\psi\rangle$$ In this representation $\langle i|\psi\rangle$ takes on the role of a set of complex coefficients $c_i$, where $|i\rangle$ represents each of the basis states.
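As a concrete (and entirely hypothetical) illustration of this expansion, here is a pure-Python sketch for a two-level system, with lists of complex numbers standing in for kets; the basis, the state, and the numbers are my own invented example:

```python
# Sketch: expanding a state in an orthonormal basis of a two-level system,
# using plain Python lists of complex numbers to stand in for kets.

def inner(bra, ket):
    # <bra|ket> = sum_j conj(bra_j) * ket_j
    return sum(b.conjugate() * k for b, k in zip(bra, ket))

# orthonormal basis |0>, |1>
basis = [[1 + 0j, 0j], [0j, 1 + 0j]]

# some normalized state |psi>
psi = [(3 / 5) + 0j, (4 / 5) * 1j]

# coefficients c_i = <i|psi>
coeffs = [inner(i, psi) for i in basis]

# reconstruct |psi> = sum_i |i> <i|psi>
reconstructed = [sum(c * i_vec[j] for c, i_vec in zip(coeffs, basis))
                 for j in range(2)]

print(coeffs)          # the complex coefficients c_0, c_1
print(reconstructed)   # matches psi, as the expansion formula says
```

The reconstruction line is the sum $\sum_i |i\rangle\langle i|\psi\rangle$ written out component by component.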

In the early development of quantum mechanics, describing atoms and predicting their properties was the main goal. Many of the questions physicists were interested in centered around energy, position and momentum transitions. Because of this, most quantum descriptions of reality are centered around finding a means of representing the energy and momentum states of particles, particularly of the electrons surrounding a nucleus. The quantum mechanical description of electrons in an atom is therefore focused on describing the probabilities of finding an electron in a particular orbital state. The state vector is thus used to represent a ray in Hilbert space that encodes the probability amplitude (a complex number whose squared modulus is a probability) of finding an electron in a particular orbital state (e.g. of position, momentum, or spin).

This is an example of applying quantum mechanics to help resolve a particular physical problem. I make this distinction because quantum mechanics is simply a means to an end, and thus must be understood as a tool used to describe a particular physical situation and to predict certain physical outcomes as the system evolves. One of the core debates of the 20th century was whether quantum mechanics could provide a complete description of the universe. The answer to this question is yes, and it has been affirmed in repeated experiments.

I will answer this part:

> In addition, we know that the Hamiltonian represents the sum of kinetic and potential energy in a system. However, I'm not quite sure why, intuitively, the time dependent version of the Schrodinger equation becomes $H\psi = i\hbar\frac{\partial}{\partial t}\psi(r,t)$.

Quantum mechanics was developed slowly, because experiments showed that light from the hydrogen atom came in quanta. At that time physicists were still thinking classically, and Bohr developed a model of the hydrogen atom with an electron rotating around a proton, similar to the way the moon rotates around the earth. But there was a problem with this: in classical electricity and magnetism the electron would not stay in an orbit but would lose energy and fall onto the proton.

Bohr *postulated* that the electron was a standing wave, and *postulated* only certain allowed orbits; electrons could fall from one to the other, emitting a photon of energy $h\nu$ (where $\nu$ is the frequency). That the photon's energy came as $h\nu$ was known from the photoelectric effect and from black body radiation. The model then explained the spectrum of hydrogen, which had been fitted with a series.

This is how $h$ enters the game: the model has to take into account that an electron changing energy levels will release energy $h\nu$ from the system.

The Schrodinger equation gives the same series as a solution to the hydrogen problem, but now it comes from a theory which is much more general; $h$ necessarily has to play its role.

Quantum mechanics has a number of postulates:

1. Associated with any particle moving in a conservative field of force is a wave function which determines everything that can be known about the system.

2. With every physical observable $q$ there is associated an operator $Q$, which when operating upon the wavefunction associated with a definite value of that observable will yield that value times the wavefunction.

3. Any operator $Q$ associated with a physically measurable property $q$ will be Hermitian.

4. The set of eigenfunctions of operator $Q$ will form a complete set of linearly independent functions.

5. For a system described by a given wavefunction, the expectation value of any property $q$ can be found by performing the expectation value integral with respect to that wavefunction.

6. The time evolution of the wavefunction is given by the time dependent Schrodinger equation.
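The eigenvalue postulate and the expectation-value postulate above can be illustrated with a toy two-level example (my own construction, using $\sigma_z$ as the hypothetical observable):

```python
# Toy example: the observable Q is sigma_z; its eigenstate |up> returns
# eigenvalue +1, and a superposition gives a probability-weighted average.

def apply(op, ket):
    # matrix-vector product Q|psi>
    return [sum(op[r][c] * ket[c] for c in range(len(ket)))
            for r in range(len(op))]

def inner(bra, ket):
    # <bra|ket>
    return sum(b.conjugate() * k for b, k in zip(bra, ket))

Q = [[1, 0], [0, -1]]              # Hermitian observable (sigma_z)

# Eigenvalue postulate: Q|up> = (+1)|up>
up = [1 + 0j, 0j]
print(apply(Q, up))                # the eigenvalue +1 times |up>

# Expectation value <psi|Q|psi> in a superposition
psi = [(3 / 5) + 0j, (4 / 5) + 0j]
expval = inner(psi, apply(Q, psi))
print(round(expval.real, 10))      # 0.36 * (+1) + 0.64 * (-1) = -0.28
```

The expectation-value integral of the postulate becomes, in this finite-dimensional stand-in, the inner product $\langle\psi|Q|\psi\rangle$.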

The second of these postulates is what relates to your question.

> where does the $i\hbar$ come from? why does the sum of kinetic and potential energy equal that?

The $\hbar$ comes in so that the dimensions work out and the energy of the photon comes out correctly as $h\nu$; the complex $i$ so that the equation has the form that will give the appropriate solutions.

The operator for the time dependent Hamiltonian is $i\hbar\,\partial/\partial t$.

So $H\psi = i\hbar\,\partial\psi/\partial t$ is an identity, used to solve for a time dependent $\psi$.
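As a sanity check (my own toy example, in units with $\hbar = 1$): for the Hamiltonian $H = \sigma_x$ on a two-level system, the state $|\psi(t)\rangle = \cos(t)\,|0\rangle - i\sin(t)\,|1\rangle$ is the exact evolution, and we can verify numerically that it satisfies $i\,\partial\psi/\partial t = H\psi$:

```python
# Verify the time dependent Schrodinger equation i dpsi/dt = H psi
# for H = sigma_x, using a central finite difference for the derivative.
import cmath

def psi(t):
    # exact evolution under H = sigma_x (hbar = 1)
    return [cmath.cos(t), -1j * cmath.sin(t)]

def H_apply(ket):
    # H = sigma_x simply swaps the two components
    return [ket[1], ket[0]]

t, dt = 0.7, 1e-6
lhs = [1j * (a - b) / (2 * dt)        # i * dpsi/dt via central difference
       for a, b in zip(psi(t + dt), psi(t - dt))]
rhs = H_apply(psi(t))

assert all(abs(l - r) < 1e-5 for l, r in zip(lhs, rhs))
print("i dpsi/dt = H psi holds componentwise")
```

Both sides agree to the accuracy of the finite difference, which is the content of the identity above for this particular $H$.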

The formalism was developed by trial and error in the beginning, fitting the models to the data and then using the models to predict further behaviors. The successful fitting of the same spectral series as the Bohr model led to the development of quantum mechanics, rather than the theory coming first and then looking at the data.

The real answer is that this mathematical formulation fits the data and has great predictive power proven over and over again.

## Best Answer

We make it include all the information. For instance, if the particle has spin, we give the wavefunction information to describe the spin, in addition to enough information to tell us the relative frequency of different positions and enough information to tell us the relative frequency of every possible measurement outcome.

You have a function that is operated on by an operator. The operator's eigenvalues are the possible reported results that are coupled to the measurement device, and the relative squared norm of the projection onto the eigenspace associated with an eigenvalue is the relative frequency of getting that result. The state is left in a renormalized version of its projection onto the eigenspace.

Since the operators are self-adjoint (Hermitian), the eigenspaces are orthogonal, so the sum of the squared norms of the orthogonal projections onto the eigenspaces is one. And repeated measurements give the same result.

This tells you the possible results and the frequencies of the different results. That's everything. And the states are the things acted on by the operators, so the states tell you everything: they tell you which operators give which results (the eigenvalues), with what frequency (the ratio of the squared projection to the squared original), and what happens afterwards (the state is projected).

The state did this for every operator since the operators act on the space of states.

Some people turn it all around: they start with the operators and then define a state as something that acts on every operator. And sometimes people want to include mixed states too.

If some operator and state couldn't go together, that would be a problem.

If you wanted a theory that predicts the individual results, then you'd want more (but it isn't clear how to do better than the frequencies, since you don't have enough information to get beyond statistical predictions).

Yes and no. It isn't just a probability (indeed, it isn't itself a probability). The sizes of the projections tell you the frequencies of the various outcomes. But since you end up projecting, you also end up changing the state, and this can affect later results. So it's not just a sample space with some random variables acting on it; that alone would not explain why the order in which you measure different things matters.
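A quick numeric check of this order-dependence (my own example, using the Pauli matrices $\sigma_x$ and $\sigma_z$ as the two observables):

```python
# sigma_x and sigma_z do not commute, so the order of operations matters;
# no classical joint sample space can reproduce this.

def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

sx = [[0, 1], [1, 0]]    # sigma_x
sz = [[1, 0], [0, -1]]   # sigma_z

xz = matmul(sx, sz)
zx = matmul(sz, sx)
print(xz)                # sigma_x sigma_z
print(zx)                # sigma_z sigma_x, not the same
assert xz != zx          # [sigma_x, sigma_z] != 0
```

Because the two products differ, measuring $\sigma_x$ then $\sigma_z$ leaves the system in a different state than the reverse order.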

Yes, it is. For instance, here is a description (by me) of the uncertainty principle for any two observables.
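The general statement being referred to is the Robertson uncertainty relation: for any two observables $A$ and $B$, and any state,

$$\sigma_A\,\sigma_B \;\ge\; \frac{1}{2}\,\bigl|\langle [A,B]\rangle\bigr|, \qquad [A,B] = AB - BA.$$

For $A = x$ and $B = p$, the commutator is $[x,p] = i\hbar$, and this reduces to the familiar $\sigma_x\,\sigma_p \ge \hbar/2$; for commuting observables the right-hand side can vanish, and there is no uncertainty trade-off.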