# Quantum Mechanics – What Do Orbitals in Atoms with Multiple Electrons Mean?

atomic-physics, atoms, orbitals, quantum-mechanics

I am wondering about this. Orbital diagrams for the hydrogen atom are a familiar sight; depictions of them abound, so there is no need to reproduce them here.

However, what about the "orbitals" of bigger, more complex atoms, say, helium? I asked about this on another forum, but the single answer I received was less than illuminating.

It is said that the Schrödinger equation for these more complex atoms is not solvable exactly or in closed form. However, we shouldn't need an exact, closed-form solution to draw the orbitals, since drawings are going to be approximate anyway; not to mention that non-relativistic QM can only take us so far before we need to get into relativistic and QED (relativistic field theory) territory. So why don't such drawings exist?

If we examine the Schrödinger equation for the next atom up from hydrogen, helium, we get the following Hamiltonian operator (with a simplified, fixed-in-place nucleus; this was copied from Wikipedia with the appropriate constants restored, but it looks right, since it has all the expected kinetic and potential energy terms):

$$\hat{H} = -\frac{\hbar^2}{2m_e} \nabla^2_{r_1} - \frac{\hbar^2}{2m_e} \nabla^2_{r_2} - \frac{2e^2}{4\pi \epsilon_0 r_1} - \frac{2e^2}{4\pi \epsilon_0 r_2} + \frac{e^2}{4\pi \epsilon_0 r_{12}}$$

and the trouble comes from the cross term, the very last term above, where $$r_{12} = |\mathbf{r}_1 - \mathbf{r}_2|$$ is the distance between the two electrons. This goes into the Schrödinger equation,

$$\hat{H} \psi(\mathbf{r_1}, \mathbf{r_2}) = E \psi(\mathbf{r_1}, \mathbf{r_2}),$$

which shows that the state (wave) function $$\psi$$ is a six-dimensional function. Now this is weird: "orbitals" are distributions in three dimensions. Does this mean the orbitals of the helium atom are actually six-dimensional objects? What, then, does it mean to categorize them in the usual ways, e.g. this one is "$$1s$$", and for bigger atoms we have a "$$1s$$" and "$$2s$$", "$$2p$$", etc.? What is the meaning of using these "hydrogen-like" names? How do we know the solutions, which we cannot even write down exactly, carry the requisite number of quantum numbers, e.g. six in this case ($$n$$, $$l$$, $$m$$ for each electron)? How do we know they don't have even more? Or do they? Is this the reason orbital diagrams don't exist: because the orbitals are actually six-dimensional (even though the electrons occupy three-dimensional physical space, of course, the probability distribution has six dimensions)? What do the "hydrogen" names mean in light of this fact?

Also, even if the $$\psi$$-functions cannot be written out in closed form, couldn't we build them by, say, adding up six-dimensional spherical harmonics in some kind of nasty-looking infinite series expansion, which we could truncate after a few terms to get an approximate solution? If not, why not?

To recap and concentrate my inquiry from the above, the specific questions I am after are:

1. What is the reason there are no drawings of the helium orbitals, or lithium, or any higher atom? Because they are $$6$$-, $$9$$-, …, dimensional, because they cannot be solved exactly, both, something else?

2. What is the real meaning of the "$$1s$$", "$$2s$$", "$$2p$$", etc., notation in multielectron atoms, in light of the increasing dimensionality of the wavefunctions of bigger and badder atoms (three dimensions gained for every electron added to the system)? How do we know it is even meaningful in this context?

You're right on a lot of counts. The wavefunction of the system is indeed a function of the form $$\Psi=\Psi(\mathbf r_1,\mathbf r_2),$$ and there's no separating the two, because of the cross term in the Schrödinger equation. This means that it is fundamentally impossible to ask for things like "the probability amplitude for electron 1", because that depends on the position of electron 2. So at least a priori you're in a huge pickle.

The way we solve this is, to a large extent, to try to pretend that this isn't an issue - and somewhat surprisingly, it tends to work! For example, it would be really nice if the electronic dynamics were just completely decoupled from each other: $$\Psi(\mathbf r_1,\mathbf r_2)=\psi_1(\mathbf r_1)\psi_2(\mathbf r_2),$$ so you could have legitimate (independent) probability amplitudes for the position of each of the electrons, and so on. In practice this is not quite possible, because electron indistinguishability requires you to use an antisymmetric wavefunction: $$\Psi(\mathbf r_1,\mathbf r_2)=\frac{\psi_1(\mathbf r_1)\psi_2(\mathbf r_2)-\psi_2(\mathbf r_1)\psi_1(\mathbf r_2)}{\sqrt{2}}. \tag1$$ Suppose the eigenfunction were actually of this form. What could you do to obtain it? As a first go, you can solve the independent hydrogenic problems and pretend that you're done, but then you're missing the electron-electron repulsion. Better: solve the hydrogenic problem for electron 1, then insert its charge density into the single-electron Schrödinger equation for electron 2 and solve that; but then you'd need to go back and re-solve for electron 1 in the field of your new $\psi_2$. You can repeat this procedure for a long time and see if it converges to something sensible - a self-consistent solution.

Alternatively, you could make reasonable guesses for $\psi_1$ and $\psi_2$ with some variable parameters, and then minimize $⟨\Psi|H|\Psi⟩$ over those parameters, in the hope that the minimum lands relatively close to the ground state (the variational method).
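For helium this can be carried out with a single parameter. With the standard textbook trial state in which both electrons occupy a hydrogenic $1s$ orbital $e^{-Zr}$, with the effective charge $Z$ as the only variational parameter, the energy expectation value works out to $E(Z)=Z^2-\tfrac{27}{8}Z$ in Hartrees, and the minimization is trivial (the grid scan below is just to make the procedure explicit):

```python
# Variational estimate of the helium ground state (atomic units).
# Trial wavefunction: both electrons in a hydrogenic 1s orbital exp(-Z*r),
# with the effective nuclear charge Z as the single parameter. The standard
# textbook result for the energy expectation value is
#   E(Z) = Z**2 - (27/8)*Z   (Hartrees),
# where Z**2 collects the kinetic terms, -4Z the nuclear attraction,
# and +(5/8)Z the electron-electron repulsion.

def energy(Z: float) -> float:
    return Z * Z - (27.0 / 8.0) * Z

# Scan a grid of Z values; dE/dZ = 0 gives Z = 27/16 = 1.6875 exactly.
Zs = [1.0 + 0.0001 * i for i in range(20000)]
Z_opt = min(Zs, key=energy)
E_opt = energy(Z_opt)

print(f"optimal effective charge Z = {Z_opt:.4f}")  # 27/16 = 1.6875
print(f"variational energy E = {E_opt:.4f} Ha")     # -2.8477; exact: -2.9037
```

The optimal $Z<2$ has a neat physical reading: each electron partially screens the nucleus from the other, and the variational principle finds the best effective charge.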

These approaches, and others like them, are the core of the Hartree-Fock methods. They make the fundamental assumption that the electronic wavefunction is as separable as it can be - a single Slater determinant, as in equation $(1)$ - and try to make that work as well as possible. Somewhat surprisingly, perhaps, this can be really quite close for many intents and purposes. (In other situations, of course, it can fail catastrophically!)

In reality, of course, there's a lot more to take into account. For one, Hartree-Fock approximations generally don't account for 'electron correlation', which is a fuzzy term but essentially refers to terms of the form $⟨\psi_1\otimes\psi_2| r_{12}^{-1} |\psi_2\otimes\psi_1⟩$. More importantly, there is no guarantee that the system will be in a single configuration (i.e. a single Slater determinant), and in general your eigenstate could be a nontrivial superposition of many different configurations. This is a particular worry in molecules, but it's also required for a quantitatively correct description of atoms.

If you want to go down that route, it's called quantum chemistry, and it is a huge field. In general, the name of the game is to find a basis of one-electron orbitals which will be nice to work with, and then get to work numerically diagonalizing the many-electron Hamiltonian in that basis, with a multitude of methods to deal with multi-configuration effects. As the size of the basis increases (and potentially as you increase the 'amount of correlation' you include), the eigenstates and eigenenergies should converge to the true values.

Having said that, configurations like $(1)$ are still very useful ingredients of quantitative descriptions, and in general each eigenstate will be dominated by a single configuration. This is the sort of thing we mean when we say things like

> the lithium ground state has two electrons in the 1s shell and one in the 2s shell

which more practically says that there exist wavefunctions $\psi_{1s}$ and $\psi_{2s}$ such that (once you account for spin) the corresponding Slater determinant is a good approximation to the true eigenstate. This is what makes the shells and the hydrogenic-style orbitals useful in a many-electron setting.

However, a word to the wise: orbitals are completely fictional concepts. That is, they are unphysical and they are completely inaccessible to any possible measurement. (Instead, it is only the full $N$-electron wavefunction that is available to experiment.)

To see this, consider the state $(1)$ and transform it by replacing the orbitals $\psi_j$ with the normalized linear combinations $\psi_{1,2}'=(\psi_1\mp\psi_2)/\sqrt{2}$:

\begin{align} \Psi'(\mathbf r_1,\mathbf r_2) &=\frac{\psi_1'(\mathbf r_1)\psi_2'(\mathbf r_2)-\psi_2'(\mathbf r_1)\psi_1'(\mathbf r_2)}{\sqrt{2}} \\&=\frac{ (\psi_1(\mathbf r_1)-\psi_2(\mathbf r_1))(\psi_1(\mathbf r_2)+\psi_2(\mathbf r_2)) -(\psi_1(\mathbf r_1)+\psi_2(\mathbf r_1))(\psi_1(\mathbf r_2)-\psi_2(\mathbf r_2)) }{2\sqrt{2}} \\&=\frac{\psi_1(\mathbf r_1)\psi_2(\mathbf r_2)-\psi_2(\mathbf r_1)\psi_1(\mathbf r_2)}{\sqrt{2}} \\&=\Psi(\mathbf r_1,\mathbf r_2). \end{align}

That is, the Slater determinant that comes from linear combinations of the $\psi_j$ is indistinguishable from the one you get from the $\psi_j$ themselves. This extends to any basis change on that subspace with unit determinant; for more details see this thread. The implication is that labels like s, p, d, f, and so on are useful to describe the basis functions that we use to build the dominating configuration in a state, but they cannot be reliably inferred from the many-electron wavefunction itself. (This is as opposed to term symbols, which describe the global angular momentum characteristics of the eigenstate, and which can indeed be obtained from the many-electron eigenfunction.)
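This invariance is easy to verify numerically. The sketch below (assuming NumPy is available; vectors on a grid stand in for actual orbitals, and the function and variable names are mine) rebuilds the antisymmetrized two-electron function of equation $(1)$ from the mixed orbitals and confirms it is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # grid points for the discretized one-electron orbitals

# two arbitrary one-electron orbitals sampled on a grid
psi1 = rng.normal(size=n)
psi2 = rng.normal(size=n)

def slater(a, b):
    """Antisymmetrized two-electron function Psi(r1, r2) as an n x n array."""
    return (np.outer(a, b) - np.outer(b, a)) / np.sqrt(2)

Psi = slater(psi1, psi2)

# rebuild the determinant from the mixed orbitals (psi1 -+ psi2)/sqrt(2);
# the 2x2 mixing matrix has unit determinant, so Psi should be unchanged
Psi_mixed = slater((psi1 - psi2) / np.sqrt(2), (psi1 + psi2) / np.sqrt(2))

print(np.allclose(Psi, Psi_mixed))  # True
```

Any other $2\times 2$ mixing matrix with unit determinant would work just as well, which is exactly why the individual orbitals cannot be read off from the many-electron state.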