[Physics] The concept of Fermi level

fermi-energy, fermions, semiconductor-physics, solid-state-physics, statistical-mechanics

I have read and used the concept of the Fermi level before, and this is simply an attempt on my side to better my understanding of it by asking myself various questions. I understand that the Fermi distribution is defined as
$$f(E)=\frac{1}{1+e^{\frac{E-E_F}{kT}}},$$ so the Fermi energy $E_F$ is defined by $f(E_F)=0.5$.
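To make sure I have the definition right, here is a quick numerical check (my own sketch; units and numbers are arbitrary):

```python
import math

def fermi(E, E_F, kT):
    """Fermi-Dirac occupation probability f(E) = 1 / (1 + exp((E - E_F)/kT))."""
    return 1.0 / (1.0 + math.exp((E - E_F) / kT))

# At E = E_F the occupation is exactly 1/2, independent of temperature.
assert fermi(1.0, 1.0, 0.025) == 0.5
# Far below E_F the state is essentially full; far above, essentially empty.
assert fermi(0.0, 1.0, 0.025) > 0.999
assert fermi(2.0, 1.0, 0.025) < 0.001
```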

However, this definition does not pin the Fermi level to a single universal value, and I understand that it is a material property. For example, in a monocrystalline material the covalent or ionic bonds cause the atomic orbitals of neighbouring atoms to overlap, so their degenerate levels split, and this leads to the formation of bands (very closely spaced states) within the crystal. Here the Fermi level is defined as the brim, or edge, up to which the electrons fill these states at zero kelvin.

Obviously, since these are effectively continuous bands (a continuum of energy states with very small gaps between consecutive states), defining the Fermi level as either the last filled state or the next unfilled state at zero kelvin introduces very little error. But what about semiconductors, where the valence band is completely filled at zero kelvin and the Fermi level resides inside a considerable gap above it? Or individual atoms (for example an Si atom, where the difference between the 3p and 4s states is considerable)?

How do we pin down a definite value of $E_F$ in those circumstances? Or in molecules like $C_{60}$, where we have discrete states and the difference between the HOMO and LUMO levels is even more conspicuous? How do we assign a Fermi level value in these cases?

Best Answer

I don't know whether any of the following three sections will be review for you, so I have typed them all out as a sort of complete review of what we're talking about -- the Fermi energy is one contributor to the chemical potential, and once you know that, you are prepared to deal with semiconductors.

What is chemical potential?

Let's take a step back into thermodynamics: there exists a system with internal energy $U$, volume $V$, entropy $S$, and made up of particles with numbers $N_i$, and let's suppose that these are all of the macroscopic quantities that we care about. There is therefore some equation-of-state which relates these and allows us to understand all of these in one go,$$dU = T~dS - P~dV + \sum_i \mu_i~dN_i,$$ where we have introduced some other variables that we give very familiar names -- the "temperature" $T$, the "pressure" $P$, and this slightly unfamiliar name of the "chemical potential for particle species $i$", $\mu_i.$ One can "justify" these names as well, given the following argument: in isolation systems try to maximize their entropy; assuming that two systems $A$ and $B$ reversibly share volume, energy, and particles, with the constraint that e.g. $dU^A + dU^B = 0$, then we find that the total change in entropy is $$\begin{align}dS = dS^A + dS^B = \left(\frac{1}{T^A} - \frac{1}{T^B}\right)dU^A + \left(\frac{P^A}{T^A} - \frac{P^B}{T^B}\right) dV^A\\-\sum_i\left(\frac{\mu_i^A}{T^A} - \frac{\mu_i^B}{T^B}\right) dN_i^A.\end{align}$$ Energy will spontaneously flow into $A$ (in other words, $dU^A > 0$) if the entropy change is positive which happens if $T^B > T^A.$ At the same temperature, volume will spontaneously flow into $A$ if $P^A > P^B.$ And finally, again at the same temperature, particles will spontaneously flow into $A$ if $\mu_i^B > \mu_i^A.$
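As a quick numerical sanity check of the sign conventions above (a toy sketch of my own; the function name is mine): holding volumes and particle numbers fixed, the total entropy change for an energy transfer into $A$ is $dS = (1/T^A - 1/T^B)\,dU^A$, so energy spontaneously flows from the hotter to the colder system.

```python
# Toy check of the entropy-flow argument: when energy dU_A flows into
# system A out of system B (volumes and particle numbers fixed), the
# total entropy change is dS = (1/T_A - 1/T_B) * dU_A.  The transfer is
# spontaneous exactly when it increases the entropy.

def entropy_change(dU_A, T_A, T_B):
    """Total dS for an energy transfer dU_A into A, out of B."""
    return (1.0 / T_A - 1.0 / T_B) * dU_A

# Energy flowing into the colder system A (T_A < T_B): entropy rises.
assert entropy_change(+1.0, 300.0, 400.0) > 0
# The reverse transfer would lower the entropy, so it does not happen.
assert entropy_change(-1.0, 300.0, 400.0) < 0
```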

We can also see from the above definition the pivotal idea that $\mu_i$ is the energy to add one more particle of species $i$ while holding entropy and volume constant. (Equivalently, we could trade $N_i$ for moles $n_i$ and have a molar chemical potential, if you'd prefer.)

What is the Fermi surface? What does it have to do with chemical potential?

Recall that pressure lumps together any arbitrary effects that have the capacity to do work when you change the volume of the object -- "external-facing" kinetic energy and repulsive intermolecular forces and what have you. The actual causes of pressure can be very complicated; pressure itself is very simple: "to squeeze this requires a certain energy."

The same is true of chemical potential. It lumps together potentially a lot of different causes which ultimately make adding an electron (or an electron-hole or whatever else you've got) cost energy.

Here's the easiest way to shift the chemical potential of electrons: put a voltage on the system. The new chemical potential is the energy to bring an electron from 0 V up to the target voltage, plus the residual energy to get it into the system.

But internal kinetic energy has its part to play, too. You have some crystal lattice in $\vec r$-space which has a dual lattice in $\vec k$-space, and each of those lattice points is available to exactly two conduction electrons with opposite spins. If you have $N$ conduction electrons then we have to fill $N/2$ reciprocal-lattice sites, and we would expect to fill them lowest-energy-first. The mean-field effect of the periodic potential can be pretty much entirely dealt with by rescaling the mass of the electron into an "effective mass" $m_e$, and there are tables of these which you can look up for various lattices. All that's left is the kinetic energy, which goes like $\hbar^2 k^2/(2 m_e),$ and taking the $N/2$ minimum-energy states therefore amounts to filling up a ball in $k$-space with some radius $k_\text F$, the Fermi wavenumber. Every "Fermi"-anything has to do with this ball. So, for example, the Fermi energy is just the energy to add an electron at the surface of this ball, $\hbar^2 k_\text F^2/(2 m_e).$ And if you hear someone talk about a "Fermi wavelength" you might think, "oh, I bet $k_\text F = 2\pi/\lambda_\text F$," and you'll probably be right.
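A minimal free-electron estimate of this filling (my own sketch, using the bare electron mass rather than a looked-up effective mass): counting two electrons per $\vec k$ point inside the ball gives the textbook relation $k_\text F = (3\pi^2 n)^{1/3}$ for electron density $n$, and then $E_\text F = \hbar^2 k_\text F^2/(2m)$.

```python
import math

HBAR = 1.054571817e-34   # J s
M_E  = 9.1093837015e-31  # kg, bare electron mass (not an effective mass)
EV   = 1.602176634e-19   # J per eV

def fermi_wavenumber(n):
    """k_F for electron density n (in m^-3), free-electron model."""
    return (3.0 * math.pi**2 * n) ** (1.0 / 3.0)

def fermi_energy_eV(n, m=M_E):
    """E_F = hbar^2 k_F^2 / (2 m), converted to eV."""
    kF = fermi_wavenumber(n)
    return HBAR**2 * kF**2 / (2.0 * m) / EV

# Copper has roughly 8.5e28 conduction electrons per cubic metre; the
# free-electron model then puts its Fermi energy near 7 eV.
E_F_copper = fermi_energy_eV(8.5e28)
assert 6.5 < E_F_copper < 7.5
```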

So: Pauli exclusion forces the chemical potential of the electrons in a metal to be nonzero even when the crystal is perfectly electrically neutral, because any new electron must be added with a certain kinetic energy; there simply are no more available conduction states with $\|\vec k\| < k_\text F.$ That minimum kinetic-energy cost is the Fermi energy.

How does the Fermi distribution come into all of this?

The Fermi distribution is just a slight shifting of the above picture for nonzero temperature. The above suggests that all of the states with $\|\vec k\| < k_\text F$ are occupied and all of the other states are unoccupied. But in a real metal at some temperature $T$ there are vibrations of the crystal lattice -- phonons -- that scatter off of electrons. At first these can only kick electrons into states with $\|\vec k\| > k_\text F$, but after some of these scatterings there are "holes" with $\|\vec k\| < k_\text F$ left behind, until the two come to some sort of steady-state distribution.

Actually the laws of statistical mechanics are already enough to specify exactly what happens in thermal equilibrium. We return to the above expression and assume that one of the systems, $A$, is very large, that volumes $V$ are not changing, but that the smaller system $B$ can exchange electrons and energy with the bigger one. The bigger one has an entropy which we can just Taylor-expand to first order as $$S^A(E, N) = S_0 + \Delta E/T - \Delta N\mu/T.$$ Assuming that the smaller system's own entropy is more or less irrelevant and using Boltzmann's formula $S = k_\text B~\ln \Omega,$ one finds that the probability $P$ of a state of the smaller system, which is proportional to the multiplicity $\Omega$ of the larger one, is proportional to $\exp((\Delta E - \Delta N \mu)/(k_\text B~T)).$ Of course these are the $\Delta E$ and $\Delta N$ for the bigger system; to write them in terms of the smaller system we insert an overall minus sign, and we have the famous "Boltzmann factors".

Now any individual dual-lattice point (with a particular spin) has to be in exactly one of two states: either $N=0$ and $E=0$ (Boltzmann factor $=1$), or $N=1$ with energy $E$, so the Boltzmann factor is $\exp(-(E-\mu)/k_\text B T)$. Its probability of being filled at equilibrium is therefore exactly $$ f_{\mu, T}(E) = \frac{e^{-(E-\mu)/k_\text B T}}{1 + e^{-(E-\mu)/k_\text B T}} = \frac{1}{e^{+(E-\mu)/k_\text B T} + 1}.$$
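The two-state argument can be checked numerically (a small sketch; function names are mine, energies in arbitrary units):

```python
import math

def occupancy_from_boltzmann(E, mu, kT):
    """Occupation of one fermion state from its two Boltzmann factors:
    empty (N=0, E=0, weight 1) vs. filled (N=1, weight exp(-(E-mu)/kT))."""
    w_filled = math.exp(-(E - mu) / kT)
    return w_filled / (1.0 + w_filled)

def fermi_dirac(E, mu, kT):
    """Closed-form Fermi-Dirac distribution."""
    return 1.0 / (math.exp((E - mu) / kT) + 1.0)

# The two expressions agree for any state energy.
for E in (-0.3, 0.0, 0.1, 0.5):
    assert abs(occupancy_from_boltzmann(E, 0.0, 0.025)
               - fermi_dirac(E, 0.0, 0.025)) < 1e-12

# A state exactly at the chemical potential is half filled.
assert fermi_dirac(0.0, 0.0, 0.025) == 0.5
```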

Of course this is only for one state; in practice one also has a "density of states" function, and the two must be multiplied together to get the total number of electrons available at a certain energy. But now we finally come to your observation: for fermions the chemical potential has a secondary meaning as the exact energy at which a fermion state is occupied with probability 1/2.
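To illustrate that multiplication (a toy sketch with made-up units and a free-electron-like density of states $g(E)\propto\sqrt E$; all names are mine): integrating $g(E)\,f_{\mu,T}(E)$ at low temperature should reproduce the zero-temperature count obtained by filling every state up to $\mu$.

```python
import math

def fermi_dirac(E, mu, kT):
    x = (E - mu) / kT
    if x > 50:          # state far above mu: essentially empty (avoids overflow)
        return 0.0
    return 1.0 / (math.exp(x) + 1.0)

def electron_count(mu, kT, E_max=5.0, steps=200_000):
    """Midpoint-rule integral of g(E) * f(E) with DOS g(E) = sqrt(E)."""
    dE = E_max / steps
    total = 0.0
    for i in range(steps):
        E = (i + 0.5) * dE
        total += math.sqrt(E) * fermi_dirac(E, mu, kT) * dE
    return total

# As T -> 0 with mu = 1, the integral approaches the filled-states
# result: the integral of sqrt(E) from 0 to 1, which is 2/3.
assert abs(electron_count(1.0, 0.001) - 2.0 / 3.0) < 1e-3
```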

Now if we connect a big reservoir of electrons at some voltage to a small wire, electrons will flow into or out of that wire, giving it a slight charge, and the voltage in the wire will change disproportionately to the amount of charge that flowed until the wire comes to the same voltage as our source. This is, in some ways, just the wire coming to the same chemical potential as the source it is hooked up to. You then have a choice: you can define the new chemical potential as a new "Fermi surface," and thus make the two terms equivalent (but now the Fermi energy is not a material property, since I can change it by changing the voltage on the object); or you can define the Fermi surface to be the chemical potential when the thing is totally charge-neutral and has zero voltage relative to infinity, in which case it is a material property, but usually we need to think about chemical potentials, which often bump the Fermi level up or down by some voltage.

I'm going to choose the second, because redundant terminology is perhaps not a great idea.

How does this answer your question?

Well, for $C_{60}$, let's just ask the definitions: what is the chemical potential of some nanosystem? The energy to add an electron is the LUMO energy, and the energy recovered by removing one is the HOMO energy, so any sensible "Fermi level" sits between the two, and is often taken as their midpoint. (For people who are casually reading this and are not familiar with the jargon: these are the "highest occupied molecular orbital" and "lowest unoccupied molecular orbital.") In some sense it really is that simple.

But for semiconductors we run into a clearer version of the problem. We again have a similar sort of "lots of electrons in a crystal lattice," but there is a band gap $\Delta$ where the density of states falls to 0, and so it becomes very hard to say "oh, these states are occupied with such-and-such probability." The above definition of chemical potential actually says that we're at a place where the chemical potential is discontinuous: in the valence band the cost to add new electrons was $E$, and now it has jumped discontinuously to $E+\Delta.$ So that sucks; what can we do to "pick a number" in between to represent its Fermi energy when the material is neutral?

Well, these band gaps in practice are on the scale of an electron volt or so, whereas room temperature is closer to $26\text{ meV}$, so the regime $k_\text B T\ll \Delta$ holds to something like one and a half orders of magnitude. But there will still occasionally be some thermal electrons in the conduction band and some thermal holes in the valence band. Let's use these probabilities to find a meaningful "Fermi level" in the middle, $E < \mu < E+\Delta.$

Let's start with the conduction electrons; the large band gap lets us approximate $f_{\mu,T}(E+\Delta)\approx e^{-(E+\Delta-\mu)/k_\text B T}.$ Simple, right? For the valence electrons we instead look at the number of holes to get a similar small number, $$1 - f_{\mu, T}(E) = \frac{1}{1 + e^{-(E-\mu)/k_\text B T}} \approx e^{(E - \mu)/k_\text B T}.$$ These two probabilities, when multiplied, combine to form a constant, $e^{-\Delta/k_\text B T}$, no matter where the Fermi level sits within the gap. Again we actually have to multiply by the densities of states above and below the band gap to get actual numbers, but one still finds that $N_e~N_h = N_i^2$ for some constant $N_i$ known as the "intrinsic carrier density." This is called the "mass action law".

So what we do in practice with semiconductors is to measure the numbers of electrons and holes at some temperature; their ratio lets us solve for $\mu$, which we take to be the "intrinsic Fermi energy." Usually this happens to be close to the mid-gap energy $E+(\Delta/2)$, but with doping we can shift where exactly the Fermi level sits relative to those two edges.
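Concretely (a hedged sketch; the effective densities of states $N_c$, $N_v$ and all numbers are illustrative): in the Boltzmann-tail regime $n = N_c\,e^{-(E_c-\mu)/k_\text B T}$ and $p = N_v\,e^{-(\mu-E_v)/k_\text B T}$, so the ratio $n/p$ pins down $\mu$:

```python
import math

def mu_from_carriers(n, p, N_c, N_v, E_c, E_v, kT):
    """Invert the Boltzmann-tail expressions
       n = N_c exp(-(E_c - mu)/kT),  p = N_v exp(-(mu - E_v)/kT)
    for the chemical potential mu."""
    return 0.5 * (E_c + E_v) + 0.5 * kT * math.log((n * N_v) / (p * N_c))

kT = 0.026            # eV
E_v, E_c = 0.0, 1.1   # band edges, eV
N_c = N_v = 1.0e25    # equal effective densities of states (illustrative)

# Round-trip check: pick a mu, generate n and p, and recover mu.
mu_true = 0.62
n = N_c * math.exp(-(E_c - mu_true) / kT)
p = N_v * math.exp(-(mu_true - E_v) / kT)
assert abs(mu_from_carriers(n, p, N_c, N_v, E_c, E_v, kT) - mu_true) < 1e-9

# With n = p (intrinsic) and equal DOS, mu lands exactly at mid-gap.
n_i = math.sqrt(n * p)
assert abs(mu_from_carriers(n_i, n_i, N_c, N_v, E_c, E_v, kT) - 0.55) < 1e-9
```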

Then when we connect two of these systems together, e.g. in a PN-junction, we think of their Fermi energies as being the same (since it's just a chemical potential) and therefore their bands are shifted relative to each other.
