Why do the Fermi levels for electrons and holes coincide in equilibrium, and why do they separate into quasi-Fermi levels in non-equilibrium situations?
[Physics] Fermi level in equilibrium and non-equilibrium situations
semiconductor-physics
Related Solutions
I believe that the authors of the reference you provided explain the reason behind introducing these so-called quasi-Fermi levels at the end of section 4.3.3. For simplicity, let me just repeat it here; perhaps explaining it in different words, with a little more elaboration, would help. I'm sure you're aware that in $n$- ($p$-) doped semiconductors the Fermi level is closer to the conduction (valence) band rather than close to the middle of the band gap. To be more mathematically precise, the Fermi level of $n$- ($E_{F,e}$) and $p$-doped ($E_{F,h}$) semiconductors is given by $$E_{F,e}=E_{F,i}+k_{B}T\ln\left(\frac{n}{n_{i}}\right)$$ and $$E_{F,h}=E_{F,i}-k_{B}T\ln\left(\frac{p}{n_{i}}\right)$$ respectively. The quantities $E_{F,i}$, $k_B$, $T$, and $n_i$ are the intrinsic Fermi level, Boltzmann constant, temperature, and intrinsic carrier concentration respectively. The quantities $n\approx N_{D}-N_{A}$ and $p\approx N_{A}-N_{D}$, where $N_A$ and $N_D$ are the acceptor and donor concentrations.
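Just to make these formulas concrete, here is a minimal numerical sketch (not from the reference; the thermal energy, the silicon-like $n_i \approx 1.5\times10^{10}$ cm$^{-3}$, and the example donor concentration are all assumed values for illustration):

```python
import numpy as np

k_B_T = 0.02585          # eV, thermal energy at T = 300 K (assumed)
n_i = 1.5e10             # cm^-3, intrinsic carrier density (silicon-like, assumed)

def fermi_level_shift(n, n_i=n_i, kT=k_B_T):
    """Shift of the Fermi level from the intrinsic level, E_F - E_Fi = kT ln(n/n_i)."""
    return kT * np.log(n / n_i)

# Example: n-type sample with N_D - N_A = 1e16 cm^-3 (hypothetical doping level)
n = 1e16
print(f"E_F - E_Fi = {fermi_level_shift(n)*1000:.1f} meV above the intrinsic level")
```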
Now, when you have a steady light source irradiating your semiconductor sample, you create a large number of electron-hole pairs (EHPs) -- large compared to the number of thermally excited ones. You could think of this as doping the sample with equal numbers of electrons and holes; let us call this "photo-doping." In this photo-doping picture it is much easier to think of two separate quasi-Fermi levels for electrons and holes. You can notice that the above two equations are nothing but a rearrangement of equation (4-15) from the reference you provided. The reason they are called "quasi"-Fermi levels is that the Fermi level is only defined in equilibrium; in this particular case, though, the steady state can be treated as a different kind of equilibrium.
To answer your question as to why we don't have two quasi-Fermi levels in equilibrium, I would say: in principle you could define quasi-Fermi levels even when the EHPs are due purely to thermal excitation. However, the typical concentration of thermally excited EHPs is so small that it can be absorbed into the thermal broadening of the Fermi-Dirac distribution function. In a purely mathematical sense, you could incorporate the case of photo-excitation (or photo-doping) into the broadening of the Fermi-Dirac distribution as well, but then you run into the physically counterintuitive problem of defining what temperature means. First of all, a sample irradiated with light is not in thermal equilibrium. Secondly, since temperature, in the conventional sense, is associated with the thermal energy of the system, it only makes sense to incorporate thermally generated EHPs into the broadening of the Fermi-Dirac distribution.
In your particular example, we have an $n$-doped system, so the concentrations $n_0 = 10^{14}$ cm$^{-3}$ and $p_0 = 2.25 \times 10^{6}$ cm$^{-3}$ exist in thermal equilibrium. When (say) you shine light on it, photo-excitation adds excess carriers $\Delta n \approx \Delta p = 2 \times 10^{13}$ cm$^{-3}$. In this example the quasi-Fermi level of the electrons is not far from the equilibrium Fermi level, but the quasi-Fermi level of the holes is displaced from it significantly. This is a direct consequence of the fact that the minority-carrier concentration jumps by many orders of magnitude. Now, if we were to define separate Fermi-Dirac distribution functions for electrons and holes under this steady-state condition, and computed the respective Fermi-Dirac integrals using the densities of states of the conduction and valence bands, we would obtain the correct electron and hole concentrations ($n_0 + \Delta n \approx 1.2\times10^{14}$ cm$^{-3}$ and $p_0 + \Delta p \approx 2\times10^{13}$ cm$^{-3}$). This is another way of justifying two separate quasi-Fermi levels.
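Here is a small sketch of the arithmetic for this example, assuming room temperature ($k_BT \approx 25.85$ meV) and using the non-degenerate (logarithmic) formulas from above; the point is only to show how little the electron quasi-Fermi level moves compared to the hole one:

```python
import numpy as np

kT = 0.02585                        # eV at 300 K (assumed)
n0, p0 = 1e14, 2.25e6               # cm^-3, equilibrium concentrations from the example
n_i = np.sqrt(n0 * p0)              # intrinsic density implied by n0*p0 = n_i^2 (~1.5e10)

dn = dp = 2e13                      # cm^-3, photo-generated excess carriers
n, p = n0 + dn, p0 + dp             # steady-state concentrations

E_F0 = kT * np.log(n0 / n_i)        # equilibrium Fermi level, relative to E_Fi
E_Fn = kT * np.log(n / n_i)         # electron quasi-Fermi level, relative to E_Fi
E_Fp = -kT * np.log(p / n_i)        # hole quasi-Fermi level, relative to E_Fi

print(f"equilibrium E_F - E_Fi : {E_F0:+.3f} eV")
print(f"electron  E_Fn - E_Fi  : {E_Fn:+.3f} eV   (barely moved)")
print(f"hole      E_Fp - E_Fi  : {E_Fp:+.3f} eV   (far below the equilibrium E_F)")
print(f"splitting E_Fn - E_Fp  : {E_Fn - E_Fp:.3f} eV")
```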
It makes sense that the separation between the quasi-Fermi levels grows with the rate of EHP generation by photo-excitation. As a result, the separation between the quasi-Fermi levels, just as the authors say, is a measure of how far the system is out of equilibrium. So, in summary, I believe the best way to get an intuition for quasi-Fermi levels is the photo-doping picture. Although the authors don't mention this explicitly, I have a hunch that the concept of quasi-Fermi levels was inspired by conventional doping (with donor or acceptor atoms).
I don't know whether any of the following sections will be review for you, so I have typed them out as a fairly complete overview of what we're talking about: the Fermi energy is one contributor to the chemical potential, and once you know that, you are prepared to deal with semiconductors.
What is chemical potential?
Let's take a step back into thermodynamics: there is a system with internal energy $U$, volume $V$, entropy $S$, containing particles of species $i$ in numbers $N_i$, and let's suppose that these are all of the macroscopic quantities that we care about. There is therefore some equation of state which relates these and lets us understand all of them in one go,$$dU = T~dS - P~dV + \sum_i \mu_i~dN_i,$$ where we have introduced some other variables with very familiar names -- the "temperature" $T$, the "pressure" $P$, and the slightly less familiar "chemical potential for particle species $i$", $\mu_i.$ One can justify these names with the following argument: in isolation, systems try to maximize their entropy; assuming that two systems $A$ and $B$ reversibly share volume, energy, and particles, with constraints such as $dU^A + dU^B = 0$, we find that the total change in entropy is $$\begin{align}dS = dS^A + dS^B = \left(\frac{1}{T^A} - \frac{1}{T^B}\right)dU^A + \left(\frac{P^A}{T^A} - \frac{P^B}{T^B}\right) dV^A\\-\sum_i\left(\frac{\mu_i^A}{T^A} - \frac{\mu_i^B}{T^B}\right) dN_i^A.\end{align}$$ Energy will spontaneously flow into $A$ (in other words, $dU^A > 0$) if the entropy change is positive, which happens if $T^B > T^A.$ At the same temperature, volume will spontaneously flow into $A$ if $P^A > P^B.$ And finally, again at the same temperature, particles will spontaneously flow into $A$ if $\mu_i^B > \mu_i^A.$
We can also see from the above definition the pivotal idea that $\mu_i$ is the energy required to add one more particle of species $i$ while holding entropy and volume constant. (Equivalently, we could trade $N_i$ for the mole number $n_i$ and get a molar chemical potential, if you'd prefer.)
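As a toy illustration of "particles flow toward lower chemical potential," here is a short sketch using the standard classical ideal-gas chemical potential $\mu = k_BT\ln(n\lambda_{\text{th}}^3)$; helium at room temperature and the two densities are arbitrary choices for illustration, not anything from the discussion above:

```python
import numpy as np

k_B = 1.380649e-23       # J/K
h   = 6.62607015e-34     # J*s
eV  = 1.602176634e-19    # J

def mu_ideal_gas(n, T, m):
    """Chemical potential of a classical monatomic ideal gas (toy model)."""
    lam = h / np.sqrt(2 * np.pi * m * k_B * T)   # thermal de Broglie wavelength
    return k_B * T * np.log(n * lam**3)

m_He, T = 6.6465e-27, 300.0                      # helium atom mass (kg), room temperature
n_dense, n_dilute = 2.5e25, 2.5e24               # m^-3, two arbitrary densities

print(mu_ideal_gas(n_dense, T, m_He) / eV)       # less negative: higher mu
print(mu_ideal_gas(n_dilute, T, m_He) / eV)      # more negative: lower mu
# At equal T, particles diffuse from the dense side (higher mu) to the dilute side (lower mu).
```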
What is the Fermi surface? What does it have to do with chemical potential?
Recall that pressure lumps together any arbitrary effects that have the capacity to do work when you change the volume of the object -- "external-facing" kinetic energy and repulsive intermolecular forces and what have you. The actual causes of pressure can be very complicated; pressure itself is very simple: "to squeeze this requires a certain energy."
The same is true of chemical potential. It lumps together potentially a lot of different causes which ultimately make adding an electron (or an electron-hole or whatever else you've got) cost energy.
Here's the easiest way to shift the chemical potential for electrons: put the object at a voltage. The new chemical potential is the energy to get an electron from 0 V to the target voltage, plus the residual energy to get it into the system.
But internal kinetic energy has its part to play, too. You have some crystal lattice in $\vec r$-space which has some dual lattice in $\vec k$-space, and each of those lattice points is available for exactly two conduction electrons with opposite spins. If you have $N$ conduction electrons then we have to fill $N/2$ reciprocal-lattice sites, and we would expect to fill them lowest-energy-first. The mean-field effect of the periodic potential can be dealt with almost entirely by rescaling the mass of the electron into an "effective mass" $m_e$, and there are tables of these which you can look up for various lattices. All that's left is the kinetic energy, which goes like $\hbar^2 k^2/(2 m_e)$, and taking the $N/2$ minimum-energy states therefore amounts to filling up a ball in $k$-space out to some radius $k_\text F$, the Fermi wavenumber. Every "Fermi"-anything has to do with this ball. So for example the Fermi energy is just the energy to add an electron at the surface of this ball, $\hbar^2 k_\text F^2/(2 m_e).$ Or if you hear someone talk about a "Fermi wavelength" you might think, "oh, I bet $k_\text F = 2\pi/\lambda_\text F$" and you'll probably be right.
So: Pauli exclusion forces the chemical potential of the electrons in a metal to be nonzero even when the crystal is perfectly electrically neutral, because any new electron must be added with a certain kinetic energy; there simply are no available conduction states left with $\|\vec k\| < k_\text F.$ This minimum kinetic-energy cost is what we call the Fermi energy.
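As a quick numerical sketch of this free-electron picture (taking the effective mass equal to the bare electron mass and a textbook conduction-electron density for copper, both assumptions made here for illustration):

```python
import numpy as np

hbar = 1.054571817e-34      # J*s
m_e  = 9.1093837015e-31     # kg, bare electron mass (assuming m* = m_e)
eV   = 1.602176634e-19      # J

n = 8.5e28                  # m^-3, conduction-electron density of copper (textbook value)

# Filling the lowest-energy states up to the Fermi sphere gives
# N/V = k_F^3 / (3*pi^2), i.e. k_F = (3*pi^2*n)^(1/3).
k_F = (3 * np.pi**2 * n) ** (1 / 3)
E_F = hbar**2 * k_F**2 / (2 * m_e)

print(f"k_F = {k_F:.3e} 1/m")
print(f"E_F = {E_F / eV:.2f} eV")   # roughly 7 eV for copper
```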
How does the Fermi distribution come into all of this?
The Fermi distribution is just a slight modification of the above picture for nonzero temperature. The above suggests that all of the states with $\|\vec k\| < k_\text F$ are occupied and all of the other states are unoccupied. But in a real metal at some temperature $T$ there are vibrations of the crystal lattice -- phonons -- scattering off of electrons. At first these can only kick electrons into states with $\|\vec k\| > k_\text F$, but after a few such events there are "holes" with $\|\vec k\| < k_\text F$ left behind, until the two come to some sort of steady-state distribution.
Actually, the laws of statistical mechanics are already enough to specify exactly what happens in thermal equilibrium. We return to the expression above and assume that one of the systems, $A$, is very large, that the volumes $V$ are not changing, but that the smaller system $B$ can exchange electrons and energy with the bigger one. The bigger one has an entropy which we can Taylor-expand to first order as $$S^A(E, N) = S_0 + \Delta E/T - \Delta N\,\mu/T.$$ Assuming that the smaller system's own multiplicity is more or less irrelevant and using Boltzmann's formula $S = k_\text B~\ln \Omega,$ one finds that the probability of a state of the smaller system, which is proportional to the multiplicity $\Omega$ of the reservoir, is proportional to $\exp((\Delta E - \Delta N \mu)/(k_\text B~T)).$ Of course these $\Delta E$ and $\Delta N$ are for the bigger system, so to express things in terms of the smaller system we insert an overall minus sign, and we have the famous "Boltzmann factors".
Now any individual dual-lattice point (with a particular spin) has to be in exactly one of two states: either empty, with $N=0$ and $E=0$ (Boltzmann factor 1), or occupied, with $N=1$ and energy $E$, so that the Boltzmann factor is $\exp(-(E-\mu)/k_\text B T)$. Its probability of being filled at equilibrium is therefore exactly $$ f_{\mu, T}(E) = \frac{e^{-(E-\mu)/k_\text B T}}{1 + e^{-(E-\mu)/k_\text B T}} = \frac{1}{e^{+(E-\mu)/k_\text B T} + 1}.$$
Of course this is only for one state; in practice one also has a "density of states" function, and the two must be multiplied together to get the total number of electrons at a given energy. But now we finally come to your conclusion: for fermions, the chemical potential has a secondary interpretation as the exact energy at which the probability of a state being occupied is 1/2.
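A tiny sketch of this distribution, with an arbitrary assumed $\mu$ and room-temperature $k_\text B T$, just to show that the occupation is exactly 1/2 at $E=\mu$ and falls off over a few $k_\text B T$:

```python
import numpy as np

def fermi_dirac(E, mu, kT):
    """Occupation probability of a single-particle state of energy E (E, mu, kT in eV)."""
    return 1.0 / (np.exp((E - mu) / kT) + 1.0)

mu, kT = 5.0, 0.02585                      # assumed chemical potential, room-temperature kT
print(fermi_dirac(mu, mu, kT))             # exactly 0.5 at E = mu
print(fermi_dirac(mu + 0.1, mu, kT))       # ~0.02: states ~4 kT above mu are mostly empty
print(fermi_dirac(mu - 0.1, mu, kT))       # ~0.98: states ~4 kT below mu are mostly full
```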
Now if we connect a big reservoir of electrons at some voltage to a small wire, electrons will flow into or out of that wire, giving it a slight charge; the voltage of the wire changes disproportionately to the small amount of charge that flowed, until it comes to the same voltage as our voltage source. This is, in some ways, the wire coming to the same chemical potential as the source we hooked it up to. You then have a choice: you can define the new chemical potential as a new "Fermi surface," making the two terms equivalent (in which case the Fermi energy is not a material property, since I can change it by changing the voltage on the object), or you can define the Fermi surface as the chemical potential when the thing is totally charge-neutral and at zero voltage relative to infinity -- in which case it is a material property, but we usually need to think in terms of chemical potentials, which often bump the Fermi level up or down by some voltage.
I'm going to choose the second, because redundant terminology is perhaps not a great idea.
How does this answer your question?
Well, for C-60, let's just ask the definitions: what is the chemical potential of some nanosystem? It's the energy cost of adding one more electron, which goes into the LUMO, while the energy recovered by removing one comes from the HOMO; so the chemical potential is pinned somewhere between the HOMO and LUMO energies. (For people who are casually reading this and are not familiar with the jargon: LUMO and HOMO are the "lowest unoccupied" and "highest occupied" molecular orbitals.) In some sense it really is that simple.
But for semiconductors the same problem shows up more starkly. We again have a similar sort of "lots of electrons in a crystal lattice," but there is a band gap $\Delta$ where the density of states falls to 0, and so it becomes very hard to say "oh, these are occupied with probability such-and-so." The above definition of chemical potential actually says that we're at a place where the chemical potential is discontinuous: in the valence band the cost to add new electrons was $E$, and now it has jumped discontinuously to $E+\Delta.$ So that sucks; what can we do to "pick a number" in between to represent the Fermi energy when the sample is neutral?
Well, these band gaps are in practice on the scale of an electron volt or so, whereas room temperature is closer to $26\text{ meV}$, so the regime $k_\text B T\ll \Delta$ holds to something like one and a half orders of magnitude. But there will still occasionally be some thermal electrons in the conduction band and some thermal holes in the valence band. Let's use these probabilities to find a meaningful "Fermi level" in the middle, $E < \mu < E+\Delta.$
Let's start with the conduction electrons; the large band gap lets us approximate $f_{\mu,T}(E+\Delta)\approx e^{-(E+\Delta-\mu)/k_\text B T}.$ Simple, right? For the valence band we need to look instead at the number of holes to get a similarly small number, $$1 - f_{\mu, T}(E) = \frac{1}{1 + e^{-(E-\mu)/k_\text B T}} \approx e^{(E - \mu)/k_\text B T}.$$ These two probabilities, when multiplied, combine to form a constant, $e^{-\Delta/k_\text B T}$. Again, we actually have to multiply by the densities of states above and below the band gap to get actual numbers, but one still finds that $N_e~N_h = N_i^2$ for some constant $N_i$ known as the "intrinsic carrier density," no matter where the Fermi level sits within the gap. This is called the "mass action law".
So what we do in practice with semiconductors is measure the electron and hole concentrations at some temperature; together with the densities of states, these let us solve for $\mu$, which (for an undoped sample) we take to be the "intrinsic Fermi energy." Usually this happens to be close to the mid-gap energy $E+(\Delta/2),$ but doping shifts the actual Fermi level away from this value, toward one band edge or the other.
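Here is a rough numerical sketch of the last two paragraphs, assuming Boltzmann tails, a silicon-like gap, and textbook effective densities of states (all assumed inputs, not measured ones); it checks that $n\,p$ is independent of $\mu$ and then solves for the intrinsic level:

```python
import numpy as np

kT = 0.02585                        # eV at 300 K
E_v, E_c = 0.0, 1.12                # eV, band edges (silicon-like gap, assumed)
N_c, N_v = 2.8e19, 1.04e19          # cm^-3, effective densities of states (textbook Si values)

def carriers(mu):
    """Boltzmann-tail electron and hole concentrations for a given Fermi level mu (eV)."""
    n = N_c * np.exp(-(E_c - mu) / kT)
    p = N_v * np.exp(-(mu - E_v) / kT)
    return n, p

# Mass action law: n*p is the same no matter where mu sits inside the gap.
for mu in (0.3, 0.56, 0.8):
    n, p = carriers(mu)
    print(f"mu = {mu:.2f} eV  ->  n*p = {n * p:.3e} cm^-6")

# Intrinsic Fermi level: set n = p and solve for mu.
mu_i = 0.5 * (E_c + E_v) + 0.5 * kT * np.log(N_v / N_c)
n_i = np.sqrt(np.prod(carriers(mu_i)))
print(f"mu_i = {mu_i:.3f} eV (close to mid-gap {0.5 * (E_c + E_v):.3f} eV), n_i = {n_i:.2e} cm^-3")
```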
Then when we connect two of these systems together, e.g. in a PN junction, their Fermi energies must come to the same value (since the Fermi energy is just a chemical potential), and therefore their bands end up shifted relative to each other.
Best Answer
A steady state distinct from thermodynamic equilibrium can be driven by an external stimulus such as illumination, which photo-generates electron-hole pairs, or a flowing current, which injects electrons (or holes) into the system. Under these conditions the electron and hole concentrations are no longer tied together by the mass-action law and equilibrium Fermi-Dirac statistics; they are instead set by the external drive and pulled away from their mutual equilibrium. Hence the need for two distinct quasi-Fermi levels, one for electrons and one for holes, each accounting for its out-of-equilibrium concentration.