In Bohr's theory the smallest possible orbital angular momentum is $\hbar$. The measured value is $0$. On the other hand, the picture developed by solving the (time-independent) Schrödinger equation reproduces the energy levels from Bohr's model and gets the minimum angular momentum and the angular momentum step size right (it also gives you the quantization of the projections of the angular momentum). Add Pauli exclusion to the Schrödinger picture and you can get the shell-filling rules and explain why the periodic table has the structure it does, which is another thing that Bohr's atom couldn't do correctly.
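For reference, here is the counting behind that last claim, using the standard quantum numbers that come out of the Schrödinger solution for a hydrogen-like atom:
$$ \ell = 0, 1, \dots, n-1, \qquad m_\ell = -\ell, \dots, +\ell, \qquad \text{states per shell} = 2\sum_{\ell=0}^{n-1}(2\ell+1) = 2n^2 $$
The factor of $2$ is the two spin projections that Pauli exclusion lets you fill separately, giving shell capacities of $2, 8, 18, 32, \dots$ Bohr's model has no $\ell$ or $m_\ell$ structure to count, so it can't produce these numbers.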
Bohr is out because it makes incorrect predictions.
As pointed out in a comment by mmesser314, there is a video by the mathematician Grant Sanderson, "The more general uncertainty principle, beyond quantum".
Grant Sanderson puts the Heisenberg uncertainty principle in the context of properties of wave propagation.
In the case of a continuous, steady oscillation the corresponding propagating wave has a specific frequency, but that propagating wave doesn't have a particular location; it extends everywhere.
It is possible to produce a burst of oscillation in such a way that it gives rise to a propagating "blip". A Fourier analysis of the waveform of that blip describes it as a superposition of a range of frequencies. The location of the blip can be tracked through time with specificity, but the blip does not have a particular frequency; it is spread out in frequency space.
Grant Sanderson points out that wave propagation in general (not just in the context of quantum mechanics) carries an inherent trade-off. You can push for a very specific frequency, but at the cost of specificity of position-as-a-function-of-time. You can push for high specificity of position-as-a-function-of-time, but at the cost of introducing a spread of the spectrum.
In any device that produces propagating waves, the design can place the emitted wave anywhere you want along that trade-off.
Fourier analysis facilitates expressing the trade-off in mathematical form.
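As a quick numerical illustration (a minimal sketch using NumPy; the Gaussian envelope, carrier frequency, and pulse durations are arbitrary choices for the demonstration, not anything taken from the video): the shorter the burst, the wider its spectrum, and the product of the two widths stays roughly constant.

```python
import numpy as np

# Time grid (arbitrary units) and a carrier frequency chosen purely for illustration.
t = np.linspace(-100, 100, 8192)
dt = t[1] - t[0]
f0 = 1.0  # carrier frequency

def spectral_width(sigma_t):
    """RMS width of the power spectrum of a Gaussian burst of duration sigma_t."""
    pulse = np.exp(-t**2 / (2 * sigma_t**2)) * np.cos(2 * np.pi * f0 * t)
    power = np.abs(np.fft.rfft(pulse))**2
    freqs = np.fft.rfftfreq(t.size, d=dt)
    mean_f = np.sum(freqs * power) / np.sum(power)
    return np.sqrt(np.sum((freqs - mean_f)**2 * power) / np.sum(power))

for sigma_t in (5.0, 1.0, 0.3):  # long, medium, short bursts
    sigma_f = spectral_width(sigma_t)
    print(f"duration {sigma_t:4.1f} -> spectral width {sigma_f:.3f}, product {sigma_t * sigma_f:.2f}")
```

The roughly constant product is exactly the trade-off being described; the quantum-mechanical $\Delta x \, \Delta p \gtrsim \hbar$ is the same statement applied to the position and momentum representations of the wavefunction.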
Stating it explicitly:
The view in terms of wave propagation in general does not need to invoke concepts such as "observation", "measurement", "decoherence".
Best Answer
It’s a mistake to invoke relativity on the speed $v$ without also using the relativistic momentum $p$. But if this is a text for students who may be a little weak on relativity, it’s perhaps a pedagogically useful mistake. Most students come into a class knowing that the speed of light is a “speed limit,” even if they have only a vague idea of what happens as an object approaches that limit. Introducing all of the concepts you need to make this argument in a consistent way might be a lot to expect for the target audience of this text.
Physicists like to think in energy units. A correct version of this argument might be to replace
$$ \Delta x \Delta p \gtrsim \hbar $$
with
$$ \Delta x \cdot c\Delta p \gtrsim \hbar c \approx 200\rm\,MeV\,fm $$
(We won’t need to fuss about factors of two.) We know experimentally that an electron can be removed from an atom with an energy of a few eV. We suspect that the electron is held near the nucleus by electrical attraction, which is subject to the virial theorem, so its kinetic energy $T$ and its binding energy $U$ have the same magnitude, $T \approx -U$ (neglecting a factor of two). The uncertainty principle says that any particle confined to a nucleus, $\Delta x \lesssim 1\rm\,fm$, must have a momentum uncertainty $c\Delta p \gtrsim 200\rm\,MeV$. For an electron, mass $m_e c^2 \approx \frac12\rm\,MeV$, this momentum is hyperrelativistic, and the corresponding kinetic energy is $T\approx E\approx pc$. But if the electron were bound in a potential well with depth 200 MeV, you couldn’t ionize an atom with an eV-scale photon. Something’s got to give.
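If you want to see those numbers fall out, here is a minimal numeric sketch of the argument (same inputs as above: $\hbar c \approx 200\rm\,MeV\,fm$ and a confinement size of about $1\rm\,fm$):

```python
# Rough numeric check of the "electron confined to a nucleus" argument.
HBAR_C = 197.3   # MeV·fm
M_E_C2 = 0.511   # electron rest energy, MeV

dx = 1.0                  # assumed confinement size, fm
cdp = HBAR_C / dx         # momentum uncertainty in energy units, MeV
T = cdp                   # cdp >> m_e c^2, so the electron is ultrarelativistic: T ≈ pc

print(f"c*dp ≈ {cdp:.0f} MeV  (vs m_e c² ≈ {M_E_C2} MeV)")
print(f"kinetic energy ≈ {T:.0f} MeV, versus eV-scale ionization energies")
```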
Note that if we make the same argument for a nucleon trapped in a nucleus, with mass $m_N c^2 \approx 1000\rm\,MeV$, we can mostly get away with using the nonrelativistic kinetic energy:
$$ T \approx \frac{p^2}{2m_N} = \frac{(cp)^2}{2m_N c^2} \approx 20\rm\,MeV $$
Actual nucleon-separation energies tend to be around 10 MeV, so if we assume $T\approx |U|$ this isn’t a bad guess at all. It’s reasonable to say that the size of the nucleus is directly related to the energies involved in the nuclear interaction by the uncertainty principle. And for that matter, the size of the atom is directly related to the scale of electron binding energies:
\begin{align} T_e &= \frac{p^2}{2m_e} \approx 10\rm\,eV \\ (pc)^2 &\approx 2 m_e c^2 T_e \approx 10^7\rm\,eV^2 \\ \Delta x &\approx \frac{\hbar c}{pc} \approx \frac{200\rm\,eV\,nm}{3\times10^3\rm\,eV} \approx \frac23\,\text{Å} \end{align}
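The same arithmetic for both estimates, as a minimal sketch (the $1\rm\,fm$ nuclear size and $10\rm\,eV$ electron kinetic energy are the round numbers used above):

```python
import math

HBAR_C_MEV_FM = 197.3   # MeV·fm
HBAR_C_EV_NM = 197.3    # eV·nm (same constant in different units)
M_N_C2 = 939.0          # nucleon rest energy, MeV
M_E_C2 = 0.511e6        # electron rest energy, eV

# Nucleon confined to ~1 fm: nonrelativistic kinetic energy.
cdp = HBAR_C_MEV_FM / 1.0            # MeV
T_nucleon = cdp**2 / (2 * M_N_C2)    # ≈ 20 MeV
print(f"nucleon kinetic energy ≈ {T_nucleon:.0f} MeV")

# Electron with ~10 eV of kinetic energy: how big is the region it occupies?
T_e = 10.0                           # eV
pc = math.sqrt(2 * M_E_C2 * T_e)     # eV
dx_nm = HBAR_C_EV_NM / pc            # nm
print(f"atomic size ≈ {dx_nm:.2f} nm ≈ {10 * dx_nm:.1f} Å")
```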
Of course, when you actually find a wavefunction, you don’t use the uncertainty principle: you use the wave mechanics of the Schrödinger equation. But the uncertainty principle is fundamentally a statement about how waves behave mathematically, so in factor-of-two land the result is fine. And if you leave factor-of-two land and compute carefully, you find that the uncertainties in position and momentum associated with an actual wavefunction are always larger than the minimum set by the uncertainty principle.
As for your specific complaints:
This just isn’t so. Instead of a single atom, imagine an ensemble of 100 atoms, all fixed in place. If the electrons don’t leave their atoms, we know that the average vector momentum of the electrons must be zero. The uncertainty principle describes “one-sigma” uncertainties, so if you measured the electron momenta for your 100 atoms you expect about 32 of them to have momentum magnitude $|p|$ larger than $\Delta p$, and about 5 of them to have momentum magnitude $|p| > 2\Delta p$. (We talk more often about 68% and 95% “confidence intervals.”) If you picked an electron at random and guessed its momentum magnitude before measuring, $\Delta p$ is a better guess than zero.
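Those counts are easy to check with a toy simulation (a minimal sketch that treats a single momentum component as Gaussian with zero mean and standard deviation $\Delta p$; the Gaussian shape is an assumption made for the demonstration, not something the argument above depends on):

```python
import numpy as np

rng = np.random.default_rng(0)

# One momentum component for 100 "atoms": zero mean, standard deviation dp.
# The units are irrelevant for the counting.
dp = 1.0
p = rng.normal(0.0, dp, size=100)

print("beyond 1 sigma:", np.sum(np.abs(p) > dp))      # expect roughly 32
print("beyond 2 sigma:", np.sum(np.abs(p) > 2 * dp))  # expect roughly 5
```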
This is actually a reasonable conclusion. The next step isn’t the one you take (“Hence, the electron doesn't exist”), but instead a counterintuitive statement about size: cold electrons are big, and can’t be confined. If you want an electron to be confined to a small volume, it has to be participating in some high-momentum interaction.
We often talk about an electron as a “point particle” with “zero size.” We say this because there doesn’t appear to be any other interaction that turns on when you probe an electron at short length scales. Such a short-range interaction would be a sign that the electron has some substructure. (For example, the nuclear interaction changes character at distances closer than about a femtometer, which is related indirectly to the composition of nucleons from quarks.) In the Copenhagen interpretation, and its bastard child the pilot-wave theory, we imagine there is a pointlike “real electron” that we can locate someplace. But as you have discovered, that’s inconsistent with the uncertainty principle. There are lots of situations where the mental model that “cold electrons are big” is helpful.
One physical consequence of “cold electrons are big” happens in white dwarf stars, which are held up by electron degeneracy. As you make the star hotter, the uncertainty on the momenta of the electrons $\Delta p$ gets bigger (because each electron is storing more energy on average). The extra uncertainty in $\Delta p$ allows the volume $(\Delta x)^3$ associated with each electron to shrink. White dwarfs get smaller as you heat them up, because cold electrons are big.
When I say that the mistake in your quoted textbook might be pedagogically useful, what I mean is that the textbook’s argument fits into five sentences and four lines of mathematics. My more-correct answer is substantially longer, and even then it leans on plenty of mathematical “pay no attention to the man behind the curtain” hand-waving, which is easily justifiable but would make an intro student uncomfortable.