Which units are fundamental and which are derived is pretty much a matter of arbitrary convention, not an objective fact about the world.
You might think that the number of fundamental units would be well-defined, but even that's not true.
Take electric charge for example. In the SI system of units (i.e., the "standard" metric system), charge cannot be expressed in terms of mass, length, and time: you need another independent unit. (In the SI, that unit happens to be the Ampere; the unit of charge is defined to be an Ampere-second.) But sometimes people use different systems of units in which charge can be expressed in terms of mass, length, and time. By decreeing that the proportionality constant in Coulomb's Law be equal to 1,
$$
F={q_1q_2\over r^2},
$$
you can define a unit of charge to be (if I've done the algebra right) $(ML^3/T^2)^{1/2}$,
where $M,L,T$ are your units of mass, length, time.
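Spelled out, the bookkeeping behind that claim is just dimensional analysis of Coulomb's law with the constant set to 1:
$$
[q]^2 = [F]\,[r]^2 = \frac{ML}{T^2}\cdot L^2 = \frac{ML^3}{T^2},
\qquad\text{so}\qquad
[q] = \left(\frac{ML^3}{T^2}\right)^{1/2} = M^{1/2}L^{3/2}T^{-1}.
$$
This is exactly how the Gaussian (cgs) system works: its unit of charge, the statcoulomb, is $\mathrm{g^{1/2}\,cm^{3/2}\,s^{-1}}$.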
Whether charge is defined in terms of mass, length, and time, or treated as an independent unit, is a matter of convenience, not a fact about the world. People can and do make different choices about it.
Similarly, some people choose to get by with fewer independent units than the three you mention. The most common choice is to decree that length and time have the same units, using the speed of light as a conversion factor. You can even go all the way down to zero independent units, by working in what are often called Planck units.
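As a quick sanity check of the zero-independent-units idea, here is a sketch (mine, not from the answer) that builds the Planck length, time, and mass out of $G$, $\hbar$, and $c$ alone, using approximate SI values:

```python
# Construct Planck units from G, hbar, c (approximate SI values).
G    = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34  # reduced Planck constant, J s
c    = 2.99792458e8     # speed of light, m/s

l_P = (hbar * G / c**3) ** 0.5   # Planck length, ~1.6e-35 m
t_P = l_P / c                    # Planck time,   ~5.4e-44 s
m_P = (hbar * c / G) ** 0.5      # Planck mass,   ~2.2e-8 kg

print(l_P, t_P, m_P)
```

Once every quantity is expressed as a pure number times the appropriate combination of these, no independent units remain.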
In summary, you can dial up or down the number of "independent" units in your system at will.
One more example, which seems silly at first but is actually of some historical interest. You can imagine using different, independent units of measure for horizontal and vertical distances. That'd be terribly inconvenient for doing physics, but for many applications it's actually quite convenient. (In aviation, altitudes are often measured in feet, while horizontal distances are measured in miles. In seafaring, leagues are horizontal and fathoms are vertical. Yards are pretty much always used for horizontal distance.)
It sounds absurd to think of using different units for different directions, but in the context of special relativity, using different units for space and time (different directions in spacetime) is sort of similar. If we had evolved in a world in which we were constantly zipping around near light speed, so that special relativity was intuitive to us, we'd probably think that it was obvious that distance and time "really" came in the same units.
Yes, dimensional analysis is enough to reconstruct all the powers of $\hbar$ (and/or $c$ and other constants you may want to set equal to one) in all such formulae. For example, the first form of the Heisenberg equation is schematically $A/t = HA$. It's enough to track the powers of the kilogram in the units: $1/t$ carries none, while $H$ has units of joules and so contains the first power of a kilogram. Obviously, this has to be cancelled on the right-hand side, and because $\hbar$ has units of joule-seconds, which also contain the first power of a kilogram, you must add $1/\hbar$. If you set $\hbar=c=1$, or many more constants equal to one, you would have to add general powers of all these constants. Dimensional analysis would lead to a set of linear equations for the exponents that you could solve.
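That linear system is small enough to solve mechanically. Here is a sketch (my own setup, not from the answer) encoding each quantity as a vector of exponents of (kg, m, s) and solving for the powers $a, b$ in $\hbar^a c^b$ needed to make $HA$ carry the dimensions of $A/t$:

```python
import numpy as np

# Dimensions as exponent vectors over (kg, m, s).
hbar = np.array([1, 2, -1])   # J*s = kg m^2 s^-1
c    = np.array([0, 1, -1])   # m s^-1
E    = np.array([1, 2, -2])   # J   = kg m^2 s^-2  (dimensions of H)

# Heisenberg equation: dA/dt = const * H A, so hbar^a * c^b * [E]
# must have the dimensions of 1/time, i.e. (0, 0, -1).
target = np.array([0, 0, -1])

# Solve a*[hbar] + b*[c] = target - [E]: 3 equations, 2 unknowns.
M = np.column_stack([hbar, c])
(a, b), *_ = np.linalg.lstsq(M, target - E, rcond=None)
print(a, b)   # a = -1, b = 0: the missing constant is 1/hbar
```

The solution $a=-1$, $b=0$ reproduces the $1/\hbar$ found by inspection above.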
You are comparing \begin{align} E_h &= 4.359\ 744\ 722\ 2071(85)\times10^{−18}\ \mathrm J \\ \\ \frac{\hbar^2}{m_e a_0^2} &= 4.359\ 744\ 722\ 2232\ 755 \times 10^{-18}\ \mathrm J \end{align}
Your final digits $\cdots 755$ are superfluous, smaller than the uncertainty. Your disagreement is
$$ \frac{(\cdots 2232) - (\cdots 2071)}{85} =1.9\sigma $$
which is a little high. However, you’re using a different expression for $E_h$ than the NIST website. If I use their expression, I get
$$ \alpha^2 m_e c^2 = 4.359\ 744\ 722\ 1987 \times10^{-18}\ \mathrm J $$
which is off by $-1.0\sigma$.
The NIST expression is probably preferable to yours, because $\alpha$ and $m_e$ are independent of each other, while the definition of $a_0$ includes the electron mass. So to the extent that the electron mass is uncertain, that uncertainty enters your expression three times (once from $m_e$ and twice from $a_0^2$), while it enters the NIST expression only once.
If you really want to understand what’s happening here, you probably want to read the ugly details of CODATA’s enormous least-squares fit of all the data on physical constants; see the links in this other constants-related answer.
Note that you can get these results using regular double-precision arithmetic, rather than using a multiple-precision library. (Multiple-precision libraries have their uses; they also have their quirks.) The problem from an arithmetic standpoint is that, in a world where the Hartree energy is a constant of nature, the measured values of $\alpha$ and $m_e$ are inconsistent with each other at the one-ish-sigma level. Dealing with those kinds of inconsistencies is the purpose of CODATA's least-squares fit to the entirety of the constants literature.
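To illustrate that point, here is the whole comparison done in ordinary doubles, using the numbers quoted above; the cancellation in the subtraction still leaves plenty of precision for a sigma count:

```python
# Reproduce the sigma comparisons in plain double precision,
# using the values quoted in the answer above.
E_h   = 4.3597447222071e-18     # Hartree energy, J
sigma = 0.0000000000085e-18     # its quoted uncertainty, J

mine  = 4.3597447222232755e-18  # hbar^2 / (m_e * a_0^2), J
nist  = 4.3597447221987e-18     # alpha^2 * m_e * c^2, J

print((mine - E_h) / sigma)   # ~ +1.9
print((nist - E_h) / sigma)   # ~ -1.0
```

A double carries roughly sixteen significant digits, so even after losing the eleven leading digits common to both values, two or three significant digits survive, which is all the sigma-level comparison needs.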