This is an extended comment rather than a complete answer.
I understand that you want to find the limiting distribution. Below are the results for a maximum $n$ of 10,000 (along with the associated Mathematica code):
(* Generate data and moments *)
nMax = 10000;
\[Mu] = Table[MoebiusMu[i]^2, {i, nMax}]; (* squarefree indicator *)
s[n_] := Zeta[2] Sum[\[Mu][[i]] Binomial[n, i]/2^n, {i, n}] (* terms with i > n vanish *)
data = Table[{n, s[n]}, {n, 1, nMax}];
moments = Table[{n, Mean[data[[Range[n], 2]] // N],
    StandardDeviation[data[[Range[n], 2]] // N],
    Skewness[data[[Range[n], 2]] // N],
    Kurtosis[data[[Range[n], 2]] // N]}, {n, 2, nMax}];
I've generated the mean, standard deviation, skewness, and kurtosis for $n=2$ through $n=10000$. If the limiting (or approximating) distribution is normal, then the skewness should settle toward zero and the kurtosis toward 3. Here are the resulting figures:
ListPlot[{data, {{1, 1}, {nMax, 1}}}, Joined -> True,
AspectRatio -> 1/4,
ImageSize -> 1000, Frame -> True,
FrameLabel -> (Style[#, Bold, 18] &) /@ {"n",
"\[Zeta](2)s(n)/\!\(\*SuperscriptBox[\(2\), \(n\)]\)"},
PlotStyle -> Thickness[0.005], ImagePadding -> 50, PlotRange -> All]
plotIt[m_, label_, level_] :=
 ListPlot[{moments[[All, {1, m}]], {{2, level}, {nMax, level}}},
  Joined -> True, PlotRange -> All, Frame -> True,
  FrameLabel -> (Style[#, Bold, 18] &) /@ {"n", label},
  AspectRatio -> 1/4, PlotStyle -> Thickness[0.005],
  ImagePadding -> 50, ImageSize -> 1000]
plotIt[2, "Mean", 1]
plotIt[3, "Standard deviation", 0.01078]
plotIt[4, "Skewness", 0]
plotIt[5, "Kurtosis", 3]
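For readers without Mathematica, the quantity $s(n)=\zeta(2)\sum_{i\le n}\mu(i)^2\binom{n}{i}/2^n$ can be cross-checked with a minimal Python sketch (trial division stands in for `MoebiusMu`; only modest $n$ are practical this way):

```python
from math import comb, pi

def squarefree(i):
    """mu(i)^2: 1 if no square > 1 divides i, else 0."""
    d = 2
    while d * d <= i:
        if i % (d * d) == 0:
            return 0
        d += 1
    return 1

def s(n):
    # zeta(2) * sum_{i=1}^{n} mu(i)^2 * C(n, i) / 2^n  (terms with i > n vanish)
    return (pi ** 2 / 6) * sum(squarefree(i) * comb(n, i) for i in range(1, n + 1)) / 2 ** n

print(s(2), s(100))
```

For $n=2$ this gives exactly $\zeta(2)\cdot\frac{3}{4}=\pi^2/8$, and for larger $n$ the values fluctuate around $1$, matching the plotted data.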
While the above figures don't rule out a normal distribution (or the possibility that a normal distribution provides a reasonable approximation for the proportion of numbers between any two specified values), the skewness does not appear to be approaching zero and the kurtosis drifts farther from 3, which argues against a normal limiting distribution. A slightly skewed, heavier-tailed distribution might be a better candidate.
From other posts I get the impression that you have values up to $n=44,000$. Similar figures for that larger data set might also be suggestive.
I will prove that the limit exists and is precisely the Golomb-Dickman constant $\lambda$. First, let me sketch how one can show that the expected value of $\frac{\log(P_1(n))}{\log(n)}$, where $P_1(n)$ denotes the largest prime factor of $n$, is precisely $\lambda\approx0.62433$ via probability theory.
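Before the proof, a quick numerical sanity check. The following Python sketch sieves the largest prime factor $P_1(n)$ for all $n$ up to a modest bound and averages $\log(P_1(n))/\log(n)$; convergence to $\lambda$ is slow, but the running average already hovers near $0.62$:

```python
from math import log

# Sieve of largest prime factors: each prime p stamps all of its multiples,
# and the last (largest) prime to stamp n wins, so gpf[n] = P1(n).
N = 200000
gpf = [1] * (N + 1)
for p in range(2, N + 1):
    if gpf[p] == 1:                # never stamped, so p is prime
        for m in range(p, N + 1, p):
            gpf[m] = p

# Average of log(P1(n))/log(n); the limit is the Golomb-Dickman constant 0.62433...
avg = sum(log(gpf[n]) / log(n) for n in range(2, N + 1)) / (N - 1)
print(avg)
```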
We say that an integer $n\geq1$ is $y$-smooth if every prime divisor of $n$ is $\leq y$. We also say that $n\geq1$ is $y$-powersmooth if every prime power divisor of $n$ is $\leq y$. Let's denote by $S(x,y)$ and $PS(x,y)$ the set of $y$-smooth and $y$-powersmooth numbers $\leq x$ respectively. It is a theorem of Dickman that
$$F_S(\nu):=\lim_{x\to+\infty}\frac{1}{x}|S(x,x^{\nu})|=\rho(1/\nu)$$
where $\rho(t)$ is Dickman's function satisfying the differential equation $t\rho'(t)+\rho(t-1)=0$.
For large $x\gg0$, choose an integer $tx\in[0,x]$ uniformly at random, where $t\in[0,1]$ (that is, consider the random variable $xT\sim U(0,x)$, the discrete uniform distribution on the integers of $[0,x]$, which "looks" continuous for large $x$). What is the probability that $\frac{\log(P_1(tx))}{\log(tx)}\leq\nu$? Well
$$tx\in S(x,x^\nu)\iff P_1(tx)\leq x^\nu\iff \frac{\log(P_1(tx))}{\log(tx)}\leq\frac{\nu\log(x)}{\log(tx)}=\frac{\nu}{1+\frac{\log(t)}{\log(x)}}\sim\nu$$
so, for large $x\gg0$, we have $P\left(\frac{\log(P_1(xT))}{\log(xT)}\leq\nu\right)\sim P(xT\in S(x,x^\nu))=\frac{1}{x}|S(x,x^\nu)|\sim F_S(\nu)$. This means that $F_S(\nu)$ is the cumulative distribution function of $\frac{\log(P_1(n))}{\log(n)}$ so we can compute the expected value as
$$\lim_{x\to+\infty}\frac{1}{x}\sum_{n\leq x}\frac{\log(P_1(n))}{\log(n)}=\int_0^1\nu F_S'(\nu)d\nu=\int_0^1\frac{-\rho'(1/\nu)}{\nu}d\nu$$
$$=\int_0^1\rho\left(\frac{1}{\nu}-1\right)d\nu=\int_0^\infty\frac{\rho(t)}{(1+t)^2}dt=\lambda$$
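The last integral is easy to evaluate numerically. Here is a Python sketch: forward Euler for the delay ODE $t\rho'(t)=-\rho(t-1)$ with $\rho\equiv1$ on $[0,1]$, followed by a trapezoid rule for $\int_0^\infty\rho(t)/(1+t)^2\,dt$ (the tail beyond $t=6$ is negligible since $\rho$ decays superexponentially):

```python
import numpy as np

h = 1e-3                              # step size; h divides 1, so the delay hits grid points
tmax = 6.0
steps = round(1 / h)
ts = np.arange(0.0, tmax + h / 2, h)
rho = np.ones_like(ts)                # rho(t) = 1 on [0, 1]
for k in range(steps, len(ts) - 1):
    # t * rho'(t) = -rho(t - 1)
    rho[k + 1] = rho[k] - h * rho[k - steps] / ts[k]

rho2 = rho[2 * steps]                 # exact value is rho(2) = 1 - ln 2 = 0.30685...
w = rho / (1 + ts) ** 2
lam = float(h * (w.sum() - 0.5 * (w[0] + w[-1])))   # trapezoid estimate of lambda
print(rho2, lam)
```

The estimate agrees with $\lambda\approx0.62433$ to about three decimal places at this step size.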
Now we denote the greatest prime power divisor of $n\geq1$ as $Q_1(n)$. For example,
$$Q_1(24)=Q_1(2^3\times3)=2^3=8\text{ and }Q_1(72)=Q_1(2^3\times3^2)=3^2=9$$
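A small Python helper (plain trial division; a sketch that is fine for modest $n$) makes the definition concrete:

```python
def Q1(n):
    """Largest prime-power divisor of n >= 2: the maximum of p^e over p^e || n."""
    best, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            pe = 1
            while m % p == 0:   # divide out the full power of p
                m //= p
                pe *= p
            best = max(best, pe)
        p += 1
    if m > 1:                   # remaining cofactor is a prime
        best = max(best, m)
    return best

print(Q1(24), Q1(72))
```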
We want to compute the expected value of $\frac{\log(Q_1(n))}{\log(n)}$ in a similar manner as before. The key idea is to note that, for large $x\gg0$ and $tx\in[0,x]$ chosen uniformly, we have
$$tx\in PS(x,x^\nu)\iff Q_1(tx)\leq x^\nu\iff\frac{\log(Q_1(tx))}{\log(tx)}\leq\frac{\nu\log(x)}{\log(tx)}\sim\nu$$
so, again, $P\left(\frac{\log(Q_1(xT))}{\log(xT)}\leq\nu\right)\sim\frac{1}{x}|PS(x,x^\nu)|$. Thus, if we had $\lim\limits_{x\to+\infty}\frac{|S(x,x^\nu)|-|PS(x,x^\nu)|}{x}=0$ (which I'll prove in just a moment), it would immediately follow that
$$\lim_{x\to+\infty}\frac{1}{x}|PS(x,x^\nu)|=\lim_{x\to+\infty}\frac{1}{x}|S(x,x^\nu)|=F_S(\nu)=\rho(1/\nu)$$
$$\Rightarrow\lim_{x\to+\infty}\frac{1}{x}\sum_{n\leq x}\frac{\log(Q_1(n))}{\log(n)}=\int_0^1\nu F_S'(\nu)d\nu=\int_0^\infty\frac{\rho(t)}{(1+t)^2}dt=\lambda$$
as desired. So it suffices to show that $\lim\limits_{x\to+\infty}\frac{|S(x,x^\nu)|-|PS(x,x^\nu)|}{x}=0$.
First observe that every $y$-smooth number is $y$-powersmooth. Thus we have that
$$0\leq|S(x,y)\setminus PS(x,y)|=|S(x,y)|-|PS(x,y)|$$
For every prime $p\leq y$, denote by $M_p(x,y)=\{p^{1+\lfloor\log_p y\rfloor}k : k\geq1,\ p^{1+\lfloor\log_p y\rfloor}k\leq x\}$ the set of multiples of $p^{1+\lfloor\log_p y\rfloor}$ which are $\leq x$, where $p^{1+\lfloor\log_p y\rfloor}$ is the least power of $p$ that exceeds $y$.
Now observe that if $n\in S(x,y)\setminus PS(x,y)$, then $n$ is a product of powers of primes $\leq y$, say $n=\prod_{p_i\leq y}p_i^{\epsilon_i}$ where the $\epsilon_i$ may be zero, and at the same time there exists $p_i\leq y$ such that $p_i^{\epsilon_i}>y$, i.e. $\epsilon_i\geq1+\lfloor\log_{p_i}y\rfloor$. But this means that every $n\in S(x,y)\setminus PS(x,y)$ lies in $M_p(x,y)$ for some prime $p\leq y$, so $S(x,y)\setminus PS(x,y)\subseteq\bigcup_{p\leq y}M_p(x,y)$. Thus, we can bound $|S(x,y)|-|PS(x,y)|$ by
$$|S(x,y)|-|PS(x,y)|=|S(x,y)\setminus PS(x,y)|\leq\left|\bigcup_{p\leq y}M_p(x,y)\right|\leq\sum_{p\leq y}|M_p(x,y)|$$
$$=\sum_{p\leq y}\left\lfloor\frac{x}{p^{1+\lfloor\log_p y\rfloor}}\right\rfloor\leq\sum_{p\leq y}\frac{x}{p^{1+\lfloor\log_p y\rfloor}}\leq\sum_{p\leq y}\frac{x}{p^{\log_p(y)}}=\frac{x}{y}\sum_{p\leq y}1=\frac{\pi(y)}{y}x$$
$$\boxed{0\leq\frac{|S(x,y)|-|PS(x,y)|}{x}\leq\frac{\pi(y)}{y}}$$
Finally, note that $\frac{\pi(y)}{y}\sim\frac{1}{\log(y)}\to0$ by the prime number theorem, which in particular implies $\lim\limits_{x\to+\infty}\frac{|S(x,x^\nu)|-|PS(x,x^\nu)|}{x}=0$. In fact, we only need $\frac{\pi(y)}{y}\to 0$ (i.e., that the primes have natural density zero), which is not hard to prove.$\ \square$
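The boxed bound is easy to test numerically. The following Python sketch counts $y$-smooth and $y$-powersmooth integers by trial division (small $x$ and $y$ only; function and variable names are mine):

```python
def smooth_counts(x, y, primes):
    """Count y-smooth and y-powersmooth integers in [1, x] by trial division.
    `primes` must list all primes <= y."""
    ns = nps = 0
    for n in range(1, x + 1):
        m, powersmooth = n, True
        for p in primes:
            pe = 1
            while m % p == 0:       # divide out the full power of p
                m //= p
                pe *= p
            if pe > y:              # p^e || n with p^e > y
                powersmooth = False
        if m == 1:                  # every prime factor of n is <= y
            ns += 1
            if powersmooth:
                nps += 1
    return ns, nps

primes_50 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]  # pi(50) = 15
ns, nps = smooth_counts(20000, 50, primes_50)
print(ns, nps, (ns - nps) / 20000)
```

The observed difference $(|S|-|PS|)/x$ indeed stays below $\pi(50)/50 = 0.3$.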
Best Answer
First let me prove that $\dfrac{n}{\sum_{k \le n}\frac{1}{b_k}}$ diverges. It suffices to show the reciprocal $\dfrac{\sum_{k \le n}\frac{1}{b_k}}{n}$ converges to $0$. Fix $N\in\mathbb{N}$ and let $p_1,\dots,p_m$ be all the primes less than $N$. Note that there is then a constant $C$ such that for all $n>1$, at most $C\log(n)^m$ integers between $1$ and $n$ have only the primes $p_1,\dots,p_m$ in their prime factorization (for each $p_i$, there are at most $1+\log_{p_i} n$ choices for the power of $p_i$). Since $\frac{\log(n)^m}{n}\to 0$ as $n\to\infty$, the fraction of $k\leq n$ such that $b_k<N$ goes to $0$ as $n\to\infty$. So the contribution of such $k$ to the average $\dfrac{\sum_{k \le n}\frac{1}{b_k}}{n}$ goes to $0$, while each remaining $k$ (those with $b_k\geq N$) contributes at most $\frac{1}{N}\cdot\frac{1}{n}$; thus $\limsup \dfrac{\sum_{k \le n}\frac{1}{b_k}}{n}\leq 1/N$. Since $N$ is arbitrary, this implies $\dfrac{\sum_{k \le n}\frac{1}{b_k}}{n}$ converges to $0$.
To prove that $\dfrac{n}{\sum_{k \le n}\frac{1}{a_k}}$ converges, we similarly analyze the reciprocal $\dfrac{\sum_{k \le n}\frac{1}{a_k}}{n}$. Let $p_i$ be the $i$th prime in increasing order ($p_1=2,p_2=3,\dots$). For fixed $i$, note that as $n\to\infty$, the proportion of $k$ from $1$ to $n$ such that $a_k=p_i$ approaches the number $$r_i=\frac{1}{p_i}\prod_{j<i}(1-1/p_j)$$ (by the Chinese remainder theorem, this is the proportion of residues mod $\prod_{j\leq i} p_j$ that are divisible by $p_i$ but not by $p_j$ for any $j<i$). So for fixed $m$, when $n$ is sufficiently large, the proportion of $k$ such that $a_k=p_i$ will be arbitrarily close to $r_i$ for all $i\leq m$. This means the contribution of these terms to the average $\dfrac{\sum_{k \le n}\frac{1}{a_k}}{n}$ is arbitrarily close to $\sum_{i=1}^m\frac{r_i}{p_i}$. Note moreover that $1-\sum_{i=1}^mr_i=\prod_{i\leq m}(1-1/p_i)$ goes to $0$ as $m\to\infty$ (since $\sum_{i=1}^\infty 1/p_i$ diverges), so the contribution of the remaining terms (those with $a_k=p_i$ for $i>m$) becomes arbitrarily small as $m$ gets large. So, fixing a sufficiently large $m$, for all sufficiently large $n$, $\dfrac{\sum_{k \le n}\frac{1}{a_k}}{n}$ will be arbitrarily close to $\sum_{i=1}^m\frac{r_i}{p_i}$. It follows that $\dfrac{\sum_{k \le n}\frac{1}{a_k}}{n}$ converges to the value $$\sum_{i=1}^\infty\frac{r_i}{p_i}$$ and so $\dfrac{n}{\sum_{k \le n}\frac{1}{a_k}}$ converges to its reciprocal.
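Assuming, as the argument suggests, that $a_k$ denotes the smallest prime factor of $k$, the limit can be checked numerically. This Python sketch sieves smallest prime factors and compares the empirical average of $1/a_k$ with the partial sum $\sum_i r_i/p_i$:

```python
# Sieve of smallest prime factors: spf[k] = smallest prime factor of k.
N = 200000
spf = list(range(N + 1))
for p in range(2, int(N ** 0.5) + 1):
    if spf[p] == p:                      # p is prime
        for m in range(p * p, N + 1, p):
            if spf[m] == m:
                spf[m] = p

# Empirical average of 1/a_k over 2 <= k <= N.
emp = sum(1 / spf[k] for k in range(2, N + 1)) / (N - 1)

# Partial sum of r_i / p_i with r_i = (1/p_i) * prod_{j<i} (1 - 1/p_j);
# note r_i / p_i = prod / p_i^2.  The tail beyond 1000 is tiny.
total, prod = 0.0, 1.0
for p in range(2, 1000):
    if spf[p] == p:
        total += prod / p ** 2
        prod *= 1 - 1 / p
print(emp, total)
```

The two values agree closely, consistent with convergence of the average to $\sum_{i=1}^\infty r_i/p_i$.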