I will prove that the limit exists and is precisely the Golomb-Dickman constant $\lambda$. First, let me sketch how one can show that the expected value of $\frac{\log(P_1(n))}{\log(n)}$, where $P_1(n)$ denotes the largest prime factor of $n$, is precisely $\lambda\approx0.62433$ via probability theory.
We say that an integer $n\geq1$ is $y$-smooth if every prime divisor of $n$ is $\leq y$. We also say that $n\geq1$ is $y$-powersmooth if every prime power divisor of $n$ is $\leq y$. Let's denote by $S(x,y)$ and $PS(x,y)$ the set of $y$-smooth and $y$-powersmooth numbers $\leq x$ respectively. It is a theorem of Dickman that
$$F_S(\nu):=\lim_{x\to+\infty}\frac{1}{x}|S(x,x^{\nu})|=\rho(1/\nu)$$
where $\rho(t)$ is Dickman's function satisfying the differential equation $t\rho'(t)+\rho(t-1)=0$.
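As a quick sanity check, $\rho$ can be computed numerically from this delay differential equation; on $[1,2]$ one has $\rho(t)=1-\log t$ exactly, which the computation should reproduce. A minimal sketch, assuming a fixed-step Euler method is accurate enough (the step size is an arbitrary choice):

```python
import math

# Integrate t*rho'(t) = -rho(t-1) on a uniform grid,
# using the initial condition rho(t) = 1 for 0 <= t <= 1.
h = 1e-4                # illustrative step size, not tuned
one = int(1 / h)        # grid points per unit interval
N = 3 * one + 1         # grid covering [0, 3]
rho = [1.0] * N         # rho = 1 on [0, 1]
for i in range(one, N - 1):
    rho[i + 1] = rho[i] - h * rho[i - one] / (i * h)
print(rho[2 * one])     # rho(2) = 1 - log 2 ≈ 0.3069
```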
For large $x\gg0$, choose a random $tx\in[0,x]$ with $t\in[0,1]$ (that is, consider the random variable $xT\sim U(0,x)$, where $U(0,x)$ is the discrete uniform distribution on the integers up to $x$, which "looks" continuous for large $x$). What is the probability that $\frac{\log(P_1(tx))}{\log(tx)}\leq\nu$? Well,
$$tx\in S(x,x^\nu)\iff P_1(tx)\leq x^\nu\iff \frac{\log(P_1(tx))}{\log(tx)}\leq\frac{\nu\log(x)}{\log(tx)}=\frac{\nu}{1+\frac{\log(t)}{\log(x)}}\sim\nu$$
so, for large $x\gg0$, we have $P\left(\frac{\log(P_1(xT))}{\log(xT)}\leq\nu\right)\sim P(xT\in S(x,x^\nu))=\frac{1}{x}|S(x,x^\nu)|\sim F_S(\nu)$. This means that $F_S(\nu)$ is the cumulative distribution function of $\frac{\log(P_1(n))}{\log(n)}$ so we can compute the expected value as
$$\lim_{x\to+\infty}\frac{1}{x}\sum_{n\leq x}\frac{\log(P_1(n))}{\log(n)}=\int_0^1\nu F_S'(\nu)d\nu=\int_0^1\frac{-\rho'(1/\nu)}{\nu}d\nu$$
$$=\int_0^1\rho\left(\frac{1}{\nu}-1\right)d\nu=\int_0^\infty\frac{\rho(t)}{(1+t)^2}dt=\lambda$$
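The value of $\lambda$ can be checked numerically by combining an Euler solve of the delay differential equation with a trapezoidal quadrature. A sketch, where the step size and the cutoff $T=10$ are illustrative choices ($\rho$ decays super-polynomially, so the tail past $T$ is negligible):

```python
# Solve t*rho'(t) = -rho(t-1) by fixed-step Euler (rho = 1 on [0,1]),
# then integrate rho(t)/(1+t)^2 over [0, T] with the trapezoid rule.
h = 1e-4
one = int(1 / h)
T = 10
N = T * one + 1
rho = [1.0] * N
for i in range(one, N - 1):
    rho[i + 1] = rho[i] - h * rho[i - one] / (i * h)
f = [rho[i] / (1 + i * h) ** 2 for i in range(N)]
lam = h * (sum(f) - 0.5 * (f[0] + f[-1]))
print(lam)  # ≈ 0.6243
```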
Now we denote the greatest prime power divisor of $n\geq1$ as $Q_1(n)$. For example,
$$Q_1(24)=Q_1(2^3\times3)=2^3=8\text{ and }Q_1(72)=Q_1(2^3\times3^2)=3^2=9$$
We want to compute the expected value of $\frac{\log(Q_1(n))}{\log(n)}$ in a similar manner as before. The key idea is to note that, for large $x\gg0$ and $tx\in[0,x]$ chosen uniformly, we have
$$tx\in PS(x,x^\nu)\iff Q_1(tx)\leq x^\nu\iff\frac{\log(Q_1(tx))}{\log(tx)}\leq\frac{\nu\log(x)}{\log(tx)}\sim\nu$$
so, again, $P\left(\frac{\log(Q_1(xT))}{\log(xT)}\leq\nu\right)\sim\frac{1}{x}|PS(x,x^\nu)|$. Thus, if we had $\lim\limits_{x\to+\infty}\frac{|S(x,x^\nu)|-|PS(x,x^\nu)|}{x}=0$ (which I'll prove in just a moment), it would immediately follow that
$$\lim_{x\to+\infty}\frac{1}{x}|PS(x,x^\nu)|=\lim_{x\to+\infty}\frac{1}{x}|S(x,x^\nu)|=F_S(\nu)=\rho(1/\nu)$$
$$\Rightarrow\lim_{x\to+\infty}\frac{1}{x}\sum_{n\leq x}\frac{\log(Q_1(n))}{\log(n)}=\int_0^1\nu F_S'(\nu)d\nu=\int_0^\infty\frac{\rho(t)}{(1+t)^2}dt=\lambda$$
as desired. So it suffices to show that $\lim\limits_{x\to+\infty}\frac{|S(x,x^\nu)|-|PS(x,x^\nu)|}{x}=0$.
First observe that every $y$-powersmooth number is $y$-smooth (each prime divisor is itself a prime power divisor), so $PS(x,y)\subseteq S(x,y)$. Thus we have that
$$0\leq|S(x,y)\setminus PS(x,y)|=|S(x,y)|-|PS(x,y)|$$
For every prime $p\leq y$, denote by $M_p(x,y)=\{p^{1+\lfloor\log_p y\rfloor}k:k\geq1,\ p^{1+\lfloor\log_p y\rfloor}k\leq x\}$ the set of multiples of $p^{1+\lfloor\log_p y\rfloor}$ which are $\leq x$, where $p^{1+\lfloor\log_p y\rfloor}$ is the least power of $p$ that exceeds $y$.
Now observe that, if $n\in S(x,y)\setminus PS(x,y)$, then $n$ is a product of powers of primes $\leq y$, say $n=\prod_{p_i\leq y}p_i^{\epsilon_i}$ where the $\epsilon_i$ can be zero, and, at the same time, there exists $p_i\leq y$ such that $p_i^{\epsilon_i}>y$, i.e. $\epsilon_i\geq1+\lfloor\log_{p_i}y\rfloor$. But this means that $n\in S(x,y)\setminus PS(x,y)$ implies $n\in M_p(x,y)$ for some prime $p\leq y$, so $S(x,y)\setminus PS(x,y)\subseteq\bigcup_{p\leq y}M_p(x,y)$. Thus, we can bound $|S(x,y)|-|PS(x,y)|$ by
$$|S(x,y)|-|PS(x,y)|=|S(x,y)\setminus PS(x,y)|\leq\left|\bigcup_{p\leq y}M_p(x,y)\right|\leq\sum_{p\leq y}|M_p(x,y)|$$
$$=\sum_{p\leq y}\left\lfloor\frac{x}{p^{1+\lfloor\log_p y\rfloor}}\right\rfloor\leq\sum_{p\leq y}\frac{x}{p^{1+\lfloor\log_p y\rfloor}}\leq\sum_{p\leq y}\frac{x}{p^{\log_p(y)}}=\frac{x}{y}\sum_{p\leq y}1=\frac{\pi(y)}{y}x$$
$$\Rightarrow\boxed{\therefore 0\leq\frac{|S(x,y)|-|PS(x,y)|}{x}\leq\frac{\pi(y)}{y}}$$
Finally, note that $\frac{\pi(y)}{y}\sim\frac{1}{\log(y)}\to0$ by the prime number theorem which, in particular, implies $\lim\limits_{x\to+\infty}\frac{|S(x,x^\nu)|-|PS(x,x^\nu)|}{x}=0$. In fact, we only need $\frac{\pi(y)}{y}\to 0$ (i.e. the zero natural density of prime numbers) which is not that hard to prove.$\ \square$
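For small parameters the boxed inequality can be verified by brute force; a sketch with the arbitrary choices $x=10^4$, $y=20$:

```python
# Brute-force check of 0 <= (|S(x,y)| - |PS(x,y)|)/x <= pi(y)/y.
def prime_power_divisors(n):
    """Return the maximal prime-power divisors p^k || n (n >= 2)."""
    out, d = [], 2
    while d * d <= n:
        if n % d == 0:
            q = 1
            while n % d == 0:
                n //= d
                q *= d
            out.append((d, q))
        d += 1
    if n > 1:
        out.append((n, n))
    return out

x, y = 10**4, 20
S = PS = 0
for n in range(1, x + 1):
    f = prime_power_divisors(n)
    if all(p <= y for p, _ in f):      # n is y-smooth
        S += 1
        if all(q <= y for _, q in f):  # n is y-powersmooth
            PS += 1
pi_y = sum(1 for p in range(2, y + 1) if all(p % d for d in range(2, p)))
print(S, PS, (S - PS) / x, pi_y / y)
```

For example $32=2^5$ is $20$-smooth but not $20$-powersmooth, so the difference is strictly positive here, while the bound $\pi(20)/20=0.4$ is far from tight.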
We can write any integer in the form $m^h r$, where $m$ is square-free and $r$ is $h$-free and coprime to $m$: take $h$ to be the largest exponent in the prime factorisation and $m$ the product of the primes attaining it.
For example, $ \ 52920 = 2^3\cdot 3^3 \cdot 5\cdot 7^2 = 6^3 \cdot 245$
This way, we can generate the values of $k$, already knowing the values of $m$ and $h$. Also, note that $H_k$ is equal to $h$, and $b_k$ and $B_k$ are the smallest and largest prime factors of $m$ respectively.
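The decomposition can be sketched in code (for $n\geq2$); the helper name `decompose` is just an illustrative choice:

```python
# Decompose n = m^h * r: h is the largest exponent in the
# factorisation of n, m is the (square-free) product of the primes
# attaining it, and r is the h-free remainder coprime to m.
def decompose(n):
    fac, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            fac[d] = fac.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        fac[n] = fac.get(n, 0) + 1
    h = max(fac.values())
    m, r = 1, 1
    for p, e in fac.items():
        if e == h:
            m *= p
        else:
            r *= p ** e
    return h, m, r

print(decompose(52920))  # 52920 = 2^3 * 3^3 * 5 * 7^2 -> (3, 6, 245)
```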
Using this knowledge,
$$\sum_{k = 2}^n{\frac{b_k}{B_k}}\tag{1}$$
can be re-written as
$$\sum_{h\ge 2}\sum_{r\le n}q_h(r)\sum_{\substack{m^h r\le n \\ m \ge 2}}\left(q_2(m)\cdot[\gcd(m, r)= 1]\cdot\frac{\text{smallest prime factor of }m}{\text{largest prime factor of }m}\right),\tag{2}$$
where
$$q_h(r) = \begin{cases}
1 & \text{if $r$ is $h$-free} \\
0 & \text{otherwise}
\end{cases}.$$
Now it just remains to find some asymptotic for
$$\sum_{m = 2}^x{\left(q_2(m)\cdot[\gcd(m, r) = 1]\cdot\frac{\text{smallest prime factor of }m}{\text{largest prime factor of }m}\right)}\tag{3}$$
in terms of $x$.
We can re-use some of the results and techniques from this source: https://arxiv.org/pdf/1907.09129.pdf
By writing the smallest and largest prime factors of $m$ as $p(m)$ and $P(m)$ respectively, we have
$$\sum_{m = 2}^x{\left(q_2(m)\cdot[\gcd(m, r) = 1]\cdot\frac{p(m)}{P(m)}\right)} \ = \ \sum_i\sum_{\substack{2 \le m \le x \\ \omega(m) = i}}\left(q_2(m)\cdot[\gcd(m, r) = 1]\cdot\frac{p(m)}{P(m)}\right)$$
$$=\sum_{p \le x}\left(q_2(p)\cdot [\gcd(p, r) = 1] \cdot \frac{p}{p}\right) \ + \ O\left(\sum_{i \ge 2}\sum_{\substack{2 \le m \le x \\ \omega(m) = i}}\frac{p(m)}{P(m)}\right) \ = \ \sum_{\substack{p \le x \\ p \nmid r}}1 \ + \ O\left(\frac{x}{\log^2(x)}\right)\tag{4}$$
where the bounding of the RHS came from the source above.
Clearly, the main term in the limit comes from the summation on the left, thus we need only plug this back into $(2)$, with $x = (\frac{n}{r})^{\frac{1}{h}}$.
This gives,
$$\sum_{h\ge 2}\sum_{r\le n}q_h(r)\sum_{\substack{p \le (\frac{n}{r})^{\frac{1}{h}} \\ p \nmid r}}1 \ = \ \sum_{h\ge 2}\ \sum_{p^h \le n}\ \sum_{\substack{r \le \frac{n}{p^h} \\ p \nmid r}}q_h(r)\tag{5}$$
Now, using the fact that $q_h(r) = \sum_{d^h \mid r}{\mu(d)}$, one can find that
$$\sum_{\substack{r \le y \\ p \nmid r}}q_h(r) \ \sim \ \frac{1}{\zeta(h)} \cdot \frac{p^h - p^{h-1}}{p^h - 1} \cdot y \tag{6}$$
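Estimate $(6)$ can be sanity-checked numerically; a sketch with the illustrative choices $h=2$, $p=3$, $y=10^5$, hard-coding $\zeta(2)=\pi^2/6$:

```python
import math

# Count h-free integers r <= y with p not dividing r, and compare with
# the predicted density (1/zeta(h)) * (p^h - p^(h-1)) / (p^h - 1).
h, p, y = 2, 3, 10**5
hfree = [True] * (y + 1)          # hfree[r] <-> r is h-free
d = 2
while d ** h <= y:
    for k in range(d ** h, y + 1, d ** h):
        hfree[k] = False
    d += 1
count = sum(1 for r in range(1, y + 1) if r % p and hfree[r])
predicted = (6 / math.pi ** 2) * (p**h - p**(h - 1)) / (p**h - 1) * y
print(count, predicted)
```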
And substituting into $(5)$ leads to our final answer
$$\sum_{h\ge 2}{ \ \sum_{ \ p^h \le n}{\frac{1}{\zeta(h)} \cdot \frac{p^h - p^{h-1}}{p^h - 1} \cdot \frac{n}{p^h}}}\tag{7}$$
The $n$ cancels with the $\frac{1}{n}$ in the limit, hence the final answer is
$$c = \sum_{h\ge 2}{\frac{1}{\zeta(h)} \sum_{p}{\frac{p - 1}{p(p^h - 1)}}}. \tag{8}$$
Edit:
I got a value of $0.36604$ as a close upper bound to this sum.
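That value can be reproduced by truncating $(8)$; a sketch where the cutoffs $h\leq40$, $p\leq10^5$ and the partial sum used for $\zeta(h)$ are all illustrative choices:

```python
# Truncated evaluation of c = sum_{h>=2} (1/zeta(h)) *
# sum_p (p-1)/(p*(p^h - 1)); the neglected tails are tiny.
def primes_upto(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(range(i * i, n + 1, i))
    return [i for i, b in enumerate(sieve) if b]

def zeta(s, terms=10**4):
    # crude partial sum; adequate for s >= 2 at this accuracy
    return sum(k ** -s for k in range(1, terms + 1))

P = primes_upto(10**5)
c = sum(
    (1 / zeta(h)) * sum((p - 1) / (p * (p**h - 1)) for p in P)
    for h in range(2, 41)
)
print(c)  # ≈ 0.366
```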
Best Answer
Let $h$ be a multiplicative function defined by $h(p^k)=k$ for all prime powers $p^k$. Then you are trying to calculate the mean value of $h$.
This can be done using the general method in this answer: On the mean value of a multiplicative function: Prove that $\sum\limits_{n\leq x} \frac{n}{\phi(n)} =O(x) $
The idea is that since $h$ is "very close" to $1$, $f=\mu*h$ will be very close to zero, where $\mu$ is the Möbius mu function and $*$ represents Dirichlet convolution. In particular, computing the Möbius inversion, we have $f(p)=0$, $f(p^k)=1$ for $k\geq 2$, and $1*f=h$.
From the method in the above answer, it follows that
$$\sum_{n\leq x} h(n) = x\sum_{d}\frac{f(d)}{d}+O(\sqrt{x}),$$ and hence the mean value is $$\sum_{d}\frac{f(d)}{d} = \prod_{p}\left(1+\frac{1}{p(p-1)}\right)$$
where the final form is an Euler product. The Euler product can be rearranged as $$\prod_{p}\left(1+\frac{1}{p(p-1)}\right)=\prod_{p}\frac{p^{2}-p+1}{p(p-1)}=\prod_{p}\frac{p^{6}-1}{p(p^{2}-1)(p^{3}-1)}=\frac{\zeta(2)\zeta(3)}{\zeta(6)}.$$
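This mean value can be checked by brute force; a sketch with the illustrative cutoff $x=10^5$ (the $O(\sqrt{x})$ error term means only a couple of digits are expected to match):

```python
import math

# Average h(n) for n <= x, where h is multiplicative with h(p^k) = k,
# and compare with zeta(2)*zeta(3)/zeta(6) ≈ 1.9436.
x = 10**5
spf = list(range(x + 1))           # smallest-prime-factor sieve
for i in range(2, int(x ** 0.5) + 1):
    if spf[i] == i:
        for j in range(i * i, x + 1, i):
            if spf[j] == j:
                spf[j] = i

def h(n):
    out = 1
    while n > 1:
        p, k = spf[n], 0
        while n % p == 0:
            n //= p
            k += 1
        out *= k
    return out

mean = sum(h(n) for n in range(1, x + 1)) / x
zeta3 = sum(k ** -3 for k in range(1, 10**4))   # partial sum for zeta(3)
target = (math.pi**2 / 6) * zeta3 / (math.pi**6 / 945)
print(mean, target)
```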
See also this list of similar questions using the same technique.