I will prove that the limit exists and is precisely the Golomb-Dickman constant $\lambda$. First, let me sketch how one can show that the expected value of $\frac{\log(P_1(n))}{\log(n)}$, where $P_1(n)$ denotes the largest prime factor of $n$, is precisely $\lambda\approx0.62433$ via probability theory.
We say that an integer $n\geq1$ is $y$-smooth if every prime divisor of $n$ is $\leq y$. We also say that $n\geq1$ is $y$-powersmooth if every prime power divisor of $n$ is $\leq y$. Let's denote by $S(x,y)$ and $PS(x,y)$ the set of $y$-smooth and $y$-powersmooth numbers $\leq x$ respectively. It is a theorem of Dickman that
$$F_S(\nu):=\lim_{x\to+\infty}\frac{1}{x}|S(x,x^{\nu})|=\rho(1/\nu)$$
where $\rho(t)$ is Dickman's function satisfying the differential equation $t\rho'(t)+\rho(t-1)=0$.
For large $x\gg0$, pick $tx\in[0,x]$ uniformly at random, where $t\in[0,1]$ (that is, consider the random variable $xT\sim U(0,x)$, where $U(0,x)$ is the discrete uniform distribution on the integers up to $x$, which "looks" continuous for large $x$). What is the probability that $\frac{\log(P_1(tx))}{\log(tx)}\leq\nu$? Well
$$tx\in S(x,x^\nu)\iff P_1(tx)\leq x^\nu\iff \frac{\log(P_1(tx))}{\log(tx)}\leq\frac{\nu\log(x)}{\log(tx)}=\frac{\nu}{1+\frac{\log(t)}{\log(x)}}\sim\nu$$
so, for large $x\gg0$, we have $P\left(\frac{\log(P_1(xT))}{\log(xT)}\leq\nu\right)\sim P(xT\in S(x,x^\nu))=\frac{1}{x}|S(x,x^\nu)|\sim F_S(\nu)$. This means that $F_S(\nu)$ is the cumulative distribution function of $\frac{\log(P_1(n))}{\log(n)}$ so we can compute the expected value as
$$\lim_{x\to+\infty}\frac{1}{x}\sum_{n\leq x}\frac{\log(P_1(n))}{\log(n)}=\int_0^1\nu F_S'(\nu)d\nu=\int_0^1\frac{-\rho'(1/\nu)}{\nu}d\nu$$
$$=\int_0^1\rho\left(\frac{1}{\nu}-1\right)d\nu=\int_0^\infty\frac{\rho(t)}{(1+t)^2}dt=\lambda$$
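As a numerical sanity check, this integral can be evaluated directly. Below is a minimal Python sketch (the discretization choices are mine); it uses the equivalent integral form $t\rho(t)=\int_{t-1}^{t}\rho(u)\,du$, obtained by integrating the delay differential equation and using $\rho(t)=1$ on $[0,1]$:

```python
# Numerically evaluate lambda = int_0^inf rho(t)/(1+t)^2 dt.
# We solve t*rho(t) = int_{t-1}^t rho(u) du on a uniform grid with the
# trapezoidal rule, starting from rho(t) = 1 on [0, 1].
h = 1e-3                        # grid step (my choice)
steps_per_unit = round(1 / h)
T = 20                          # rho(20) is astronomically small, so the tail is negligible
rho = [1.0] * (steps_per_unit + 1)   # rho(t) = 1 for 0 <= t <= 1

for i in range(steps_per_unit + 1, round(T / h) + 1):
    t = i * h
    # trapezoid for int_{t-1}^t rho, treating the unknown rho(t) implicitly:
    # t*rho_i = h*(rho_{i-K}/2 + rho_{i-K+1} + ... + rho_{i-1} + rho_i/2)
    s = 0.5 * rho[i - steps_per_unit] + sum(rho[i - steps_per_unit + 1:i])
    rho.append(h * s / (t - 0.5 * h))

# trapezoidal integration of rho(t)/(1+t)^2 over [0, T]
vals = [r / (1 + i * h) ** 2 for i, r in enumerate(rho)]
lam = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
print(f"{lam:.5f}")             # should be close to 0.62433
```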
Now we denote the greatest prime power divisor of $n\geq1$ as $Q_1(n)$. For example,
$$Q_1(24)=Q_1(2^3\times3)=2^3=8\text{ and }Q_1(72)=Q_1(2^3\times3^2)=3^2=9$$
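For illustration, $Q_1$ can be computed by trial division; a minimal Python sketch (the function name is mine, and this is only practical for small $n$):

```python
def Q1(n: int) -> int:
    """Largest prime power divisor of n (trial division; n >= 2)."""
    best, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            pk = 1
            while n % p == 0:    # extract the full power of p dividing n
                n //= p
                pk *= p
            best = max(best, pk)
        p += 1
    if n > 1:                    # leftover n is a prime dividing the original to the first power
        best = max(best, n)
    return best

print(Q1(24), Q1(72))  # 8 9
```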
We want to compute the expected value of $\frac{\log(Q_1(n))}{\log(n)}$ in a similar manner as before. The key idea is to note that, for large $x\gg0$ and $tx\in[0,x]$ chosen uniformly, we have
$$tx\in PS(x,x^\nu)\iff Q_1(tx)\leq x^\nu\iff\frac{\log(Q_1(tx))}{\log(tx)}\leq\frac{\nu\log(x)}{\log(tx)}\sim\nu$$
so, again, $P\left(\frac{\log(Q_1(xT))}{\log(xT)}\leq\nu\right)\sim\frac{1}{x}|PS(x,x^\nu)|$. Thus, if we had $\lim\limits_{x\to+\infty}\frac{|S(x,x^\nu)|-|PS(x,x^\nu)|}{x}=0$ (which I'll prove in just a moment), it would immediately follow that
$$\lim_{x\to+\infty}\frac{1}{x}|PS(x,x^\nu)|=\lim_{x\to+\infty}\frac{1}{x}|S(x,x^\nu)|=F_S(\nu)=\rho(1/\nu)$$
$$\Rightarrow\lim_{x\to+\infty}\frac{1}{x}\sum_{n\leq x}\frac{\log(Q_1(n))}{\log(n)}=\int_0^1\nu F_S'(\nu)d\nu=\int_0^\infty\frac{\rho(t)}{(1+t)^2}dt=\lambda$$
as desired. So it suffices to show that $\lim\limits_{x\to+\infty}\frac{|S(x,x^\nu)|-|PS(x,x^\nu)|}{x}=0$.
First observe that every $y$-smooth number is $y$-powersmooth. Thus we have that
$$0\leq|S(x,y)\setminus PS(x,y)|=|S(x,y)|-|PS(x,y)|$$
For every prime $p\leq y$, denote by $M_p(x,y)=\{p^{1+\lfloor\log_p y\rfloor}k : k\geq1,\ p^{1+\lfloor\log_p y\rfloor}k\leq x\}$ the set of multiples of $p^{1+\lfloor\log_p y\rfloor}$ which are $\leq x$, where $p^{1+\lfloor\log_p y\rfloor}$ is the least power of $p$ that exceeds $y$.
Now observe that, if $n\in S(x,y)\setminus PS(x,y)$, then $n$ is a product of powers of primes $\leq y$, say $n=\prod_{p_i\leq y}p_i^{\epsilon_i}$ (where $\epsilon_i$ can be zero), and, at the same time, there exists $p_i\leq y$ such that $p_i^{\epsilon_i}>y$, i.e. $\epsilon_i\geq1+\lfloor\log_{p_i}y\rfloor$. But this means that $n\in S(x,y)\setminus PS(x,y)$ implies $n\in M_p(x,y)$ for some prime $p\leq y$, so $S(x,y)\setminus PS(x,y)\subseteq\bigcup_{p\leq y}M_p(x,y)$. Thus, we can bound $|S(x,y)|-|PS(x,y)|$ by
$$|S(x,y)|-|PS(x,y)|=|S(x,y)\setminus PS(x,y)|\leq\left|\bigcup_{p\leq y}M_p(x,y)\right|\leq\sum_{p\leq y}|M_p(x,y)|$$
$$=\sum_{p\leq y}\left\lfloor\frac{x}{p^{1+\lfloor\log_p y\rfloor}}\right\rfloor\leq\sum_{p\leq y}\frac{x}{p^{1+\lfloor\log_p y\rfloor}}\leq\sum_{p\leq y}\frac{x}{p^{\log_p(y)}}=\frac{x}{y}\sum_{p\leq y}1=\frac{\pi(y)}{y}x$$
$$\therefore\quad\boxed{0\leq\frac{|S(x,y)|-|PS(x,y)|}{x}\leq\frac{\pi(y)}{y}}$$
Finally, note that $\frac{\pi(y)}{y}\sim\frac{1}{\log(y)}\to0$ by the prime number theorem which, in particular, implies $\lim\limits_{x\to+\infty}\frac{|S(x,x^\nu)|-|PS(x,x^\nu)|}{x}=0$. In fact, we only need $\frac{\pi(y)}{y}\to 0$ (i.e. the zero natural density of prime numbers) which is not that hard to prove.$\ \square$
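The boxed bound is easy to check empirically; a brute-force Python sketch (the small values of $x$ and $y$ are my choice, and factoring is by trial division):

```python
def factor(n: int) -> dict:
    """Prime factorization of n by trial division."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

x, y = 10_000, 100
smooth = powersmooth = 0
for n in range(1, x + 1):
    f = factor(n)
    if all(p <= y for p in f):                        # y-smooth
        smooth += 1
        if all(p ** e <= y for p, e in f.items()):    # y-powersmooth
            powersmooth += 1

pi_y = sum(all(p % d for d in range(2, int(p**0.5) + 1)) for p in range(2, y + 1))
assert 0 <= smooth - powersmooth <= pi_y * x / y      # the boxed inequality
print(smooth, powersmooth, pi_y * x // y)
```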
We can write any integer in the form $m^h r$, where $m$ is squarefree and $r$ is $h$-free and coprime to $m$.
For example, $ \ 10584 = 2^3\cdot 3^3 \cdot 5\cdot 7^2 = 6^3 \cdot 245$
This way, we can generate the values of $k$, already knowing the values of $m$ and $h$. Also, note that $H_k$ is equal to $h$, and $b_k$ and $B_k$ are the smallest and largest prime factors of $m$ respectively.
Using this knowledge,
$$\sum_{k = 2}^n{\frac{b_k}{B_k}}\tag{1}$$
can be re-written as
$$\sum_{h\ge 2}\sum_{r\le n}q_h(r)\sum_{\substack{m^hr\le n\\ m \ge 2}}\left(q_2(m)\cdot[\gcd(m, r)= 1]\cdot\frac{\text{smallest prime factor of }m}{\text{largest prime factor of }m}\right),\tag{2}$$
where
$$q_h(r) = \begin{cases}
1 & \text{if $r$ is $h$-free} \\
0 & \text{otherwise.}
\end{cases}$$
Now it just remains to find some asymptotic for
$$\sum_{m = 2}^x\left(q_2(m)\cdot[\gcd(m, r) = 1]\cdot\frac{\text{smallest prime factor of }m}{\text{largest prime factor of }m}\right)\tag{3}$$
in terms of $x$.
We can reuse some results and techniques from https://arxiv.org/pdf/1907.09129.pdf
By writing the smallest and largest prime factors of $m$ as $p(m)$ and $P(m)$ respectively, we have
$$\sum_{m = 2}^x\left(q_2(m)\cdot[\gcd(m, r) = 1]\cdot\frac{p(m)}{P(m)}\right) \ = \ \sum_i\,\sum_{\substack{2 \le m \le x\\ \omega(m) = i}}\left(q_2(m)\cdot[\gcd(m, r) = 1]\cdot\frac{p(m)}{P(m)}\right)$$
$$=\sum_{p \le x}\left(q_2(p)\cdot [\gcd(p, r) = 1] \cdot \frac{p}{p}\right) \ + \ O\left(\sum_{i \ge 2}\,\sum_{\substack{2 \le m \le x\\ \omega(m) = i}}\frac{p(m)}{P(m)}\right) \ = \ \sum_{\substack{p \le x\\ p \nmid r}}1 \ + \ O\left(\frac{x}{\log^2(x)}\right)\tag{4}$$
where the bounding of the RHS came from the source above.
Clearly, the main term in the limit comes from the summation on the left, thus we need only plug this back into $(2)$, with $x = (\frac{n}{r})^{\frac{1}{h}}$.
This gives,
$$\sum_{h\ge 2}\sum_{r\le n}q_h(r)\sum_{\substack{p \le (n/r)^{1/h}\\ p \nmid r}}1 \ = \ \sum_{h\ge 2}\sum_{p^h \le n}\sum_{\substack{r \le n/p^h\\ p \nmid r}}q_h(r)\tag{5}$$
Now, using the fact that $q_h(r) = \sum_{d^h \mid r}{\mu(d)}$, one can find that
$$\sum_{\substack{r \le y\\ p \nmid r}}q_h(r) \ \sim \ \frac{1}{\zeta(h)} \cdot \frac{p^h - p^{h-1}}{p^h - 1} \cdot y \tag{6}$$
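The asymptotic $(6)$ can be sanity-checked numerically; a brute-force Python sketch (the parameters $h=2$, $p=3$, $y=10^4$ are my choice, and the $h$-free test is done by direct factorization rather than via the Möbius identity):

```python
from math import pi

def is_hfree(r: int, h: int) -> bool:
    """True if no prime appears in r with exponent >= h (trial division)."""
    d = 2
    while d * d <= r:
        if r % d == 0:
            e = 0
            while r % d == 0:
                r //= d
                e += 1
            if e >= h:
                return False
        d += 1
    return True  # any leftover prime factor has exponent 1 < h

h, p, y = 2, 3, 10**4
count = sum(1 for r in range(1, y + 1) if r % p and is_hfree(r, h))
# zeta(2) = pi^2/6, so the predicted density is (6/pi^2) * (p^h - p^(h-1)) / (p^h - 1)
predicted = (6 / pi**2) * (p**h - p**(h - 1)) / (p**h - 1) * y
print(count, round(predicted))
assert abs(count / predicted - 1) < 0.02
```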
And substituting into $(5)$ leads to our final answer
$$\sum_{h\ge 2}{ \ \sum_{ \ p^h \le n}{\frac{1}{\zeta(h)} \cdot \frac{p^h - p^{h-1}}{p^h - 1} \cdot \frac{n}{p^h}}}\tag{7}$$
The $n$ cancels with the $\frac{1}{n}$ in the limit, hence the final answer is
$$c = \sum_{h\ge 2}{\frac{1}{\zeta(h)} \sum_{p}{\frac{p - 1}{p(p^h - 1)}}}. \tag{8}$$
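For reference, $(8)$ can be evaluated numerically; a Python sketch (the truncation points are my own choices, with $\zeta(h)$ computed by direct summation plus an integral tail estimate):

```python
def zeta(s: int, N: int = 10**5) -> float:
    # direct sum plus the integral tail int_N^inf x^-s dx = N^(1-s)/(s-1)
    return sum(n**-s for n in range(1, N + 1)) + N**(1 - s) / (s - 1)

def sieve(limit: int) -> list:
    """Primes up to limit by the sieve of Eratosthenes."""
    is_p = [True] * (limit + 1)
    is_p[0] = is_p[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if is_p[i]:
            for j in range(i * i, limit + 1, i):
                is_p[j] = False
    return [i for i, b in enumerate(is_p) if b]

primes = sieve(10**5)
c = 0.0
for h in range(2, 60):          # terms beyond h = 59 are below 1e-18
    inner = 0.0
    for p in primes:
        term = (p - 1) / (p * (p**h - 1))
        inner += term
        if term < 1e-18:        # remaining terms in p are negligible
            break
    c += inner / zeta(h)
print(f"{c:.5f}")
```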
Edit:
I got a value of $0.36604$ as a close upper bound to this sum.
Best Answer
The limit exists, and equals the constant $1.70521...$ by which you've lower bounded it. To show this, we first need a claim which lets us evaluate the sum of $f(k)$.
Claim. Let $s_b(n)$ be the sum of the digits of $n$ when written in base $b$, and let $$g_b(n)=\frac{n-s_b(n)}{b-1}=\sum_{k=1}^\infty \left\lfloor\frac{n}{b^k}\right\rfloor.$$ Then, if $\mu$ is the Möbius function, $$\sum_{k=1}^nf(k)=\sum_{q=2}^n(-\mu(q))g_q(n).$$
Proof. For squarefree $q$, define $\nu_q(n)$ to be the largest integer $e$ for which $q^e$ divides $n$. Then, since $$\min(\nu_{p_1}(k),\dots,\nu_{p_s}(k))=\nu_{p_1\cdots p_s}(k),$$ the inclusion-exclusion principle gives $$f(k)=\max_{p\text{ prime}}(\nu_p(k))=\sum_{\substack{q\text{ squarefree}\\q\neq 1}}(-\mu(q))\nu_q(k).$$ The sum need only run up through $k$, since all other terms are $0$ (that is, only finitely many primes need to be considered). Now, the observation that $$g_q(n)=\sum_{k=1}^\infty \left\lfloor\frac{n}{q^k}\right\rfloor=\sum_{k=1}^\infty \sum_{\substack{1\leq m\leq n\\q^k\mid m}}1=\sum_{m=1}^n\nu_q(m)$$ gives the desired result.
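The claim can also be checked by brute force; a Python sketch (the helper functions are mine, with $f(k)=\max_p \nu_p(k)$ and $f(1)=0$):

```python
def max_exp(k: int) -> int:
    """f(k) = largest exponent in the prime factorization of k (f(1) = 0)."""
    best, p = 0, 2
    while p * p <= k:
        if k % p == 0:
            e = 0
            while k % p == 0:
                k //= p
                e += 1
            best = max(best, e)
        p += 1
    if k > 1:                    # leftover prime appears to the first power
        best = max(best, 1)
    return best

def mobius(q: int) -> int:
    """Moebius function by trial division."""
    mu, p = 1, 2
    while p * p <= q:
        if q % p == 0:
            q //= p
            if q % p == 0:
                return 0         # p^2 divides the original q
            mu = -mu
        p += 1
    return -mu if q > 1 else mu

def g(q: int, n: int) -> int:
    """g_q(n) = sum_{k>=1} floor(n/q^k)."""
    s, qk = 0, q
    while qk <= n:
        s += n // qk
        qk *= q
    return s

n = 300
lhs = sum(max_exp(k) for k in range(1, n + 1))
rhs = sum(-mobius(q) * g(q, n) for q in range(2, n + 1))
assert lhs == rhs
print(lhs)
```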
We will also need the following technical lemma.
Lemma. There exists some absolute constant $c$ for which, if $n$ and $k$ are positive integers with $n$ sufficiently large and $k\leq n^{0.1}-1$, then $$\left|\sum_{\frac{n}{k+1}<q\leq\frac nk}\frac{\mu(q)s_q(n)}{q-1}\right|\leq \frac{cn}{k(k+1)(\log n)^{1/4}}.$$
Proof. The main idea is that $s_q(n)/(q-1)$ changes relatively slowly in $q$ for $q$ somewhat close to $n$. With this, we'll use a result of Matomäki and Teräväinen giving cancellation in sums of the Möbius function over intervals $(x,x+x^\theta]$ for any fixed $\theta>0.55$ (the strength of their result is the fact that $0.55$ is small, while we'll only need that it is some fixed constant less than $1$, but this is the first reference I could find). This lets us save a power of $\log$ on Möbius sums over such intervals; if we split up our sum carefully, we can save this log factor overall.
Note that, since $k\leq \sqrt n-1$, all $q$ in the sum are between $\sqrt n$ and $n$, so $n$ has length $2$ when written in base $q$. The first "digit" is $k$, and the second is $n-kq$. Let $L=\frac{n}{k(k+1)}$, and fix $N=\lfloor n^{0.2}\rfloor $. Now, write $$\sum_{\frac{n}{k+1}<q\leq\frac nk}\frac{\mu(q)s_q(n)}{q-1}=\sum_{t=1}^N\sum_{\frac{n}{k+1}+\frac{(t-1)L}N<q\leq \frac n{k+1}+\frac{tL}N}\frac{\mu(q)(k+n-kq)}{q-1}.$$ We'll bound the inside sum for fixed $t$. Write $q_1=q_1(t)=\frac{n}{k+1}+\frac{(t-1)L}{N}$ and $q_2=q_2(t)=\frac n{k+1}+\frac{tL}{N}$. Note that, for $q_1<q\leq q_2$, \begin{align*} \left|\frac{k+n-kq_2}{q_2-1}-\frac{k+n-kq}{q-1}\right|&\leq \left|\frac{n}{q_2-1}-\frac n{q-1}\right|\\ &\leq \frac{n(q_2-q_1)}{(q_2-1)(q_1-1)}\leq \frac{n(L/N)}{\left(\frac n{k+1}-1\right)^2}\leq \frac{4(k+1)^2L}{Nn} \end{align*} (the constant $4$ is just for safety; the bound nearly holds if the $4$ is removed, but the $-1$ terms in the denominator mess it up slightly). This means \begin{align*} \left|\left(\sum_{q_1<q\leq q_2}\frac{\mu(q)s_q(n)}{q-1}\right)-\left(\left(\frac{n}{q_2-1}-k\right)\sum_{q_1<q\leq q_2}\mu(q)\right)\right| &\leq \sum_{q_1<q\leq q_2}\frac{|\mu(q)|4(k+1)^2L}{Nn}\\ &\leq \frac{4(k+1)^2L^2}{N^2n}=\frac{4n}{N^2k^2}.\tag{1} \end{align*} Now, since $q_2-q_1=L/N\geq n^{0.6}$ and $n^{0.9}\leq q_1<n$, we can apply the result of Matomäki and Terävainen with $\theta=0.6$ and $\epsilon=1/12$ to get that there exists some constant $c_0$ for which $$\left|\sum_{q_1<q\leq q_2}\mu(q)\right|\leq \frac{c_0(L/N)}{(\log q_1)^{1/4}}.$$ From this, since $q_1\geq n^{0.9}$, there is some constant $c_1$ for which (1) gives $$\left|\sum_{q_1<q\leq q_2}\frac{\mu(q)s_q(n)}{q-1}\right|\leq \frac{4n}{N^2k^2}+\frac{c_1n}{k(k+1)N(\log n)^{1/4}}.$$ Summing over all $N$ values of $t$ gives $$\left|\sum_{\frac{n}{k+1}<q\leq \frac nk}\frac{\mu(q)s_q(n)}{q-1}\right|\leq \frac{4n}{Nk^2}+\frac{c_1n}{k(k+1)(\log n)^{1/4}}.$$ The fact that $4/N\ll (\log n)^{-1/4}$ finishes the 
proof. $\square$
We now use our claim and lemma to isolate the "essential part" of the sum of $f(k)$. By the definition of $g_q$, $$\left|\left(\sum_{q=2}^n(-\mu(q))g_q(n)\right)-n\left(\sum_{q=2}^n\frac{-\mu(q)}{q-1}\right)\right|\leq \left|\sum_{q=2}^n\frac{\mu(q)s_q(n)}{q-1}\right|.$$ We bound the right side using our lemma. First, since $s_q(n)$ is the sum of at most $\log_qn+1$ digits in base $q$, we have $s_q(n)/(q-1)\leq \log_qn+1$, and so, if $M=\lceil\log^2 n\rceil$, $$\left|\sum_{2\leq q\leq \frac nM}\frac{\mu(q)s_q(n)}{q-1}\right|\leq \sum_{2\leq q\leq \frac nM}\left(\log_qn+1\right)\leq \frac{n(\log_2 n+1)}{M}=O\left(\frac n{\log n}\right).$$ The remaining sum can be written as $$\sum_{k=1}^{M-1}\sum_{\frac n{k+1}<q\leq \frac nk}\frac{\mu(q)s_q(n)}{q-1}\leq \sum_{k=1}^{M-1}\left|\sum_{\frac n{k+1}<q\leq \frac nk}\frac{\mu(q)s_q(n)}{q-1}\right|.$$ Each $1\leq k\leq M-1$ satisfies the conditions of the lemma, so there exists some constant $c$ for which $$\left|\sum_{\frac nM<q\leq n}\frac{\mu(q)s_q(n)}{q-1}\right|\leq \sum_{k=1}^{M-1}\frac{cn}{k(k+1)(\log n)^{1/4}}<\frac{cn}{(\log n)^{1/4}}=O\left(\frac n{(\log n)^{1/4}}\right).$$ So, in total, we have $$\boxed{\frac1n\sum_{k=1}^nf(k)=\left(\sum_{q=2}^n\frac{-\mu(q)}{q-1}\right)+O\left(\log^{-1/4}n\right)}.$$
We now finish by investigating the sum $$r_n=\sum_{q=2}^n \frac{(-\mu(q))}{q-1}.$$ Define for $k\geq 1$ the sum $t_{n,k}=\sum_{q=2}^n\frac{(-\mu(q))}{q^k}$; then $$r_n=\sum_{k=1}^\infty t_{n,k}.$$ It is known that $$1-t_{n,k}=\sum_{q=1}^n\frac{\mu(q)}{q^k}$$ converges as $n\to\infty$ (even for $k=1$) to $1/\zeta(k)$ (or $0$ if $k=1$), with $$\left|t_{n,k}-\left(1-\frac1{\zeta(k)}\right)\right|\leq \sum_{q=n+1}^\infty \frac{1}{q^k}\leq \sum_{q=n+1}^\infty \frac{1}{(q-1)^{k-1}}-\frac1{q^{k-1}}=\frac1{n^{k-1}}$$ for $k\geq 2$, and so $$\left|r_n-\left(t_{n,1}+\sum_{k=2}^\infty\left(1-\frac1{\zeta(k)}\right)\right)\right|\leq \sum_{k=2}^\infty \frac1{n^{k-1}}=\frac1{n-1}.$$ So, since $t_{n,1}\to 1$ as $n\to\infty$, this shows that $$r_n\to 1+\sum_{k=2}^\infty \left(1-\frac1{\zeta(k)}\right)\approx 1.70521.$$
Writing $\mu_0$ for this limit, we conclude that $$\frac1n\sum_{k=1}^n f(k)\to \mu_0.$$
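The numerical value is easy to confirm with a short Python sketch (my choices: the $k$-sum is truncated at $60$, since $1-1/\zeta(k)$ decays like $2^{-k}$, and $\zeta$ is computed by direct summation with an integral tail):

```python
def zeta(s: int, N: int = 10**5) -> float:
    # direct sum plus the integral tail int_N^inf x^-s dx = N^(1-s)/(s-1)
    return sum(n**-s for n in range(1, N + 1)) + N**(1 - s) / (s - 1)

# mu_0 = 1 + sum_{k>=2} (1 - 1/zeta(k)); the terms decay like 2^-k,
# so truncating at k = 60 is ample
mu0 = 1 + sum(1 - 1 / zeta(k) for k in range(2, 60))
print(f"{mu0:.5f}")  # close to 1.70521
```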