I wasn't planning on answering here, but since someone mentioned my paper in the long comment thread above, maybe I should anyway.
When I'm solving problems by hand, one of the sets of patterns I frequently use involves uniqueness: something can't happen because it would lead to a puzzle with more than one solution, but a well-posed Sudoku puzzle has only one solution, so it's safe to assume that whatever it is doesn't happen. For instance, it's not possible to have four initially-empty cells in a rectangle within the same two rows, the same two columns, and the same two 3x3 squares, with all four cells containing one of the same two values: the two values could then be placed in two different ways, and the rest of the puzzle wouldn't notice the change. So if there's only one way to prevent a rectangle like this from forming, the solution has to use that one way.
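Here's a minimal sketch of that elimination (the data layout and names are mine, not necessarily what my solver uses): given candidate sets for the unsolved cells, it finds rectangles where three corners are down to the same two candidates, so the fourth corner must avoid both.

    from itertools import combinations

    def unique_rectangle_eliminations(cands, givens):
        """cands: dict (row, col) -> set of candidate digits, unsolved cells only.
        givens: set of (row, col) positions that were clues in the original puzzle.
        Returns (cell, values_to_eliminate) pairs justified by uniqueness."""
        elims = []
        for r1, r2 in combinations(range(9), 2):
            for c1, c2 in combinations(range(9), 2):
                # The four corners must lie in exactly two 3x3 boxes.
                if (r1 // 3 == r2 // 3) == (c1 // 3 == c2 // 3):
                    continue
                corners = [(r1, c1), (r1, c2), (r2, c1), (r2, c2)]
                # Every corner must have been empty in the original puzzle.
                if any(cell in givens or cell not in cands for cell in corners):
                    continue
                for fourth in corners:
                    rest = [cands[c] for c in corners if c != fourth]
                    # Three corners reduced to the same pair {a, b} ...
                    if all(len(s) == 2 for s in rest) and rest[0] == rest[1] == rest[2]:
                        ab = rest[0]
                        # ... so the fourth corner can't be a or b; otherwise the
                        # rectangle could be "flipped" to give a second solution.
                        if ab < cands[fourth]:  # proper subset: extras exist
                            elims.append((fourth, ab))
        return elims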
I have the sense (though I could be wrong) that this sort of inference does not imply a unique fractional solution. But of course a puzzle could have a unique fractional solution anyway; my computer solver knows these rules too, but uses them less frequently than I do by hand.
ETA: It does indeed seem to be true that these rules don't imply unique fractional solutions. With them, my solver easily solves the following puzzle, which has a unique integer solution but does not have a unique fractional solution. Without them, my solver can still solve the same puzzle, but only by using rules that are (in my experience) much more difficult to apply by hand.
-------------------------
| 3 7 8 | 6 4 5 | 1 2 9 |
| 6 9 . | . 7 . | 4 8 5 |
| 4 . 5 | 9 . 8 | 3 7 6 |
|-------+-------+-------|
| 7 . 9 | 5 3 . | 8 6 4 |
| 8 4 . | 7 . . | 5 9 3 |
| 5 3 6 | 8 9 4 | 2 1 7 |
|-------+-------+-------|
| 1 6 3 | 2 5 7 | 9 4 8 |
| 9 8 4 | . . . | 7 5 2 |
| 2 5 7 | 4 8 9 | 6 3 1 |
-------------------------
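If you'd like to double-check the claim yourself, here's a throwaway brute-force counter (my own sketch, independent of the solver described above); it stops as soon as it finds a second completion:

    def count_solutions(grid, limit=2):
        """Count completions of a 9x9 grid (0 = empty), stopping at `limit`."""
        def ok(r, c, v):
            if any(grid[r][j] == v for j in range(9)):
                return False
            if any(grid[i][c] == v for i in range(9)):
                return False
            br, bc = 3 * (r // 3), 3 * (c // 3)
            return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    total = 0
                    for v in range(1, 10):
                        if ok(r, c, v):
                            grid[r][c] = v
                            total += count_solutions(grid, limit - total)
                            grid[r][c] = 0
                            if total >= limit:
                                return total
                    return total
        return 1  # no empty cell left: the grid is one complete solution

    puzzle = [  # the grid above, with 0 for the empty cells
        [3, 7, 8, 6, 4, 5, 1, 2, 9],
        [6, 9, 0, 0, 7, 0, 4, 8, 5],
        [4, 0, 5, 9, 0, 8, 3, 7, 6],
        [7, 0, 9, 5, 3, 0, 8, 6, 4],
        [8, 4, 0, 7, 0, 0, 5, 9, 3],
        [5, 3, 6, 8, 9, 4, 2, 1, 7],
        [1, 6, 3, 2, 5, 7, 9, 4, 8],
        [9, 8, 4, 0, 0, 0, 7, 5, 2],
        [2, 5, 7, 4, 8, 9, 6, 3, 1],
    ]
    print(count_solutions(puzzle))  # 1 if the integer solution is indeed unique

And a sketch of the fractional side, under the standard encoding of Sudoku as a 0-1 program with $729$ indicator variables $x_{r,c,v}$ (this uses scipy's LP solver; the function name is mine): optimizing a few random objectives over the fractional polytope and getting two distinct optima witnesses a non-unique fractional solution.

    import numpy as np
    from scipy.optimize import linprog

    def fractional_spread(puzzle, trials=5, seed=0):
        """Optimize random objectives over {x >= 0 : every cell, row, column and
        box sums to 1, clue variables fixed at 1}. A clearly positive spread
        between the optima found witnesses a non-unique fractional solution."""
        idx = lambda r, c, v: 81 * r + 9 * c + v
        rows = []
        for a in range(9):
            for b in range(9):
                cell, row, col, box = (np.zeros(729) for _ in range(4))
                for k in range(9):
                    cell[idx(a, b, k)] = 1   # cell (a, b) holds exactly one value
                    row[idx(a, k, b)] = 1    # value b+1 appears once in row a
                    col[idx(k, a, b)] = 1    # value b+1 appears once in column a
                    br, bc = 3 * (a // 3) + k // 3, 3 * (a % 3) + k % 3
                    box[idx(br, bc, b)] = 1  # value b+1 appears once in box a
                rows += [cell, row, col, box]
        A_eq, b_eq = np.array(rows), np.ones(324)
        bounds = [(0, 1)] * 729
        for r in range(9):
            for c in range(9):
                if puzzle[r][c]:
                    bounds[idx(r, c, puzzle[r][c] - 1)] = (1, 1)
        rng = np.random.default_rng(seed)
        pts = [linprog(rng.standard_normal(729), A_eq=A_eq, b_eq=b_eq,
                       bounds=bounds, method="highs").x for _ in range(trials)]
        return max(np.abs(p - pts[0]).max() for p in pts)

    print(fractional_spread(puzzle))  # should be clearly positive for this grid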
Or if you want something that looks like the puzzles that get published, here's another one that uses the same mechanism:
-------------------------
| 1 . . | 8 . 7 | 5 . . |
| . 5 . | 6 . . | 9 7 . |
| . 6 . | . 4 . | . . . |
|-------+-------+-------|
| . . 2 | . . . | . 6 . |
| 6 . . | 4 . 3 | . . . |
| . 4 . | . . . | 1 . . |
|-------+-------+-------|
| . . . | . 1 . | . 5 . |
| . 1 3 | . . 6 | . 9 . |
| . . 7 | 5 . 4 | . . 1 |
-------------------------
For the asymptotic case: let $t_1=n \log n - Cn$ and $t_2 = n \log n + Cn$, where $C$ tends to infinity slowly (say $C=o(\log n)$). It is a classic result that as $C$ tends to infinity, the probability that all coupons have been collected by time $t_1$ tends to $0$, and the probability that all coupons have been collected by time $t_2$ tends to $1$.
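A quick Monte Carlo sanity check of both facts (a sketch, with a fixed $C$ standing in for the slowly growing one):

    import math
    import random

    def collected_all(n, t):
        """One trial: after t uniform draws from n coupon types, do we have them all?"""
        return len({random.randrange(n) for _ in range(t)}) == n

    def completion_prob(n, t, trials=500):
        return sum(collected_all(n, t) for _ in range(trials)) / trials

    n, C = 1000, 3
    t1 = int(n * math.log(n) - C * n)
    t2 = int(n * math.log(n) + C * n)
    print(completion_prob(n, t1))  # near 0
    print(completion_prob(n, t2))  # near 1 (about e^{-e^{-C}} for fixed C)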
So the number asked for in the original post is with high probability sandwiched between the count of the most-collected coupon at time $t_1$ and the count at time $t_2$. This has been studied a fair amount, though often in the language of load balancing and balls in bins ("if you toss $m$ balls into $n$ bins, what is the typical number of balls in the fullest bin?"). In particular, there's a nice analysis by Raab and Steger that uses the second moment method to give tight concentration. It follows from Theorem 1 in their paper that at time $c n \log n$, the most common coupon has almost surely been collected $(d_c+o(1)) \log n$ times, where $d_c$ is the larger real root of
$$1+x(\log c - \log x +1)-c=0$$
In our case $c=1$, so the equation reduces to $x(1-\log x)=0$ and $d_1=e$: the most common song will have been heard $(e+o(1))\log n = (e+o(1))H_n$ times.
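For other values of $c$, $d_c$ is easy to extract numerically (a sketch: the left-hand side equals $1$ at $x=c$ and is decreasing beyond it, so the larger root can be bracketed from $x=c$ upward):

    import math
    from scipy.optimize import brentq

    def d(c):
        """Larger real root of 1 + x*(log c - log x + 1) - c = 0."""
        f = lambda x: 1 + x * (math.log(c) - math.log(x) + 1) - c
        lo, hi = c, 2 * c + 2
        while f(hi) > 0:  # expand until the root is bracketed
            hi *= 2
        return brentq(f, lo, hi)

    print(d(1.0), math.e)  # both approximately 2.71828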
In response to the comment: the first moment part of the calculation also comes with some concentration estimates for free. At time $t_1$, the probability that we've seen some coupon at least $C \log n$ times can be bounded by
\begin{eqnarray*}
& & \sum_{j=C \log n}^{t_1} n \binom{t_1}{j} n^{-j} (1-\frac{1}{n})^{t_1-j} \\
&\leq& \sum_{j=C \log n}^{t_1} n \left(\frac{e t_1}{ nj}\right)^j (1-\frac{1}{n})^{(1+o(1)) n \log n} (1-\frac{1}{n})^{-j} \\
&=& n^{o(1)} \sum_{j=C \log n}^{t_1} \left(\frac{e \log n(1+o(1))}{j}\right)^j \\
&\leq& n^{o(1)} \sum_{j=C \log n}^{t_1} \left(\frac{e}{C}+o(1)\right)^j
\end{eqnarray*}
which should decay rapidly enough in $C$ to guarantee that the expected maximum lies close to the value around which it concentrates.
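Empirically the maximum at time $t_1$ does sit well below $C \log n$ (a quick sketch with fixed parameters):

    import math
    import random
    from collections import Counter

    def max_count(n, t):
        """Largest multiplicity among n coupon types after t uniform draws."""
        return max(Counter(random.randrange(n) for _ in range(t)).values())

    n, C = 2000, 3
    t1 = int(n * math.log(n) - C * n)
    print(max(max_count(n, t1) for _ in range(20)))  # typically well below...
    print(C * math.log(n))                           # ...C log n, about 22.8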
The linked bounds for the probability of the completion time lying outside $[t_1, t_2]$ also decay quite rapidly (e.g. cardinal's answer, and David's subsequent comments on it).
Each bottle of wine corresponds to the set of rats who tasted it. Let $\mathcal{F}$ be the family of the resulting sets. If the bottles corresponding to sets $A$ and $B$ are poisoned, then $A \cup B$ is the set of dead rats. Therefore we can identify the poisoned bottles as long as for all $A,B,C,D \in \mathcal{F}$ with $A \cup B = C \cup D$ we have $\{A, B\} = \{C, D\}$. Families with this property are called (strongly) union-free, and the maximum possible size $f(n)$ of a union-free family $\mathcal{F} \subset 2^{[n]}$ has been studied in extremal combinatorics. In the context of the question, $f(n)$ is the maximum number of bottles of wine that can be tested by $n$ rats.
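To make the identification step concrete, here's a small sketch (the names are mine): a brute-force check that a family is strongly union-free in the sense above, and the decoding that recovers the poisoned pair from the set of dead rats.

    from itertools import combinations

    def strongly_union_free(family):
        """family: list of frozensets (the rats tasting each bottle). True iff
        all pairwise unions are distinct, so A | B = C | D forces {A,B} = {C,D}."""
        seen = set()
        for i, j in combinations(range(len(family)), 2):
            u = family[i] | family[j]
            if u in seen:
                return False
            seen.add(u)
        return True

    def find_poisoned(family, dead):
        """Recover the unique pair of bottles whose tasters are exactly `dead`."""
        matches = [(i, j) for i, j in combinations(range(len(family)), 2)
                   if family[i] | family[j] == dead]
        assert len(matches) == 1, "not decodable: family isn't union-free enough"
        return matches[0]

    # Toy example with 5 rats and 4 bottles.
    family = [frozenset(s) for s in ({0, 1}, {1, 2}, {2, 3}, {3, 4})]
    assert strongly_union_free(family)
    print(find_poisoned(family, frozenset({0, 1, 2, 3})))  # bottles (0, 2)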
In the paper "Union-free Hypergraphs and Probability Theory", Frankl and Füredi show that $$2^{(n-3)/4} \leq f(n) \leq 2^{(n+1)/2}.$$ The proof of the lower bound is algebraic, constructive, and, I think, very elegant. In particular, one can find $2$ poisoned bottles out of $1000$ with $43$ rats.
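The $43$ is just the lower bound inverted: the smallest $n$ with $2^{(n-3)/4} \geq 1000$.

    import math

    bottles = 1000
    n = math.ceil(4 * math.log2(bottles) + 3)  # solve 2^((n-3)/4) >= bottles
    print(n)  # 43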