The big/little O/Ω/Θ notation is not defined or, indeed, properly definable in terms of limits. In particular, it's possible e.g. that $f(n) = \Theta(g(n))$ even though $f(n) \mathbin/ g(n)$ does not converge to a limit.
(For a simple counterexample, pick any function $g(n) > 0$ and let $f(n) = (2 + (-1)^n)\, g(n)$; that is, $f(n) = g(n)$ for odd $n$ and $f(n) = 3g(n)$ for even $n$. Then $g(n) \le f(n) \le 3g(n)$ for all $n$, so $f(n) = \Theta(g(n))$ by definition, yet the ratio $f(n)/g(n)$ oscillates between $1$ and $3$ and never converges.)
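A quick numerical check of that counterexample, with the arbitrary choice $g(n) = n$:

```python
# Counterexample check: g(n) = n is an arbitrary positive choice;
# f(n) = (2 + (-1)^n) * g(n) satisfies g(n) <= f(n) <= 3*g(n) for all n,
# so f = Theta(g), yet the ratio f(n)/g(n) never settles on a limit.
def g(n):
    return n

def f(n):
    return (2 + (-1) ** n) * g(n)

for n in range(1, 9):
    print(n, f(n) / g(n))  # alternates: 1.0, 3.0, 1.0, 3.0, ...
```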
That said, when the ratio $f(n)/g(n)$ does converge (or tend to infinity), we can determine the asymptotic relationship between $f(n)$ and $g(n)$ from the value of the limit. In particular, as noted in the handy Asymptotics Cheat Sheet by Tom Leighton and Ronitt Rubinfeld (linked by bigworld12 in the comments above):
The definitions of the various asymptotic notations are closely related to the definition of a limit. As a result, $\lim_{n\to\infty} f(n)/g(n)$ reveals a lot about the asymptotic relationship between $f$ and $g$, provided the limit exists. The table below translates facts about the limit of $f/g$ into facts about the asymptotic relationship between $f$ and $g$.
$$\begin{array}{lllllll}
\lim_{n\to\infty} f(n)/g(n) \ne 0, \infty &\implies& f = \Theta(g) && \lim_{n\to\infty} f(n)/g(n) = 1 &\implies& f \sim g \\
\lim_{n\to\infty} f(n)/g(n) \ne \infty &\implies& f = O(g) && \lim_{n\to\infty} f(n)/g(n) = 0 &\implies& f = o(g) \\
\lim_{n\to\infty} f(n)/g(n) \ne 0 &\implies& f = \Omega(g) && \lim_{n\to\infty} f(n)/g(n) = \infty &\implies& f = \omega(g)
\end{array}$$
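As a quick sanity check of the table, here is a small script that estimates the limit of the ratio for an arbitrarily chosen pair; $f(n) = 3n^2 + 5n$ and $g(n) = n^2$ are illustrative choices, nothing more:

```python
# Estimate lim f(n)/g(n) numerically for a hand-picked example pair.
def f(n):
    return 3 * n**2 + 5 * n

def g(n):
    return n**2

for n in (10, 1_000, 1_000_000):
    print(n, f(n) / g(n))
# The ratio tends to 3, which is neither 0 nor infinity,
# so by the first row of the table f = Theta(g).
```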
Reading between the lines, I think you may be misunderstanding Big O analysis as being a replacement for benchmarking. It's not. An engineer still needs to benchmark their code if they want to know how fast it is. And indeed, an $\mathcal{O}(n)$ algorithm can be slower than an $\mathcal{O}(n^2)$ algorithm on real-world data sets.
But, as $n$ approaches infinity, the $\mathcal{O}(n^2)$ algorithm will ultimately be slower. For the sake of example, if we allow constant factors in our Big-O notation, then an "$\mathcal{O}(10n)$" algorithm will take more "steps" than an $\mathcal{O}(n^2)$ algorithm whenever $n$ is less than $10$; the two tie at $n = 10$, since $10 \cdot 10 = 10^2$.
But if $n$ is $100$, then the $\mathcal{O}(n^2)$ algorithm takes ten times as long; if $n$ is $1000$, a hundred times as long. As $n$ grows, so does the gap between the two, and that growing gap is exactly what we are analyzing when we use Big O analysis.
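If you'd like to see that crossover concretely, a few lines of Python tabulate the two hypothetical step counts:

```python
# Compare the hypothetical step counts 10*n and n^2 around the crossover.
for n in (1, 5, 10, 100, 1000):
    print(f"n={n:5d}   10n={10 * n:8d}   n^2={n * n:8d}")
# Below n = 10 the linear algorithm does more work; they tie at n = 10
# (10*10 = 10^2); past that point n^2 pulls ahead and the gap keeps growing.
```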
Hopefully that example also makes it clear why the constant factor is irrelevant and can be ignored. Whether it's ten, a hundred, a thousand, a million, or a quadrillion ultimately does not matter, because as $n$ approaches infinity, the $\mathcal{O}(n^2)$ algorithm is eventually going to be slower anyway. That's the power of quadratic growth over linear growth.
The crux of it is that Big O analysis is a mathematical concept which does NOT tell an engineer how fast something is going to run or how many steps it's going to take. That's what benchmarking is for. Big O analysis is still a very helpful tool in algorithm design, but if you're interested in exactly how long something takes to run, you'll need to look elsewhere.
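To make the "benchmark it" point concrete, here is a minimal sketch, using pure-Python insertion sort and merge sort as hypothetical stand-ins, showing that the $\mathcal{O}(n^2)$ algorithm can win on small inputs even though the $\mathcal{O}(n \lg n)$ one wins as $n$ grows:

```python
import random
import timeit

def insertion_sort(a):               # O(n^2) comparisons in the worst case
    a = a[:]                         # sort a copy so the input is reusable
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):                   # O(n log n) comparisons in the worst case
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

for n in (8, 64, 1024):
    data = [random.random() for _ in range(n)]
    t_ins = timeit.timeit(lambda: insertion_sort(data), number=20)
    t_mrg = timeit.timeit(lambda: merge_sort(data), number=20)
    print(f"n={n:5d}   insertion={t_ins:.5f}s   merge={t_mrg:.5f}s")
```

On a typical machine the quadratic insertion sort wins at $n = 8$ thanks to its smaller constant factors, while merge sort wins comfortably by $n = 1024$; the exact crossover depends on the hardware and the interpreter, which is precisely why you benchmark.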
Best Answer
Computing a simple, exact function that characterizes an algorithm's behavior on every input is difficult, so upper and lower bounds are used instead, along with probabilistic (average-case) methods.
The term 'asymptotic' is used because a bound is typically valid only for large $n$: an accurate formula for small $n$ is more complicated and not very interesting from a performance perspective.
To answer your question: one can prove that any comparison sort requires $\Omega(n \lg n)$ comparisons in the worst case, so its worst-case running time is $\Omega(n \lg n)$. Consequently, if you can show that a sorting algorithm (such as heapsort or merge sort) has an $O(n \lg n)$ upper bound on its worst-case running time, then its worst-case running time is in fact $\Theta(n \lg n)$, which is, by definition, an asymptotically tight bound (see the decision-tree lower bound for comparison sorting in Cormen, Leiserson & Rivest's "Introduction to Algorithms", along with the chapters on heapsort and merge sort).
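For reference, here is a sketch of the standard decision-tree argument behind that lower bound: a comparison sort on $n$ distinct keys must distinguish all $n!$ possible orderings, so its decision tree has at least $n!$ leaves, and a binary tree with at least $n!$ leaves has height $h$ satisfying
$$2^h \ge n! \quad\implies\quad h \ge \lg(n!) = \sum_{k=1}^{n} \lg k \;\ge\; \sum_{k=\lceil n/2 \rceil}^{n} \lg k \;\ge\; \frac{n}{2} \lg \frac{n}{2} = \Omega(n \lg n).$$
Since every execution of the sort traces one root-to-leaf path, the worst case performs at least $h = \Omega(n \lg n)$ comparisons.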
Quicksort, by contrast, has a worst-case running time of $\Theta(n^2)$, which grows faster than that bound, so quicksort is not asymptotically optimal in the worst case.
Note, however, that quicksort's average-case running time is $\Theta(n \lg n)$. (This, of course, assumes that all orderings of the $n$ inputs are equally likely.)
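If you want to see both behaviors side by side, here is a small comparison-counting sketch, using a naive first-element-pivot quicksort written only for illustration:

```python
import random

def quicksort_comparisons(xs):
    """Count pivot comparisons in a naive first-element-pivot quicksort.

    Each partition step charges one comparison per remaining element;
    the true count is a constant factor higher, which Theta ignores.
    """
    count = 0
    def qs(a):
        nonlocal count
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        count += len(rest)
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return qs(left) + [pivot] + qs(right)
    qs(xs)
    return count

n = 500  # kept small: sorted input drives the recursion depth to ~n
print("random input:", quicksort_comparisons(random.sample(range(n), n)))
print("sorted input:", quicksort_comparisons(list(range(n))))
# Typical output: a few thousand comparisons for the random case (~n lg n)
# versus 124750 = n(n-1)/2 for the already-sorted case (~n^2).
```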