Sense of big-O notation in real analysis

asymptotics, real-analysis

I was solving some problems with big-O notation. I know the definition and how to prove such statements, but I still can't "feel" what they actually mean in real analysis (as opposed to computer science, where they describe the complexity of algorithms).
For example, what do these expressions even mean: $2x-x^2 = O(x), x \to 0$, or $x \sin \frac{1}{x} = O(|x|), x \to 0$?
With little-o notation, $f = o(g), x \to x_0$, I understand that $f$ is an infinitesimal of higher order than $g$. But with $f = O(g), x \to x_0$, I somehow can't see the meaning behind the definition. I even plotted the graphs – still no clue.

Best Answer

A slightly nonstandard but equivalent way to define the statement

$ f= O(g)$ as $x\to 0$

is to interpret $O$ as a wildcard symbol for some unspecified function that is bounded near $x=0$; you can then read the expression $O(g)$ as the product $O \cdot g$ (the product of that bounded function and $g$). This interpretation streamlines many formal manipulations involving such asymptotic relations, especially once you establish the "fundamental identities" $O \cdot O \subset O$ and $o \cdot O \subset o$.
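For instance, the identity $O \cdot O \subset O$ lets you multiply two such relations termwise. A minimal sketch (the bounded-factor names $B_1, B_2$ are ad hoc notation, not standard):

```latex
% Assume f = O(x) and g = O(x) as x -> 0.
% Write f(x) = B_1(x) x and g(x) = B_2(x) x with B_1, B_2 bounded near 0.
f(x)\,g(x) = \underbrace{B_1(x)\,B_2(x)}_{\text{bounded}} \, x^2
           = O(x^2) \qquad (x \to 0).
% The step uses exactly O . O ⊂ O: a product of bounded functions is bounded.
```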

P.S. To make this notation rigorous, you can regard $O$ as the collection of all functions bounded near $x=0$, and then read the fundamental identities as statements about arbitrary elements chosen from these collections.

For example, $f(x) = 2x - x^2 = 2x\left(1- \frac{x}{2}\right)$ is a bounded multiple of $g(x)=2x$: the bounded factor $1- \frac{x}{2}$ belongs to the collection denoted by $O$, so $f = O(2x) = O(x)$ as $x \to 0$.
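The same picture handles the $x \sin \frac{1}{x}$ example from the question: there the bounded wildcard factor is essentially $\sin \frac{1}{x}$ itself. A sketch:

```latex
% For x ≠ 0 we have x = sgn(x)|x|, and |sin(1/x)| <= 1, so the factor
% multiplying |x| below is bounded and belongs to the collection O:
x \sin\tfrac{1}{x}
  = |x| \cdot \underbrace{\operatorname{sgn}(x)\,\sin\tfrac{1}{x}}_{|\,\cdot\,| \le 1}
  = O(|x|) \qquad (x \to 0).
```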
