Let $f \in X^*$, where $X^*$ is the dual space consisting of all bounded linear functionals on a normed linear space $X$, with the norm defined as $\|f\|_{X^{*}} = \sup_{\|x\| \leqslant 1} |f(x)|$. Why does $|f(x)| \leqslant \|f\|_{X^{*}} \|x\|_{X}$ hold?
[Math] Norm in a dual space
functional-analysis, inequality, normed-spaces
Related Solutions
There are (at least) two somewhat conflicting definitions of a dual space.
In functional analysis, we start with a topological vector space $V$, usually over real or complex numbers, and then the (continuous) dual is the space of all continuous functionals. For normed (in particular, Banach) spaces, a functional is continuous if and only if it is bounded.
In abstract algebra, we deal with vector spaces over arbitrary fields. Frequently, the vector spaces are finite-dimensional, in which case the continuous dual actually coincides with the algebraic dual; but for infinite-dimensional spaces (over real or complex numbers), the algebraic dual is usually far larger than the continuous dual*.
More abstractly, the difference of definition is a matter of perspective. In terms of category theory, you can think of the dual of a $K$-vector space $V$ as the space $\operatorname{Hom}(V,K)$, i.e. the set of morphisms from $V$ to the base field $K$ in your category. If the category is that of vector spaces, you obtain the algebraic dual. If the category is that of topological vector spaces, you obtain the continuous dual.
*${}$a pure vector space can be regarded as a discrete topological vector space. In this case, it is fairly easy to see that the continuous dual and the algebraic dual coincide.
I have a super-naive interpretation that I find helpful. I think of the elements of $X$ as column vectors, and the elements of the dual, $X^*$, as row vectors. You can always multiply a row vector by a column vector, and each row vector gives a map from $X$ to $\mathbb{R}$.
In finite dimensions there is no canonical transformation of column vectors into row vectors, unless you choose a dot product. (If you haven't seen this before, the usual transpose operation corresponds to the usual dot product.) The same thing happens in infinite dimensions -- there is no canonical transformation unless you are in a Hilbert space. (The one thing that's different is that in infinite dimensions there's usually no such transformation at all.)
The analogy is particularly clear for $L^p$ spaces. If you think of integration as fancy sums, then multiplying a row vector by a column vector in finite dimensions, $$ \sum_{i=1}^N v_i w_i, $$ becomes an integral for $f$ in $L^p$ and $g$ in its dual $L^q$, $$ \int f(x) g(x) dx. $$ The fact that they are all of this form is a non-trivial theorem, though, and there are elementary examples of function spaces where it fails.
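The pairing above can be checked numerically. Here is a minimal sketch (all names and grid choices are mine, purely illustrative): discretizing $\int f g \, dx$ on a uniform grid over $[0,1]$ turns it back into a finite sum, and Hölder's inequality $|\int f g| \leq \|f\|_p \|g\|_q$ with $1/p + 1/q = 1$ is exactly the statement that $g$ acts as a bounded functional on $L^p$ with norm at most $\|g\|_q$.

```python
import numpy as np

# Illustrative sketch: discretized check of the L^p / L^q duality pairing
# |∫ f g dx| <= ||f||_p ||g||_q  (Hölder's inequality).
rng = np.random.default_rng(0)
p, q = 3.0, 1.5            # conjugate exponents: 1/p + 1/q = 1
n = 10_000
dx = 1.0 / n               # uniform grid on [0, 1]
f = rng.standard_normal(n)
g = rng.standard_normal(n)

pairing = abs(np.sum(f * g) * dx)                  # ≈ |∫ f g dx|
norm_f = (np.sum(np.abs(f) ** p) * dx) ** (1 / p)  # ≈ ||f||_p
norm_g = (np.sum(np.abs(g) ** q) * dx) ** (1 / q)  # ≈ ||g||_q

assert pairing <= norm_f * norm_g
```

Of course this only probes one random pair; the theorem says the inequality holds for every $f$ and $g$, and that (for $1 \leq p < \infty$) every continuous functional on $L^p$ arises this way.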
Differentiation is more like a function from $X$ to $X$, so it's not a functional. You can define a functional by evaluating the derivative at a single point. Operators like differentiation on $L^p$ fail to qualify not only because they are not continuous, but because they aren't defined on all of the space. This is usually the more important failure, and if you need differentiation for your application, you define a different function space that makes differentiation a continuous operator defined on that whole space. (Alternatively, there is a property weaker than continuity, that of being a "closed linear operator", and you can develop a theory for these.)
Best Answer
Suppose $x\in X$ with $x\neq 0$. Then $y = x/\|x\|$ satisfies $\|y\| = 1$, and hence $|f(y)|\leq \|f\|$. But $f(y) = f(x/\|x\|) = f(x)/\|x\|$, so $|f(x)|/\|x\|\leq \|f\|$. This proves the inequality $|f(x)|\leq \|f\|\|x\|$ when $x\neq 0$. The inequality is trivial with $x = 0$.
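The normalization trick above can be seen concretely in $\mathbb{R}^n$ with the Euclidean norm, where a functional is $f(x) = a \cdot x$ for some vector $a$ and (by Cauchy–Schwarz) $\|f\| = \|a\|$. A small sketch with assumed illustrative data:

```python
import numpy as np

# Sketch of the normalization argument in R^n with the Euclidean norm.
# A functional is f(x) = a·x; by Cauchy-Schwarz its dual norm is ||a||.
rng = np.random.default_rng(1)
a = rng.standard_normal(5)      # represents the functional f(x) = a·x
norm_f = np.linalg.norm(a)      # ||f|| = sup_{||x||<=1} |f(x)| = ||a||

x = rng.standard_normal(5)
y = x / np.linalg.norm(x)       # the unit vector y = x/||x|| from the proof

assert abs(a @ y) <= norm_f + 1e-12                        # |f(y)| <= ||f||
assert abs(a @ x) <= norm_f * np.linalg.norm(x) + 1e-9     # |f(x)| <= ||f|| ||x||
```

The second assertion is just the first one rescaled by $\|x\|$, which is exactly the step $|f(x)|/\|x\| \leq \|f\|$ in the proof.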